Search results
Searched in keywords: models
Results found: 75
EN
Sensitivity analysis is used to find the key variables that have a significant effect on system reliability. For a product in the early design stage, it is impossible to collect sufficient samples, so probability-based reliability sensitivity analysis methods are difficult to apply because they require a probability distribution. As an alternative, intervals can be used because they require only a few samples. In this study, an effective global non-probabilistic sensitivity analysis based on an adaptive Kriging model is proposed. A globally accurate Kriging model is constructed to reduce the overall computational cost, and the global non-probabilistic sensitivity analysis method is then developed. In contrast to existing non-probabilistic sensitivity analysis methods, the proposed method is global, easy to use and does not require the probability distribution of the input variables. Its applicability is demonstrated via two examples.
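As a rough illustration of the interval (non-probabilistic) sensitivity idea described above, the sketch below builds a Kriging surrogate with scikit-learn and measures, for each input, how much the response interval shrinks when that input is fixed at its midpoint. The limit-state function, the input intervals and the sensitivity index are illustrative assumptions, not the formulation used in the paper.

```python
# A minimal sketch, assuming a hypothetical limit-state function g and input
# intervals: a Kriging (Gaussian process) surrogate is trained once, and the
# sensitivity of each input is taken as the relative reduction of the response
# interval when that input is fixed at its midpoint.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):                                   # hypothetical performance function
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1] + 0.2 * x[:, 2]

bounds = np.array([[-1.0, 1.0], [0.0, 2.0], [-2.0, 2.0]])   # input intervals

# A small training set is enough for the surrogate in this toy example.
X_train = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))
kriging = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
kriging.fit(X_train, g(X_train))

def response_interval_width(fixed=None):
    """Width of the predicted response interval, optionally with one input fixed."""
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5000, 3))
    if fixed is not None:
        X[:, fixed] = bounds[fixed].mean()
    y = kriging.predict(X)
    return y.max() - y.min()

w_all = response_interval_width()
for i in range(3):
    s_i = (w_all - response_interval_width(fixed=i)) / w_all
    print(f"x{i}: interval-based sensitivity = {s_i:.3f}")
```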
EN
Rainfall-induced progressive soil erosion of the compacted surface layer (SL) impedes the functioning of the cover system (CS) of landfills with a long expected design life (≈100 years). Existing soil erosion models have not been tested extensively for compacted soil with cracks and vegetation. This study evaluated the efficacy of three popular soil erosion models for estimating the soil loss of the compacted SL of a CS, which is useful for annual maintenance. The interactive effect of rainfall, vegetation and desiccation cracks on erosion of the compacted surface layer was investigated under both natural and simulated rainfall events for one year. Among the three, the Morgan, Morgan and Finney (MMF) model was found to be effective in predicting soil erosion of the compacted SL; however, it overestimated soil erosion when the vegetation cover exceeded 60%. The soil loss estimated with the Revised Universal Soil Loss Equation (RUSLE) and Water Erosion Prediction Project (WEPP) models was poor for high rainfall intensity (100 mm/h). The RUSLE and WEPP models overestimated soil erosion for low vegetation cover (≤3%) and underestimated it for vegetation cover >3%. The mechanisms of root reinforcement and of strength gained from root-water-uptake-induced soil suction, and their effect on soil loss mitigation, could not be adequately captured by the existing models for the compacted SL. Further studies are needed to improve the existing erosion models so that they incorporate the effects of desiccation and vegetation on soil loss from the compacted SL.
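For readers unfamiliar with the RUSLE model mentioned above, the short sketch below evaluates its multiplicative form A = R·K·LS·C·P; the factor values are invented for demonstration and are not taken from the study.

```python
# Illustrative use of the RUSLE multiplicative form A = R * K * LS * C * P;
# all factor values below are made up for demonstration, not taken from the study.
def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A from rainfall erosivity R, soil erodibility K,
    slope length-steepness LS, cover-management C and support-practice P."""
    return R * K * LS * C * P

# Denser vegetation lowers the cover-management factor C and hence the estimate.
for label, C in [("dense vegetation", 0.05), ("sparse vegetation", 0.45)]:
    A = rusle_soil_loss(R=900.0, K=0.030, LS=1.2, C=C, P=1.0)
    print(f"{label}: A ≈ {A:.2f} t/ha/yr")
```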
PL
Konstrukcje obiektów zabytkowych mają najczęściej formę elementów murowanych. Najsłabszym ich składnikiem jest zaprawa murarska, zwłaszcza w kamiennych fundamentach, często wykonywanych w przeszłości. Celem artykułu jest analiza zachowania się fundamentu, skupiona na modelowaniu zaprawy jako spoiny łączącej kamienie w charakterystycznym średniowiecznym fundamencie. Analizę numeryczną przeprowadzono na przykładzie zlokalizowanego w Polsce obiektu rzeczywistego, w którym fundament wymagał wzmocnienia ze względu na zły stan techniczny i planowany wzrost obciążenia przekazywanego na fundament. Pod uwagę brane są różne rodzaje zapraw, w tym cementowe, cementowo-wapienne, wapienne i gipsowe. Wyniki mogą świadczyć o przydatności i zaletach tego podejścia do fundamentów zabytkowych budynków, a także innych elementów konstrukcji murowych.
EN
Structures of historical buildings usually take the form of masonry elements. The weakest component of such elements is the mortar, especially in stone foundations, which were often used in the past. The aim of this article is to analyse the behaviour of such a foundation, with a focus on the modeling of mortar as a joint connecting stones in a characteristic medieval foundation. Different types of mortar were examined: cement, cement-lime, lime and gypsum. A numerical analysis was carried out on the example of an existing structure located in Poland, where the foundation needed reinforcement due to its poor condition and a planned load increase. The obtained results may provide some evidence for the usefulness and advantages of this approach to dealing with foundations of historical buildings, as well as other elements of old masonry structures.
PL
Analiza planimetryczna jest dobrze znaną i powszechnie stosowaną metodą służącą do określania udziału danego składnika na obszarze analizowanej powierzchni preparatu. W geologii wykorzystuje się ją do ustalenia zarówno składu mineralnego, jak i macerałowego, a otrzymane wyniki są pomocne w rozwiązywaniu szerokiego spektrum problemów badawczych. Celem niniejszej pracy było sprawdzenie, w jaki sposób uzyskany wynik analizy planimetrycznej zależy od przyjętej gęstości siatki pomiarowej, co pozwoli na jej dostosowanie do konkretnych sytuacji spotykanych w próbkach geologicznych i na optymalizację stosunku dokładności wyniku do czasu analizy. Badania podzielono na dwa etapy – w pierwszym pracowano na modelach, w drugim oparto się na próbkach. W etapie pierwszym stworzono wirtualną sieć punktów pomiarowych o wymiarach 100 na 100 punktów, co łącznie dawało 10 000 punktów. Po stworzeniu modelu siatki pomiarowej ustalono 102 scenariusze, różniące się zawartością składnika A, poddawanego analizie. Była to zawartość na poziomie 0,1%, 0,5% oraz od 1% do 100%. W każdym z przyjętych scenariuszy przeprowadzono 100 losowań, tak aby układ punktów na siatce pomiarowej odpowiadający składnikowi A był przypadkowy. Następnie stopniowo zmniejszano liczbę punktów pomiarowych z pierwotnych 10 000 do 100 i każdorazowo obliczano zmodyfikowany udział składnika A. Uzyskane wartości średniego udziału składnika A w większości przypadków jedynie nieznacznie różniły się od pierwotnie założonego udziału, czego nie można powiedzieć o wartościach minimalnych i maksymalnych – tu obserwowano zmienność w szerszym przedziale. W etapie drugim wykorzystano 3 fragmenty węgla o różnym stopniu skomplikowania składu macerałowego, na które narzucono siatkę pomiarową o wymiarach 100 na 100 punktów. Pierwotną gęstość siatki stopniowo zmniejszano (do 144 punktów) i każdorazowo obliczano skład macerałowy. Wyniki uzyskane zarówno z modeli, jak też z próbek wskazują na wyraźny trend zmniejszenia dokładności wraz ze zmniejszeniem gęstości punktów pomiarowych. Uzyskane wyniki przeanalizowano również pod kątem ustalonych kryteriów akceptacji, za które przyjęto zawartość składnika różniącą się od jego udziału opartego na siatce 10 000 punktów o maksymalnie 5%, 10% lub 30% wartości oryginalnej. Wydaje się, że analiza związku pomiędzy dokładnością otrzymanego wyniku a czasem przeprowadzenia analizy wskazuje, że wybór siatki pomiarowej opartej na 500 punktach jest optymalny.
EN
Planimetric analysis is a well-known and commonly used method for determining the content of a given component over the analyzed surface. In geology, it is used to determine both the mineral and the maceral composition, and its results are helpful in solving a wide range of research problems. The aim of this study was to investigate the relationship between the result of planimetric analysis and the density of the measurement grid, so that the grid can be adapted to specific situations in geological samples and the ratio of accuracy to analysis time can be optimized. The research was divided into two stages: models were used in the first stage, while in the second, coal samples were investigated. In the first stage, a virtual grid of 100 by 100 measurement points was created, giving a total of 10 000 points. After creating the measurement grid model, 102 scenarios were established, differing in the content of the analyzed component A (0.1%, 0.5% and from 1 to 100%). In each scenario, 100 random draws were performed so that the arrangement of points corresponding to component A on the measurement grid was random. The number of measurement points was then gradually reduced from the original 10 000 to 100, and the content of component A was recalculated each time. In most cases the average content of component A differed only slightly from the original value, which cannot be said of the minimum and maximum values, where a wider range of results was observed. In the second stage, 3 coal samples of varying maceral-composition complexity were investigated with a measurement grid of 100 by 100 points. The original grid density was gradually reduced (down to 144 points) and the maceral composition was recalculated each time. Results obtained from the models as well as from the samples show a clear trend of decreasing accuracy with decreasing density of the measuring grid. The results were also analyzed against acceptance criteria defined as a component content differing from the value based on the 10 000-point grid by no more than 5%, 10% or 30% of the original value. The analysis of the relationship between the accuracy of the results and the time of analysis indicates that a measurement grid of about 500 points is optimal.
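The model stage described above can be illustrated with a small Monte Carlo sketch: a virtual grid with a known share of component A is point-counted with progressively sparser grids and the spread of the estimates is reported. The grid sizes and the assumed content below are illustrative.

```python
# A small re-creation of the model stage: a virtual grid with a known share of
# component A is point-counted with progressively sparser grids. The grid sizes
# and the assumed 10% content are illustrative.
import numpy as np

rng = np.random.default_rng(1)
TRUE_SHARE = 0.10                 # assumed content of component A
N_FULL = 100 * 100                # 100 x 100 virtual grid

for n_points in (10_000, 2_500, 500, 100):
    estimates = []
    for _ in range(100):          # 100 random arrangements, as in the study
        grid = rng.random(N_FULL) < TRUE_SHARE
        counted = rng.choice(grid, size=n_points, replace=False)
        estimates.append(counted.mean())
    estimates = np.array(estimates)
    print(f"{n_points:>6} points: mean = {estimates.mean():.3f}, "
          f"min = {estimates.min():.3f}, max = {estimates.max():.3f}")
```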
PL
Artykuł jest drugim z cyklu artykułów o energochłonności operacji rozdrabniania i problemach modelowania tej operacji. W pracy przedstawiono uogólnioną postać klasycznych hipotez rozdrabniania i sposoby szacowania parametrów w tych modelach. Omówiono również zagadnienie standaryzacji badań za pomocą różnych technik eksperymentalnych z przykładami własnych badań w tym zakresie.
EN
The paper is the second article in a series on the energy consumption of comminution and the problems of modeling this operation for practical purposes. A generalised form of the classical comminution hypotheses and methods of estimating their parameters are presented. Moreover, the issue of standardising tests performed with various experimental techniques is discussed, including selected results of our own experiments.
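A minimal sketch of the classical comminution hypotheses unified by the generalised law dE = −C·dx/xⁿ (Rittinger n = 2, Kick n = 1, Bond n = 1.5) is given below; the work index, constants and particle sizes are illustrative, not data from the series.

```python
# Classical special cases of the generalised comminution law dE = -C * dx / x**n;
# the work index, constants and particle sizes below are illustrative only.
import math

def bond_energy(work_index, F80, P80):
    """Bond's hypothesis (n = 1.5): specific energy [kWh/t] for feed F80 -> product P80 [um]."""
    return 10.0 * work_index * (1.0 / math.sqrt(P80) - 1.0 / math.sqrt(F80))

def rittinger_energy(C, F, P):
    """Rittinger's hypothesis (n = 2): energy proportional to the new surface created."""
    return C * (1.0 / P - 1.0 / F)

def kick_energy(C, F, P):
    """Kick's hypothesis (n = 1): energy proportional to the size-reduction ratio."""
    return C * math.log(F / P)

print(f"Bond estimate: {bond_energy(work_index=14.0, F80=10_000, P80=150):.2f} kWh/t")
```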
EN
The industrial sector of the Polish economy plays an important role in ensuring the socio-economic development of the country. Polish industry accounts for 24.1% of the country's employed population and 25.1% of its GVA. The article aims to model the structural parameters of the Polish industrial sector according to the criterion of increasing the level of product innovation, based on a comprehensive assessment of Polish industry performance in the regional context. The proposed method focuses on assessing the industrial sector at the macro and meso levels using a set of indicators for investment, innovation, labour activity, and profitability. Correlation and regression analysis was used to test hypotheses about the impact of product innovation on employment and wages in industry. To optimise the structure of the Polish industrial sector, an economic-mathematical model was developed and solved with linear programming. The objective function of this model is the level of product innovation at which the gross average monthly wage of Polish industry workers doubles (to the EU average). The simulation results, based on data from the Central Statistical Office of Poland, provide an analytical basis for selecting industrial policy benchmarks for Poland.
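A hedged sketch of such a linear-programming step is shown below, using scipy.optimize.linprog; the four sub-sectors, their innovation coefficients, wage effects and bounds are invented for illustration and do not reproduce the paper's model.

```python
# Hedged sketch of the linear-programming step with scipy.optimize.linprog; the
# four sub-sectors, their innovation coefficients, wage effects and bounds are
# invented for illustration and do not reproduce the paper's model.
import numpy as np
from scipy.optimize import linprog

innovation = np.array([0.12, 0.35, 0.20, 0.08])   # product-innovation level per unit share
wage_gain  = np.array([0.80, 2.60, 1.50, 0.60])   # relative wage effect per unit share

c = -innovation                     # maximise innovation -> minimise its negative
A_ub, b_ub = [-wage_gain], [-2.0]   # structure must at least double the average wage
A_eq, b_eq = [np.ones(4)], [1.0]    # sub-sector shares sum to one
bounds = [(0.05, 0.60)] * 4         # each share between 5% and 60%

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("optimal shares:", np.round(res.x, 3), "| innovation level:", round(-res.fun, 3))
```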
EN
Artificial intelligence (AI) is changing many areas of technology in the public and private spheres, including the economy. This report reviews issues related to machine modelling and simulation concerning the further development of mechanical devices and their control systems as part of novel projects under the Industry 4.0 paradigm. The challenges faced by industry have generated novel technologies used in the construction of dynamic, intelligent, flexible and open applications capable of working in real-time environments. Thus, in an Industry 4.0 environment, the data generated by sensor networks require AI/CI to apply close-to-real-time data analysis techniques. In this way industry can face both fresh opportunities and challenges, including predictive analysis using computer tools capable of detecting patterns in the data and using those patterns to formulate predictions.
PL
Artykuł jest krótkim przeglądem rozwoju nauki o rozdrabnianiu skał w kierunku zwiększenia efektywności procesów produkcyjnych na drodze doskonalenia narzędzi analitycznych do projektowania technologii, doboru odpowiedniej wielkości maszyn oraz kontroli procesów w instalacjach przemysłowych przeróbki kruszyw mineralnych.
EN
The article is a short review of the development of the science of comminution towards increasing the efficiency of production processes by improving analytical tools used for designing technologies, selecting the adequate size of machines and process control in industrial installations for mineral aggregate processing.
EN
The paper develops and offers an effective algorithm of polynomial computational complexity for solving block-symmetric tasks in the design of modular data-processing block schemes. Currently, there are a large number of technologies and tools that allow information systems of any class and purpose to be created. To solve the problems of designing effective information systems, various models and methods are used, in particular mathematical discrete programming methods. At the same time, such tasks are known to have exponential computational complexity and cannot always be used to solve practical problems. In this regard, there is a need to develop a new class of models and methods that allow applied discrete programming problems of large dimensions to be solved. The work develops and proposes block-symmetric models and methods as a new class of discrete programming problems that make it possible to formulate and solve applied problems from various spheres of human activity. The use of the developed models and methods for computer-aided design of information systems (IS) is also considered.
EN
Solar radiation (Rs) is an essential input for estimating reference crop evapotranspiration, ETo. An accurate estimate of ETo is the first step in determining the water demand of field crops. The objective of this study was to assess the accuracy of fifteen empirical solar radiation (Rs) models and determine their effects on ETo estimates for three sites in a humid tropical environment (Abakaliki, Nsukka, and Awka). Meteorological data from the archives of NASA (from 1983 to 2005) were used to derive empirical constants (calibration) for the different models at each location, while data from 2006 to 2015 were used for validation. The results showed an overall improvement when comparing measured Rs with Rs determined using the original constants and Rs determined using the new constants. After calibration, the Swartman–Ogunlade (R2 = 0.97) and Chen 2 models (RMSE = 0.665 MJ∙m–2∙day–1) performed best, while the Chen 1 (R2 = 0.66) and Bristow–Campbell models (RMSE = 1.58 MJ∙m–2∙day–1) performed worst in estimating Rs at Abakaliki. At the Nsukka station, the Swartman–Ogunlade (R2 = 0.96) and Adeala models (RMSE = 0.785 MJ∙m–2∙day–1) performed best, while the Hargreaves–Samani (R2 = 0.64) and Chen 1 models (RMSE = 1.96 MJ∙m–2∙day–1) performed worst. The Chen 2 (R2 = 0.98) and Swartman–Ogunlade models (RMSE = 0.43 MJ∙m–2∙day–1) performed best, while the Hargreaves–Samani (R2 = 0.68) and Chen 1 models (RMSE = 1.64 MJ∙m–2∙day–1) performed worst in estimating Rs at Awka. For estimating ETo, the Adeala (R2 = 0.98) and Swartman–Ogunlade models (RMSE = 0.064 MJ∙m–2∙day–1) performed best at Awka, the Swartman–Ogunlade (R2 = 0.98) and Chen 2 models (RMSE = 0.43 MJ∙m–2∙day–1) performed best at Abakaliki, while the Angstrom–Prescott–Page (R2 = 0.96) and El-Sebaii models (RMSE = 0.0908 mm∙day–1) performed best at the Nsukka station.
PL
Promieniowanie słoneczne (Rs) stanowi istotny czynnik w trakcie określania ewapotranspiracji potencjalnej (ETo) terenów uprawnych. Dokładne oszacowanie ETo jest pierwszym etapem ustalania zapotrzebowania na wodę pól uprawnych. Celem tego badania była ocena dokładności piętnastu empirycznych modeli Rs i oznaczenie wpływu tego parametru na szacunki ewapotranspiracji w trzech stanowiskach wilgotnego środowiska tropikalnego (Abakaliki, Nsukka i Awka). Wykorzystano archiwalne dane meteorologiczne NASA z lat 1983 do 2003 do wyprowadzenia empirycznych stałych (kalibracja) dla różnych modeli w każdej z trzech lokalizacji, a dane z lat 2006 do 2015 posłużyły do oceny. Wyniki wskazują na większą zgodność mierzonego Rs i oszacowanych wartości promieniowania wyznaczonego z zastosowaniem nowych stałych. Po kalibracji modele Swartmana–Ogunladego (R2 = 0,97) i Chena 2 (RMSE = 0,665 MJ∙m–2∙d–1) dawały najlepsze wyniki, podczas gdy modele Chena 1 (R2 = 0,66) i Bristowa–Campbella (RMSE = 1,58 MJ∙m–2∙d–1) były najmniej dokładne w wyznaczaniu Rs w Abakaliki. W stacji Nsukka modele Swartmana–Ogunladego (R2 = 0,96) i Adeali (RMSE = 0,785 MJ∙m–2∙d–1) dawały najlepiej dostosowane wyniki oszacowania Rs, natomiast modele Hargreavesa–Samaniego (R2 = 0,64) i Chena 1 (RMSE = 1,96 MJ∙m–2∙d–1) najmniej. Modele Chena 2 (R2 = 0,98) i Swartmana–Ogunladego (RMSE = 0,43 MJ∙m–2∙d–1) okazały się najlepsze, a modele Hargreavesa–Samaniego (R2 = 0,68) i Chena 1 (RMSE = 1,64 MJ∙m–2∙d–1) – najgorsze w ustalaniu promieniowania w stanowisku Awka. W oszacowaniach ETo modele Adeali (R2 = 0,98) i Swartmana–Ogunladego (RMSE = 0,064 MJ∙m–2∙d–1) dawały najlepsze wyniki w przypadku danych ze stanowiska Awka, a modele Swartmana–Ogunladego (R2 = 0,98) i Chena 2 (RMSE = 0,43 MJ∙m–2∙d–1) okazały się najlepsze w przypadku danych ze stanowiska Abakaliki. W odniesieniu do stanowiska Nsukka najlepsze wyniki uzyskano, stosując modele Angstroma–Prescotta–Page’a (R2 = 0,96) i El-Sebaii (RMSE = 0,0908 mm∙d–1).
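The calibration-and-validation procedure described in the two abstracts above can be sketched for a single model, the Angstrom–Prescott equation Rs = Ra·(a + b·n/N); the data below are synthetic, not the NASA records used in the study.

```python
# Calibration and validation of one empirical model, the Angstrom-Prescott
# equation Rs = Ra * (a + b * n/N), on synthetic data (not the NASA records).
import numpy as np

rng = np.random.default_rng(2)
Ra = rng.uniform(25.0, 38.0, 200)                 # extraterrestrial radiation [MJ m-2 day-1]
sunshine = rng.uniform(0.2, 0.9, 200)             # relative sunshine duration n/N
Rs_measured = Ra * (0.25 + 0.50 * sunshine) + rng.normal(0.0, 0.8, 200)

# Least-squares calibration of a and b on the first half of the record.
X = np.column_stack([Ra, Ra * sunshine])
(a, b), *_ = np.linalg.lstsq(X[:100], Rs_measured[:100], rcond=None)

# Validation on the second half: RMSE and R2, as reported per model and site.
pred = X[100:] @ np.array([a, b])
resid = Rs_measured[100:] - pred
rmse = float(np.sqrt(np.mean(resid ** 2)))
r2 = 1.0 - resid.var() / Rs_measured[100:].var()
print(f"a = {a:.3f}, b = {b:.3f}, RMSE = {rmse:.3f} MJ m-2 day-1, R2 = {r2:.3f}")
```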
EN
The article identifies the optimal location of the warehouse distribution centre for Slovenian companies in the international environment. The process of location selection takes into account a series of interconnected factors, including flows of goods between countries; the level of development of the transport system and transport infrastructure; the number of transport companies; labour costs and labour productivity; and the tax benefits existing in each country. Scientific literature mentions various methods for choosing a warehouse location, which differ in complexity and in the use of different qualitative and quantitative factors. However, the methods discussed have a disadvantage in that they use the current input variables when defining the optimal location. Choosing the optimal warehouse location is an important long-term logistics process, which should consider the fact that the environment in which companies operate is constantly changing. Using the proposed approach, future trends in the international environment are presented, which enables a better choice of warehouse location in the long run. Through this approach, companies can save on logistic costs, while also providing better quality logistics services. The analysis represents a starting point for deciding the location of a warehouse, but does not constitute a complete set of guidelines for companies to follow, as the choice of a particular location is dependent upon the complexity of the international environment in which a company operates.
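One simple way to combine the interconnected location factors listed above is a weighted factor-rating comparison, sketched below; the countries, factor weights and scores are hypothetical and only illustrate the kind of multi-factor evaluation the article discusses.

```python
# Hypothetical weighted factor-rating comparison of candidate warehouse locations;
# the countries, factor weights and 1-10 scores are invented and only illustrate
# the multi-factor evaluation discussed in the article.
factors = {"goods flows": 0.30, "transport infrastructure": 0.25,
           "labour cost and productivity": 0.25, "tax environment": 0.20}

candidates = {            # scores per factor, same order as the weights above
    "Country A": [8, 7, 6, 5],
    "Country B": [6, 8, 7, 7],
    "Country C": [7, 6, 8, 6],
}

for name, scores in candidates.items():
    total = sum(w * s for w, s in zip(factors.values(), scores))
    print(f"{name}: weighted score = {total:.2f}")
```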
EN
For over a decade we have observed intensive efforts by research institutions and industrial consortia to extend the flexibility and automation of transport network control, also known under the term network programmability. A key aspect of every programming interface is its ability to evolve, but also its sensitivity to future modifications. As indicated in past work [4], in the specific context of optical transport networks the complexity and granularity of the maintained objects also become an important criterion. The objective of this paper is to share the results of a proof of concept of optical transport network control conducted with the highly flexible and easy-to-modify YANG-based RESTCONF protocol. Deriving configuration objects and protocol message fields from standardized YANG models makes the programming interface easy to understand and modify. In addition, the models were selected to reflect the network-level control typical of carrier-class deployments, as opposed to the device-level control more common in data centres.
PL
Od ponad dekady obserwuje się wzmożone wysiłki instytutów badawczych i konsorcjów przemysłowych zmierzające w kierunku uelastycznienia i zautomatyzowania sterowania sieciami transportowymi, szeroko znanego również pod pojęciem programowalności sieci. Podstawową cechą każdego interfejsu programowego jest jego zdolność ewoluowania oraz podatność na przyszłe modyfikacje. Jak wykazano w poprzednich pracach [4], dla szczególnego przypadku optycznych sieci transportowych ważnym kryterium staje się również złożoność i szczegółowość zarządzanych obiektów. Celem artykułu jest przedstawienie praktycznych rezultatów studium wykonalności sterowania optyczną siecią transportową za pomocą protokołu RESTCONF opartego na języku specyfikacji YANG. Otrzymywanie obiektów konfiguracyjnych i pól wiadomości protokołu z ustandaryzowanych modeli YANG czyni interfejs programowy łatwym do przyswojenia i ewentualnej modyfikacji. Co ważne, modele zostały dobrane tak, aby odzwierciedlić typową dla klasycznych instalacji operatorskich kontrolę na poziomie sieci w odróżnieniu od bardziej popularnej dla segmentu data center kontroli na poziomie urządzeń.
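A hedged sketch of the kind of RESTCONF (RFC 8040) exchange such a proof of concept relies on is shown below; the controller address, credentials and the YANG path (the ietf-network topology module) are placeholders, not the setup from the paper.

```python
# Sketch of a RESTCONF (RFC 8040) exchange of the kind used in such a proof of
# concept; the controller URL, credentials and the ietf-network path are
# placeholders, not the actual setup from the paper.
import requests

BASE = "https://controller.example:8443/restconf/data"       # hypothetical endpoint
HEADERS = {"Accept": "application/yang-data+json",
           "Content-Type": "application/yang-data+json"}
AUTH = ("admin", "admin")                                     # placeholder credentials

# Read a network-level topology resource whose structure comes from a standard YANG model.
resp = requests.get(f"{BASE}/ietf-network:networks", headers=HEADERS,
                    auth=AUTH, verify=False)
print(resp.status_code, resp.text[:200])

# Configuration changes are JSON bodies that follow the model, e.g. a PATCH to a
# (hypothetical) node resource of an optical network instance:
payload = {"ietf-network:node": [{"node-id": "roadm-1"}]}
requests.patch(f"{BASE}/ietf-network:networks/network=otn/node=roadm-1",
               headers=HEADERS, auth=AUTH, json=payload, verify=False)
```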
EN
Intelligent decision making requires the consideration of current contextual information. The article is devoted to the construction of formal models for the representation and use of contextual information in decision-making in the field of employment. The paper analyses existing approaches to defining the concept of context at the conceptual level. The results of a comparison of formal context models, taking into consideration the requirements of employment business processes, are presented. The ontological approach is selected as the basis for the specification of the contextual models. The paper presents a formal representation of the context models for business operations in the employment sector. A model of contextual graphs was developed for refining the context of employment business operations.
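A minimal sketch of a contextual graph of the kind mentioned above is given below: internal nodes test contextual elements, branches carry their values, and leaves are actions. The employment-related contexts and actions are invented to illustrate the structure, not taken from the paper.

```python
# A minimal contextual graph for an employment business operation: internal nodes
# test contextual elements, branches carry their values, leaves are actions.
# All node names, contexts and actions are invented to illustrate the structure.
GRAPH = {
    "start":           ("has_profile", {True: "check_work_mode", False: "collect_profile"}),
    "check_work_mode": ("work_mode", {"remote": "match_remote", "onsite": "match_local"}),
}
ACTIONS = {
    "collect_profile": "ask the candidate to complete a profile",
    "match_remote":    "match against remote vacancies",
    "match_local":     "match against vacancies in the candidate's region",
}

def traverse(context, node="start"):
    """Refine the context step by step until an action (leaf node) is reached."""
    while node in GRAPH:
        element, branches = GRAPH[node]
        node = branches[context[element]]
    return ACTIONS[node]

print(traverse({"has_profile": True, "work_mode": "remote"}))
print(traverse({"has_profile": False}))
```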
Wielokryterialna analiza zdarzeń drogowych (Multi-criteria analysis of road incidents)
PL
W pracy zaprezentowano problematykę bezpieczeństwa ruchu drogowego ze szczególnym uwzględnieniem ujęcia antropotechnicznego, czyli w opisie: użytkownik – pojazd – droga, a więc elementów występujących w większości prowadzonych analiz związanych z problematyką bezpieczeństwa ruchu drogowego. Podjęto próbę utworzenia opisu i zbadania systemu bezpieczeństwa ruchu drogowego w ujęciu antropotechnicznym; przypisano do poszczególnych elementów tego systemu (człowiek – pojazd ‒ droga) wyróżniki jakościowe opisujące stan składowych tego systemu i nadano wyselekcjonowanym i ważnym wyróżnikom wartości liczbowe, które następnie z wykorzystaniem metod analizy statystycznej umożliwiły zbudowanie modelu bezpieczeństwa ruchu (BRD). Przytoczono dane statystyczne z wypadków drogowych w poprzednich latach z wyróżnieniem poszczególnych elementów systemu antropotechnicznego.
EN
The article presents the issues of road safety with particular emphasis on the anthropotechnical approach, i.e. the user ‒ vehicle ‒ road description, whose elements occur in most analyses related to road safety. An attempt was made to describe and study the road safety system in anthropotechnical terms: qualitative characteristics describing the state of the system's components were assigned to its individual elements (human ‒ vehicle ‒ road), the selected, important characteristics were given numerical values, and these were then used, with statistical analysis methods, to build a road safety (BRD) model. Statistical data on road accidents from previous years are cited, with the particular elements of the anthropotechnical system distinguished.
EN
Results from a series of five surveys among five groups of international climate scientists about their evaluation of elements of climate models and of climate change are presented. The first survey was done in 1996, the latest in 2015/16; thus, our snapshots of the opinions of climate scientists cover 20 years. The results show a strong increase in agreement on the manifestation of climate change, i.e., that the warming is real and not an artefact of changing measuring and reporting practices, and on the attribution of this ongoing climate change to anthropogenic causes. On the other hand, the evaluation of the climate models has changed little in the past 20 years: there are still significant reservations about the models' ability to incorporate clouds and to describe rainfall. Evidently the growing conviction of ongoing man-made climate change is based on a variety of explanations, with modelling not being the predominant line of evidence. We suggest that it may be the repeated assessments by the IPCC, based on paleoclimatic evidence and stringent statistical analysis of the instrumental record, which have led to the growing consensus on the warming and its causation. We stress that the presented results concern the opinions of climate scientists with a rather broad background. Our results do not assess whether the opinions of the surveyed scientists are “valid” or “right”, but they recognize the character of science as a social process.
Modeling of gas consumption in the city
EN
Based on the data collected over a two year time period, which included temperature, wind speed and gas consumption during the day, the effects of weather factors on gas consumption in the city have been established with the use of multiple regression. The impact of a particular month, day (dummy variable) or holiday of a year on the gas consumption has also been determined. The models of linear regression and artificial neural networks have been constructed for determining the gas consumption. An attempt has been made to find the best regression models and compare them to the neural network models with the use of mean absolute percentage error (MAPE).
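The comparison described above can be sketched as follows: a linear regression and a small neural network are fitted to synthetic daily gas-consumption data and judged by MAPE; the predictors and coefficients are invented, not the city records.

```python
# Synthetic re-creation of the comparison: linear regression vs. a small neural
# network for daily gas consumption, judged by MAPE. The predictors (temperature,
# wind speed, holiday dummy) and coefficients are invented, not the city records.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 730                                           # two years of daily records
temp = rng.normal(8.0, 7.0, n)
wind = rng.uniform(0.0, 12.0, n)
holiday = rng.integers(0, 2, n)
gas = 1500.0 - 30.0 * temp + 5.0 * wind - 80.0 * holiday + rng.normal(0.0, 50.0, n)

X = np.column_stack([temp, wind, holiday])
X_tr, X_te, y_tr, y_te = X[:600], X[600:], gas[:600], gas[600:]

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

models = {
    "linear regression": LinearRegression(),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(16,),
                                                 max_iter=5000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: MAPE = {mape(y_te, model.predict(X_te)):.2f}%")
```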
EN
This paper reports the application of a poly(azomethinethioamide) (PATA) resin bearing pendent chlorobenzylidine rings for the removal of heavy metal ions such as Zn(II) and Ni(II) from aqueous solutions by adsorption. Kinetic, equilibrium and thermodynamic models of Zn(II) and Ni(II) adsorption were applied by considering the effect of contact time, initial metal ion concentration and temperature, respectively. The parameters influencing adsorption were optimized for the maximum removal of metal ions. The kinetic results followed the pseudo-second-order model, based on the correlation coefficient (R2) values and the close agreement between the experimental and calculated equilibrium adsorption capacities. The removal mechanism of metal ions by PATA was explained with the Boyd kinetic model, the Weber and Morris intraparticle diffusion model and the Shrinking Core Model (SCM). The equilibrium results followed the Freundlich model, based on the R2 values and error functions. The maximum monolayer adsorption capacities of PATA for Zn(II) and Ni(II) removal were found to be 105.4 mg/g and 97.3 mg/g, respectively. A thermodynamic study showed that the adsorption process was feasible, spontaneous, and exothermic in nature.
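The pseudo-second-order kinetic fit reported above is commonly performed on the linearised form t/qt = 1/(k₂·qe²) + t/qe; the sketch below applies it to a synthetic contact-time series, not the Zn(II)/Ni(II) measurements from the paper.

```python
# Pseudo-second-order fit on the linearised form t/qt = 1/(k2*qe**2) + t/qe,
# applied to a synthetic contact-time series (not the Zn(II)/Ni(II) data).
import numpy as np

t  = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0, 180.0])    # contact time [min]
qt = np.array([31.0, 52.0, 72.5, 88.0, 93.5, 97.0, 99.0, 101.0])    # uptake [mg/g]

# Linear regression of t/qt against t: slope = 1/qe, intercept = 1/(k2*qe^2).
slope, intercept = np.polyfit(t, t / qt, 1)
qe_calc = 1.0 / slope
k2 = 1.0 / (intercept * qe_calc ** 2)
r2 = np.corrcoef(t, t / qt)[0, 1] ** 2
print(f"qe,calc = {qe_calc:.1f} mg/g, k2 = {k2:.4f} g/(mg*min), R2 = {r2:.3f}")
```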
Logistyka | 2015 | nr 4 | 7677–7681, CD2
PL
W pracy przeprowadzono analizę możliwości wykorzystania modeli obliczania ryzyka, które stosuje się w zadaniach ubezpieczeniowych, do oceny możliwości powstania zdarzeń nadzwyczajnych w środowisku fizycznym. Dzięki określeniu wielkości ryzyka powstania zdarzeń nadzwyczajnych możliwe jest efektywne rozwiązywanie powstających zadań, w tym zadań logistycznych, przy organizacji prac ratowniczych.
EN
This work analyzes the possibility of using risk-evaluation models, of the kind applied in insurance, to estimate the possibility of an emergency in the physical environment. Thanks to the evaluation of such risk values, it is possible to solve effectively the problems related to the organization of rescue operations when emergencies occur. Taking these estimates into account also allows logistic problems related to providing the necessary supplies during rescue operations to be solved effectively.
PL
Efektywność kształcenia jest zagadnieniem interdyscyplinarnym i wieloparametrycznym. Integruje ono w sobie podejścia: pedagogiczne, społeczne i ekonomiczne. Uwzględnia społeczne potrzeby i rozwój indywidualny jednostki oraz odnosi się do skuteczności procesu dydaktycznego. Potrzeba badania efektywności kształcenia wynika z istoty edukacji. Proces dydaktyczny jest działalnością planową, polegającą na świadomym realizowaniu wyznaczonych celów kształcenia przy odpowiednio zaplanowanych treściach, metodach i środkach. W artykule przedstawiono wybrane ujęcia i modele teoretyczne wykorzystywane w ewaluacji procesów kształcenia. Badanie oceny efektywności kształcenia służy stwierdzeniu, w jakim stopniu cele kształcenia zostały osiągnięte, czyli jakie zmiany zaszły w zakresie wiedzy, umiejętności i postaw osób uczących się. Wyniki uzyskane w procesach ewaluacyjnych służą udoskonaleniu procesu kształcenia.
EN
The effectiveness of education is an interdisciplinary and multi-parameter problem. It combines pedagogical, social and economic approaches; it takes into account social needs and the individual development of the learner, and refers to the effectiveness of the didactic process. The need to study such effectiveness comes directly from the nature of education itself: the didactic process is a planned activity, consisting in the conscious realization of selected educational goals with properly planned content, methods and means of teaching. The article presents selected approaches and theoretical models used to evaluate educational processes. Evaluating the effectiveness of education serves to establish the degree to which the educational goals have been achieved, i.e. what changes have occurred in the knowledge, skills and attitudes of the learners. The results obtained in the evaluation can be used to improve the educational process.
EN
This publication presents the ongoing development of visual teaching technology in IT systems, which can be used for e-learning for the “Millennial Generation”. The analysis of different models of teaching that make use of visual messages leads to the conclusion that systems more advanced in VPN technologies possess substantial educational qualities. These systems include TightVPN, UltraVNC, OpenVPN, RealVNC, Radmin, ComodoUNITE and TeamViewer.