
Results found: 19

Search results
Searched for keyword: Monte Carlo simulations

1. Open Physics | 2014 | vol. 12 | no. 6 | pp. 421-426
We present the results of a multicanonical Monte Carlo study of flexible and wormlike polymer chains. We investigate how the polymer structures observed during the simulations, mainly coil, liquid, and crystalline structures, can help to construct a hyperphase diagram that covers different polymer classes according to their thermodynamic behavior.

2. Simulation of dissociation of DNA duplexes attached to the surface
We present Monte Carlo simulations of the dissociation of duplexes formed of complementary single-stranded DNAs, with one of the strands attached to the surface. To describe the transition from the bound state to the unbound state of two strands located nearby, we use a lattice model taking DNA base-pair interactions and conformational changes into account. The results obtained are employed as a basis for a more coarse-grained model that includes strand backward association and the diffusion resulting in complete dissociation. The distribution of the dissociation time is found to be exponential. This finding indicates that the non-exponential kinetic features observed in the corresponding experiments seem to be related to extrinsic factors, e.g., to surface heterogeneity.
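
The exponential dissociation-time statistics mentioned above are easy to reproduce in a toy model. The Python sketch below (assuming numpy) is a minimal kinetic Monte Carlo "zipper" in which base pairs of an N-pair duplex open and close one at a time, and the duplex dissociates once all pairs are open; it is not the authors' lattice model, and all rates are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

N_BP = 8       # base pairs in the duplex (hypothetical)
K_OPEN = 1.0   # rate of opening one bound pair (hypothetical)
K_CLOSE = 2.0  # rate of re-closing an open pair (hypothetical)

def dissociation_time():
    """Gillespie dynamics for n = number of still-bound base pairs."""
    n, t = N_BP, 0.0
    while n > 0:
        r_open = K_OPEN
        r_close = K_CLOSE if n < N_BP else 0.0
        r_tot = r_open + r_close
        t += rng.exponential(1.0 / r_tot)    # waiting time to the next event
        if rng.random() < r_open / r_tot:
            n -= 1                           # one more pair opens
        else:
            n += 1                           # an open pair re-closes
    return t                                 # n == 0: duplex dissociated

times = np.array([dissociation_time() for _ in range(1000)])
# for an exponential distribution the std/mean ratio is close to 1
print("mean time:", times.mean(), " std/mean:", times.std() / times.mean())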

3. The zone strong coupling two-channel totally asymmetric simple exclusion processes
Open Physics | 2011 | vol. 9 | no. 4 | pp. 1077-1083
This article investigates zone strong coupling two-channel totally asymmetric simple exclusion processes (TASEPs). The study builds on Pronina and Kolomeisky's work [J. Phys. A: Math. Gen. 37, 9907 (2004)], in which the coupling extends over the two whole parallel channels. The zone strong coupling model focuses instead on the behavior and effect of a particular segment rather than the whole channel. The study shows that there are five possible stationary phases: LD/LD, HD/HD, MC/LD, LD/HD, and MC/HD. The phase diagrams and density profiles are investigated using Monte Carlo computer simulations and a mean-field approximation. The simulation outcomes agree well with the analytical predictions.
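
To make the phase-diagram language above concrete, here is a minimal single-channel open-boundary TASEP with random-sequential updates, written in Python with numpy; it is not the coupled two-channel zone model studied in the paper, and the entry rate ALPHA and exit rate BETA are hypothetical. With ALPHA < BETA and ALPHA < 1/2, the measured bulk density should approach ALPHA, the mean-field prediction for the low-density (LD) phase.

import numpy as np

rng = np.random.default_rng(1)

L_SITES = 200
ALPHA, BETA = 0.3, 0.8
SWEEPS = 20_000

tau = np.zeros(L_SITES, dtype=int)   # site occupations (0 or 1)
profile = np.zeros(L_SITES)
measured = 0

for sweep in range(SWEEPS):
    for _ in range(L_SITES):             # random-sequential update
        i = rng.integers(-1, L_SITES)    # -1 encodes the entry move
        if i == -1:
            if tau[0] == 0 and rng.random() < ALPHA:
                tau[0] = 1               # particle enters at the left
        elif i == L_SITES - 1:
            if tau[i] == 1 and rng.random() < BETA:
                tau[i] = 0               # particle exits at the right
        elif tau[i] == 1 and tau[i + 1] == 0:
            tau[i], tau[i + 1] = 0, 1    # bulk hop to the right
    if sweep > SWEEPS // 2:              # discard the transient
        profile += tau
        measured += 1

profile /= measured
print("bulk density ~", profile[L_SITES // 4: 3 * L_SITES // 4].mean())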

4.
Polymer translocation through a nanochannel is studied by means of a Monte Carlo approach, in the presence of a static or oscillating external electric voltage. The polymer is described as a chain molecule according to the two-dimensional "bond fluctuation model". It moves through a piecewise linear channel, which mimics a nanopore in a biological membrane. The monomers of the chain interact with the walls of the channel, modelled as a reflecting barrier. We analyze the polymer dynamics, concentrating on the translocation time through the channel when an external electric field is applied. By introducing a source of coloured noise, we analyze the effect of correlated random fluctuations on the polymer translocation dynamics.
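
As a drastically simplified stand-in for the translocation coordinate (one dimension instead of the full bond fluctuation model), the following Python sketch drives a random walker through a channel of length M under a static or oscillating field and records translocation times; M, F0, and OMEGA are hypothetical parameters.

import numpy as np

rng = np.random.default_rng(2)

M = 50          # channel length in monomer units (hypothetical)
F0 = 0.2        # driving field strength (hypothetical)
OMEGA = 0.01    # angular frequency of the oscillating drive (hypothetical)

def translocation_time(oscillating=False, max_steps=10**6):
    """Biased random walk of the translocation coordinate x in (0, M)."""
    x, t = 1, 0
    while x < M:
        f = F0 * np.cos(OMEGA * t) if oscillating else F0
        x += 1 if rng.random() < 0.5 * (1.0 + f) else -1
        if x == 0:
            x = 1               # reflecting barrier at the channel entrance
        t += 1
        if t >= max_steps:
            return np.nan       # walker failed to translocate in time
    return t

static = [translocation_time() for _ in range(200)]
driven = [translocation_time(oscillating=True) for _ in range(200)]
print("mean translocation time, static field     :", np.nanmean(static))
print("mean translocation time, oscillating field:", np.nanmean(driven))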

5.
The Analytic Hierarchy Process (AHP) is a methodology that meets the requirements of effective multiple-criteria decision-making support. The consistency control of human pairwise judgments about preferences among alternative choices is the crucial issue in this concept. This research examines the efficiency of a recently proposed consistency index grounded in a redefined idea of triad inconsistency within pairwise comparison matrices. The quality of the recent proposal is studied and compared to other ideas with the application of Monte Carlo simulations coded and run in Wolfram Mathematica 8.0.
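
For readers who want to reproduce the flavour of such an experiment, the sketch below (Python/numpy, standing in for the Mathematica code of the study) generates random reciprocal pairwise comparison matrices on Saaty's 1-9 scale and computes Saaty's classical consistency index CI = (lambda_max - n)/(n - 1); note this is the classical index, not the triad-based index examined in the paper.

import numpy as np

rng = np.random.default_rng(3)
SAATY = np.array([1/9, 1/8, 1/7, 1/6, 1/5, 1/4, 1/3, 1/2,
                  1, 2, 3, 4, 5, 6, 7, 8, 9])

def random_pcm(n):
    """Random reciprocal pairwise comparison matrix on Saaty's scale."""
    A = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = rng.choice(SAATY)
            A[j, i] = 1.0 / A[i, j]
    return A

def saaty_ci(A):
    """Saaty's consistency index CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()   # Perron eigenvalue is real
    return (lam_max - n) / (n - 1)

n = 5
cis = np.array([saaty_ci(random_pcm(n)) for _ in range(10_000)])
# the mean CI over random matrices estimates Saaty's random index RI(n)
print(f"estimated random index RI({n}) = {cis.mean():.3f}")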

6.
In the process of sample selection, an important issue is the relationship between the sample size and the type and complexity of the statistical model that is the basis for testing research hypotheses. The paper presents methodological aspects of sample size determination in multilevel structural equation modelling (SEM) in the analysis of satisfaction with banking products in Poland. Multilevel SEM makes it necessary to take into account the sample size both at the level of individual respondents and at the higher level of analysis, as well as the intraclass correlation coefficient. A comparison of factor loading bias based on Monte Carlo simulation is made for different cluster sizes and numbers of clusters.

7. Open Physics | 2008 | vol. 6 | no. 2 | pp. 296-305
In the last few years there has been significant interest in thin films, due to numerous specific phenomena related to the low dimension of these systems and to the large opportunities for developing high technologies based on their specific magnetic and electronic properties. When dealing with systems of reduced dimensionality, it is important to take into account the influence of magnetic anisotropies. In this paper we investigate the magnetic properties of a bilayer thin film. The behavior is modeled using Monte Carlo simulations within an extended anisotropic Heisenberg model. The magnetization, the out-of-plane and in-plane magnetic susceptibilities, and the temperature dependence of the specific heat are investigated in order to find the potential magnetic ordering phases and the critical temperatures for two sets of parameter assignments. For quasi-uniform anisotropy parameters of the film we detect the ferromagnetism-paramagnetism transition; then, by changing the model parameter values, we reveal a short-range ferromagnetic ordering phase arising from the influence of the antiferromagnetic coupling in the base layer and from the easy-plane anisotropy discontinuity at the layer interface.

8.
Introduction: Proton radiotherapy offers an advantage in sparing healthy tissue compared to photon therapy due to the specific interaction of protons with the patient's body. In radiobiological experiments, alpha sources are commonly used instead of proton accelerators for convenience, but ensuring a uniform dose distribution is challenging. Properly designing the cell irradiation setup is crucial to reliably measure the average cellular response in such experiments. The objective of this research is to underscore the importance of dosimetric validation in radiobiological investigations. While Monte Carlo (MC) simulations offer valuable insights, their accuracy needs experimental confirmation. Once consistent results are obtained, reliance on simulations becomes viable, as they are more efficient and less cumbersome than experimental procedures. Material and methods: The simulations are performed with three MC code-based tools, Geant4-DNA, GATE, and SRIM, to model the alpha radiation source and calculate dose distributions for various cell irradiation scenarios. Dosimetric verification of the experimental setup containing a 241Am source is performed using radiochromic films. Additionally, a clonogenic cell survival assay is carried out for the DU145 cell line. Results: The study introduces a novel source simulation model derived from dosimetric measurements. The comparison between the dosimetric results obtained from simulations and those measured experimentally yields a gamma (3%/3 mm) passing rate exceeding 99%. Furthermore, the LQ model parameters fitted to the survival data of DU145 cells irradiated with particles emitted from the 241Am source are consistent with previously published findings. Conclusions: Radiobiological experiments investigate cellular responses to various irradiation scenarios. Challenges arise with the densely ionizing radiation used in clinical practice, particularly in ensuring uniform dose delivery for reliable experiments. MC codes aid in simulating dose distributions and designing irradiation systems for consistent cell treatment. However, experimental validation is essential before relying on simulation results. Once confirmed, these results offer a cost-effective and time-efficient approach to planning radiobiological experiments compared to traditional laboratory work.
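
The gamma criterion quoted in the Results is straightforward to compute in one dimension. The following Python/numpy sketch implements a global gamma (3%/3 mm) comparison on synthetic dose profiles; it evaluates gamma only at the sampled points (no interpolation), and the profiles are invented for illustration, not the paper's data.

import numpy as np

def gamma_passing_rate(x, d_ref, d_eval, dd=0.03, dta=3.0):
    """Global 1D gamma analysis: dd = dose-difference criterion as a
    fraction of the maximum reference dose, dta = distance criterion [mm]."""
    d_norm = dd * d_ref.max()
    gammas = np.empty(len(x))
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dist = (x - xi) / dta                 # distance term for all points
        dose = (d_eval - di) / d_norm         # dose-difference term
        gammas[i] = np.sqrt(dist**2 + dose**2).min()
    return (gammas <= 1.0).mean()

x = np.arange(0.0, 40.0, 0.5)                 # positions in mm
ref = np.exp(-((x - 25.0) / 4.0) ** 2)        # toy reference profile
ev = 1.01 * np.exp(-((x - 25.4) / 4.0) ** 2)  # shifted, 1% hotter copy
print(f"gamma (3%/3 mm) passing rate: {100 * gamma_passing_rate(x, ref, ev):.1f}%")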

9. Numerical study of the three-state Ashkin-Teller model with competing dynamics
An open ferromagnetic Ashkin-Teller model with spin variables 0, ±1 is studied by standard Monte Carlo simulations on a square lattice in the presence of competing Glauber and Kawasaki dynamics. The Kawasaki dynamics simulates spin-exchange processes that continuously flow energy into the system from an external source. Our calculations reveal the presence in the model of tricritical points, where first-order and second-order transition lines meet. Beyond that, several self-organized phases are detected when the Kawasaki dynamics becomes dominant. Phase diagrams comprising phase boundaries and stationary states have been determined in the space of the model parameters. When spin-phonon interactions are incorporated into the model Hamiltonian, numerical results indicate that the paramagnetic phase is stabilized and almost all of the self-organized phases are destroyed.
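
The competing-dynamics scheme is easy to demonstrate on a simpler system. The Python sketch below applies it to a plain two-state Ising model rather than the three-state Ashkin-Teller model of the paper: with probability P_GLAUBER a heat-bath (Glauber) single-spin flip at temperature T, otherwise a Kawasaki exchange of a random neighbouring pair accepted only when it raises the energy, mimicking the external energy flux. Lattice size, T, and the mixing probability are hypothetical.

import numpy as np

rng = np.random.default_rng(4)

L, T, P_GLAUBER = 32, 2.0, 0.9
s = rng.choice([-1, 1], size=(L, L))
NBRS = ((0, 1), (1, 0), (0, -1), (-1, 0))

def local_field(i, j):
    """Sum of the four nearest-neighbour spins (periodic boundaries)."""
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
            + s[i, (j + 1) % L] + s[i, (j - 1) % L])

for step in range(400_000):
    i, j = rng.integers(L), rng.integers(L)
    if rng.random() < P_GLAUBER:
        # Glauber heat-bath flip: contact with a thermal bath at temperature T
        p_up = 1.0 / (1.0 + np.exp(-2.0 * local_field(i, j) / T))
        s[i, j] = 1 if rng.random() < p_up else -1
    else:
        # Kawasaki exchange feeding energy into the system from outside
        di, dj = NBRS[rng.integers(4)]
        k, l = (i + di) % L, (j + dj) % L
        if s[i, j] != s[k, l]:
            h1 = local_field(i, j) - s[k, l]   # neighbours minus the partner
            h2 = local_field(k, l) - s[i, j]
            dE = (s[i, j] - s[k, l]) * (h1 - h2)
            if dE > 0:                         # accept only uphill exchanges
                s[i, j], s[k, l] = s[k, l], s[i, j]

print("magnetisation per spin:", abs(s.sum()) / L**2)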

10. Protein modeling and structure prediction with a reduced representation
Protein modeling can be done at various levels of structural detail, from simplified lattice or continuous representations, through high-resolution reduced models employing a united-atom representation, to the all-atom models of molecular mechanics. Here I describe a new high-resolution reduced model, its force field, and its applications in structural proteomics. The model uses a lattice representation with 800 possible orientations of the virtual alpha carbon-alpha carbon bonds. The sampling scheme of the conformational space employs the Replica Exchange Monte Carlo method. The knowledge-based potentials of the force field include generic protein-like conformational biases, statistical potentials for the short-range conformational propensities, a model of the main-chain hydrogen bonds, and context-dependent statistical potentials describing the side-group interactions. The model is more accurate than previously designed lattice models, and in many applications it is complementary to and competitive with all-atom techniques. The test applications include ab initio structure prediction, multitemplate comparative modeling, and structure prediction based on sparse experimental data. In particular, the new approach to comparative modeling could be a valuable tool of structural proteomics: it is shown to go beyond the range of applicability of the traditional methods of protein comparative modeling.
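
Since the sampling scheme above is Replica Exchange Monte Carlo, a self-contained illustration of that scheme may help. The Python sketch below runs parallel Metropolis walkers on a toy one-dimensional double-well energy (a stand-in for a folding funnel, nothing like the CABS-type force field) and swaps configurations between neighbouring temperatures with the standard exchange criterion; the temperature ladder is hypothetical.

import numpy as np

rng = np.random.default_rng(5)

def energy(x):
    """Toy double-well landscape with minima at x = -1 and x = +1."""
    return 10.0 * (x**2 - 1.0) ** 2

TEMPS = [0.05, 0.15, 0.5, 1.5]          # temperature ladder (hypothetical)
x = np.full(len(TEMPS), -1.0)           # all replicas start in the left well

cold_trace = []
for sweep in range(20_000):
    for r, T in enumerate(TEMPS):       # local Metropolis move per replica
        prop = x[r] + rng.normal(0.0, 0.3)
        dE = energy(prop) - energy(x[r])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            x[r] = prop
    r = rng.integers(len(TEMPS) - 1)    # try swapping a neighbouring pair
    d = (1 / TEMPS[r] - 1 / TEMPS[r + 1]) * (energy(x[r]) - energy(x[r + 1]))
    if d >= 0 or rng.random() < np.exp(d):
        x[r], x[r + 1] = x[r + 1], x[r]
    cold_trace.append(x[0])             # follow the coldest replica

cold = np.array(cold_trace[5000:])
# without the swaps the coldest replica would stay trapped near x = -1
print("time spent in the right-hand well:", (cold > 0).mean())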

11.
A new approach to comparative modeling of proteins, TRACER, is described and benchmarked against classical modeling procedures. The new method unifies true three-dimensional threading with coarse-grained sampling of the query protein's conformational space. An initial sequence alignment of the query protein with a template is not required, although a template needs to be identified in some way. The template is used as a multi-featured fuzzy three-dimensional scaffold. The conformational search for the query protein is guided by the intrinsic force field of the coarse-grained modeling engine CABS and by compatibility with the template scaffold. During Replica Exchange Monte Carlo simulations, the model chain representing the query protein finds the best possible structural alignment with the template chain, which also optimizes the intra-protein interactions as approximated by the knowledge-based force field of CABS. A benchmark performed for a representative set of query/template pairs of various degrees of sequence similarity showed that the new method allows meaningful comparative modeling also in the region of marginal, or non-existent, sequence similarity. Thus, the new approach significantly extends the applicability of comparative modeling.

12.
Time series models are a popular tool commonly used to describe time-varying phenomena. One of the most popular models is the Gaussian AR model. However, when the data contain outlier observations with "large" values, Gaussian models are not a good choice. We therefore abandon the assumption of normality of the data distribution and propose an AR model based on the double Pareto distribution. We introduce estimators of the model's parameters obtained by the maximum likelihood method. For this purpose, we use the Maclaurin series expansion and the Chebyshev polynomial expansion of the likelihood function. We compare the results with the Yule-Walker estimator in the finite-variance case and with the modified Yule-Walker estimator in the infinite-variance case. The accuracy of the results was checked by Monte Carlo simulations.
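
As a small companion experiment in the spirit of the accuracy check above, the Python sketch below measures the sampling behaviour of the plain Yule-Walker estimator of an AR(1) coefficient under Gaussian and under heavy-tailed innovations; it uses a standard Cauchy as an extreme stand-in for the infinite-variance case and does not implement the double Pareto model or the maximum likelihood estimators of the paper.

import numpy as np

rng = np.random.default_rng(6)

PHI, N, REPS = 0.6, 500, 2000   # true coefficient, series length, replications

def simulate_ar1(phi, noise):
    x = np.zeros(len(noise))
    for t in range(1, len(noise)):
        x[t] = phi * x[t - 1] + noise[t]
    return x

def yule_walker_phi(x):
    """AR(1) Yule-Walker estimate: lag-1 sample autocorrelation."""
    x = x - x.mean()
    return (x[1:] * x[:-1]).sum() / (x * x).sum()

est_gauss = [yule_walker_phi(simulate_ar1(PHI, rng.normal(size=N)))
             for _ in range(REPS)]
est_heavy = [yule_walker_phi(simulate_ar1(PHI, rng.standard_cauchy(N)))
             for _ in range(REPS)]

print("Gaussian innovations : mean %.3f, std %.3f"
      % (np.mean(est_gauss), np.std(est_gauss)))
print("Cauchy innovations   : mean %.3f, std %.3f"
      % (np.mean(est_heavy), np.std(est_heavy)))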

13.
Background: In today's highly volatile and unpredictable market conditions, there are very few investment strategies that may offer a certain form of capital protection. The concept of portfolio insurance strategies presents an attractive investment opportunity. Objectives: The main objective of this article is to test the use of portfolio insurance strategies in Southeast European (SEE) markets. Special attention is given to modelling the non-risky assets of the portfolio. Methods/Approach: Monte Carlo simulations are used to test the buy-and-hold, the constant-mix, and the constant proportion portfolio insurance (CPPI) investment strategies. A covariance discretization method is used for parameter estimation of bond returns. Results: According to the risk-adjusted return, a conservative constant mix was the best, buy-and-hold the second-best, and CPPI the worst strategy in bull markets. In bear markets, CPPI was the best in a high-volatility scenario, whereas buy-and-hold performed equally well in low- and medium-volatility conditions. In no-trend markets, buy-and-hold was the first, the constant mix the second, and CPPI the worst strategy. Higher transaction costs in SEE influence the efficiency of the CPPI strategy. Conclusions: Implementing the CPPI strategy in SEE could be done by combining stock markets from the region with government bond markets from Germany, due to a lack of liquidity in the SEE government bond markets.
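
For orientation, a bare-bones CPPI simulation is shown below in Python/numpy: the risky asset follows geometric Brownian motion, the exposure is the multiplier times the cushion above a floor that accrues at the safe rate, and terminal values are collected over Monte Carlo paths. All parameter values are hypothetical, and the sketch ignores the transaction costs and the bond-return modelling that the article emphasizes.

import numpy as np

rng = np.random.default_rng(7)

V0, FLOOR0, MULT = 100.0, 90.0, 4.0   # initial wealth, floor, CPPI multiplier
MU, SIGMA, RF = 0.07, 0.25, 0.02      # risky drift, volatility, safe rate
STEPS, DT, PATHS = 252, 1.0 / 252, 2000

final = np.empty(PATHS)
for p in range(PATHS):
    v, floor = V0, FLOOR0
    for t in range(STEPS):
        cushion = max(v - floor, 0.0)
        risky = min(MULT * cushion, v)     # exposure capped at total wealth
        safe = v - risky
        growth = np.exp((MU - 0.5 * SIGMA**2) * DT
                        + SIGMA * np.sqrt(DT) * rng.normal())
        v = risky * growth + safe * (1.0 + RF * DT)
        floor *= 1.0 + RF * DT             # floor accrues at the safe rate
    final[p] = v

floor_T = FLOOR0 * (1.0 + RF * DT) ** STEPS
print("mean terminal value   :", final.mean())
print("P(breaching the floor):", (final < floor_T).mean())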

14.
Evaluating and improving forecast accuracy raises the quality of the decision-making process. In Romania, the most accurate predictions of the unemployment rate over the forecasting horizon 2001-2012 were provided by the Institute for Economic Forecasting (IEF), followed by the European Commission and the National Commission for Prognosis (NCP). This result is based on the U1 statistic, and the conclusion remains the same when several accuracy indicators are considered simultaneously in a multi-criteria ranking. Combined forecasts are a suitable strategy for improving the accuracy of these predictions. The accuracy of the NCP predictions over the 2001-2012 horizon can be improved if the initial values are smoothed using the Holt-Winters technique and the Hodrick-Prescott filter. Using the Monte Carlo method to simulate the forecast unemployment rate proved to be the best way to improve prediction accuracy. Starting from an AR(1) model for the variable of interest, uncertainty analysis was incorporated by simulating over the model parameters. The means of the forecast distributions for unemployment are taken as point predictions, which outperform the expectations of the three institutions. The Monte Carlo-based strategy is an original contribution of the author, introduced in this article among the empirical strategies for obtaining better predictions.
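
The parameter-simulation idea described here can be sketched compactly: fit an AR(1) by least squares, redraw the coefficients from their estimated sampling distribution, simulate the next value under each draw, and use the mean of the simulated distribution as the point forecast. The Python example below does exactly that on invented unemployment-rate numbers, which are purely illustrative and not the Romanian data of the article.

import numpy as np

rng = np.random.default_rng(8)

# hypothetical annual unemployment rates in % (illustrative only)
y = np.array([7.1, 6.6, 8.4, 7.0, 8.0, 7.2, 6.4, 5.8, 4.4, 6.3, 7.4, 7.0])

# OLS fit of y[t] = c + phi * y[t-1] + eps
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ coef
sigma = resid.std(ddof=2)
cov = sigma**2 * np.linalg.inv(X.T @ X)   # estimated coefficient covariance

# Monte Carlo over the parameters plus one innovation per draw
draws = rng.multivariate_normal(coef, cov, size=10_000)
sims = draws[:, 0] + draws[:, 1] * y[-1] + rng.normal(0.0, sigma, size=10_000)
print("point forecast (mean of simulated distribution): %.2f" % sims.mean())
print("80%% interval: [%.2f, %.2f]" % tuple(np.percentile(sims, [10, 90])))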

16. A "Monte Carlo" simulation model of order quantity control

17.
A high-resolution reduced model of proteins is used in Monte Carlo dynamics studies of the folding mechanism of a small globular protein, the B1 immunoglobulin-binding domain of streptococcal protein G. It is shown that, in order to reproduce the physics of the folding transition, the united-atom-based model requires a set of knowledge-based potentials mimicking the short-range conformational propensities and protein-like chain stiffness, a model of directional and cooperative hydrogen bonds, and properly designed knowledge-based potentials of the long-range interactions between the side groups. The folding of the model protein is cooperative and very fast. In a single trajectory, a number of folding/unfolding cycles were observed. Typically, the folding process is initiated by the assembly of a native-like structure of the C-terminal hairpin. In the next stage, the rest of the four-stranded β-sheet folds. The slowest step of this pathway is the assembly of the central helix on the scaffold of the β-sheet.

18. A simulation-based assessment of the size of the BDS test
The BDS test is one of the most important and most commonly used tools for the detection of nonlinearity in time series. In the paper, the size of the BDS test is assessed using Monte Carlo simulations. The simulations use pseudo-random series of different lengths, generated from seven distributions with different properties. The study considers three ways of approximating the finite-sample distribution of the BDS statistic: the classical approach based on the asymptotic normal distribution, and two resampling methods, the bootstrap and the permutation technique.
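
A size study of this kind is compact to set up. The Python sketch below uses the BDS implementation that, to my knowledge, statsmodels ships in statsmodels.tsa.stattools (this availability is an assumption worth checking) and estimates the rejection rate of the asymptotic test on i.i.d. data from a few distributions; series length, replication count, and the distributions are chosen for illustration, not to match the paper's design.

import numpy as np
from statsmodels.tsa.stattools import bds   # assumed statsmodels BDS test

rng = np.random.default_rng(9)

N, REPS, ALPHA = 300, 200, 0.05

def empirical_size(sampler):
    """How often the asymptotic BDS test rejects truly i.i.d. data."""
    rejections = 0
    for _ in range(REPS):
        stat, pval = bds(sampler(N), max_dim=2)
        if float(np.atleast_1d(pval)[0]) < ALPHA:
            rejections += 1
    return rejections / REPS

print("normal    :", empirical_size(lambda n: rng.normal(size=n)))
print("uniform   :", empirical_size(lambda n: rng.uniform(size=n)))
print("Student t3:", empirical_size(lambda n: rng.standard_t(3, size=n)))
# rates near 0.05 mean the asymptotic approximation holds its nominal size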

19.
Pearson's χ² statistic, proposed in 1900, is still the most important measure for studying the independence of characteristics, especially since it extends to three-way and higher contingency tables. The question arises, however, of how well two-way tables can detect a relationship between features, i.e., what their power is. This question is difficult to answer analytically, so the best approach seems to be to generate two-way tables and determine the power through simulation studies. For the 2×2 table it is also possible to determine the test power analytically and to compare the analytical results with the empirical values. The results presented in this work allow the reader to see to what extent the power of tests for two-way tables depends on the sample size and on the strength of the association between the features. The aim of this study is to provide a ready-to-use computer implementation for studying the power of tests for two-way tables, made available as a file on the Internet. The presented theory and examples will help readers to explore test power using Pearson's χ² statistic and to model the density and cumulative distribution functions of the central and noncentral chi-square distributions.
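
A minimal version of the simulation-versus-analytic comparison described above fits in a few lines of Python with numpy and scipy. The sketch fixes hypothetical 2×2 cell probabilities with some association, estimates power empirically with Pearson's chi-square test, and compares it with the analytical power from the noncentral chi-square distribution with noncentrality N·Σ(p_ij − p_i·p_·j)²/(p_i·p_·j).

import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# hypothetical 2x2 cell probabilities with association between the features
P = np.array([[0.30, 0.20],
              [0.15, 0.35]])
N, REPS, ALPHA = 200, 5000, 0.05

# empirical power: generate tables and run Pearson's chi-square test
rejections = 0
for _ in range(REPS):
    counts = rng.multinomial(N, P.ravel()).reshape(2, 2)
    chi2, pval, dof, expected = stats.chi2_contingency(counts, correction=False)
    if pval < ALPHA:
        rejections += 1
print("simulated power :", rejections / REPS)

# analytical power via the noncentral chi-square distribution (df = 1)
row, col = P.sum(axis=1), P.sum(axis=0)
indep = np.outer(row, col)                   # cell probabilities under H0
lam = N * ((P - indep) ** 2 / indep).sum()   # noncentrality parameter
crit = stats.chi2.ppf(1 - ALPHA, df=1)
print("analytical power:", 1 - stats.ncx2.cdf(crit, df=1, nc=lam))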