Results found: 120

Search results
Searched for keyword: robustness
EN
This article presents a novel approach for controlling an induction motor (IM) drive using a fractional-order proportional integral (FrOPI) controller within an indirect field-oriented control (IFOC) scheme. In contrast to conventional integer-order PI (IOPI) controllers, FrOPI controllers demonstrate enhanced performance owing to their nonlinear characteristics and the inherent iso-damping property of fractional-order operators. The performance of the induction motor is thoroughly assessed under various conditions, including starting, running, speed reversal, and sudden changes in load torque. Simulation results are then presented to confirm the effectiveness of the induction motor drive when utilizing the FrOPI controller.
PL
This article presents a novel approach to controlling an induction motor (IM) drive using a fractional-order proportional-integral (FrOPI) controller within an indirect field-oriented control (IFOC) scheme. In contrast to conventional integer-order PI (IOPI) controllers, FrOPI controllers exhibit improved performance thanks to their nonlinear characteristics and the inherent iso-damping property of fractional-order operators. The performance of the induction motor is thoroughly evaluated under various conditions, including start-up, steady running, speed reversal, and sudden changes in load torque. Simulation results are then presented to confirm the effectiveness of the induction motor drive with the FrOPI controller.
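For reference, a fractional-order PI controller of the kind discussed above is commonly written as (standard notation from the fractional control literature, not taken from the paper):

```latex
C(s) = K_p + \frac{K_i}{s^{\lambda}}, \qquad 0 < \lambda < 2
```

Setting λ = 1 recovers the integer-order PI controller; a non-integer λ yields the iso-damping behavior mentioned in the abstract.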
PL
The article discusses a test of the emergency load-bearing capacity of a point-fixed laminated glass plate. The aim of the test was to check the load capacity of the plate after the top glass sheet was shattered. The test comprised three stages: preloading of the plate, reloading after the top layer was shattered, and loading until failure. The results showed that after the top sheet is broken, the plate can still carry loads thanks to the cooperation of the EVA foil interlayer with the remaining layers. The article concludes that the tested type of glass plate with a shattered top layer remains safe for users and can stay in service without immediate replacement. The article also describes the technological processes of glass plate production, including tempering and lamination.
EN
The article discusses a test of the robustness of a point-fixed laminated glass plate. The purpose of the test was to check the load-carrying capacity of the plate in an emergency situation, after the top sheet of glass was shattered. The test included three stages: preloading of the slab, reloading after the top layer was shattered, and loading until failure. The results showed that after the top layer is broken, the panel can still carry the load thanks to the cooperation of the EVA foil interlayer with the other layers. The conclusions of the article indicate that the tested type of glass slab with a broken top layer is still safe for users and can remain in service without the need for immediate replacement. The article also describes the technological processes of glass plate formation, including tempering and lamination.
3
Unconditional Token Forcing: Extracting Text Hidden Within LLM
EN
With the help of simple fine-tuning, one can artificially embed hidden text into large language models (LLMs). This text is revealed only when triggered by a specific query to the LLM. Two primary applications are LLM fingerprinting and steganography. In the context of LLM fingerprinting, a unique text identifier (fingerprint) is embedded within the model to verify licensing compliance. In the context of steganography, the LLM serves as a carrier for hidden messages that can be disclosed through a designated trigger. Our work demonstrates that while embedding hidden text in the LLM via fine-tuning may initially appear secure, due to the vast number of possible triggers, it is susceptible to extraction through analysis of the LLM output decoding process. We propose a novel approach to extraction called Unconditional Token Forcing. It is premised on the hypothesis that iteratively feeding each token from the LLM's vocabulary into the model should reveal sequences with abnormally high token probabilities, indicating potential embedded-text candidates. Additionally, our experiments show that when the first token of a hidden fingerprint is used as an input, the LLM not only produces an output sequence with high token probabilities, but also repetitively generates the fingerprint itself. Code is available at github.com/jhoscilowic/zurek-stegano.
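As a rough illustration of the search loop described in the abstract (not the authors' code; `next_token_probs` below is a hypothetical toy stand-in for querying a real LLM):

```python
def next_token_probs(token):
    """Hypothetical stand-in for an LLM next-token query. In this toy
    vocabulary, the hidden fingerprint makes "rek" follow "zu" with
    abnormally high probability; everything else is near-uniform."""
    vocab = ["zu", "rek", "the", "a"]
    if token == "zu":
        return {"zu": 0.01, "rek": 0.97, "the": 0.01, "a": 0.01}
    return {t: 0.25 for t in vocab}

def forcing_candidates(vocab, threshold=0.9):
    """Feed each vocabulary token into the model and flag inputs whose
    continuation has abnormally high probability -- the potential
    embedded-text candidates described in the abstract."""
    candidates = []
    for tok in vocab:
        probs = next_token_probs(tok)
        best = max(probs, key=probs.get)
        if probs[best] >= threshold:
            candidates.append((tok, best, probs[best]))
    return candidates
```

On a real model the same loop would iterate over the full tokenizer vocabulary and inspect the decoded continuations.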
EN
Comparison of two survival functions, which describe the probability of not experiencing an event of interest by a given time point in two different groups, is a typical task in survival analysis. There are several well-established methods for comparing survival functions, such as the log-rank test and its variants. However, these methods often come with rigid statistical assumptions. In this work, we introduce a non-parametric alternative for comparing survival functions that is nearly free of assumptions. Unlike the log-rank test, which requires the estimation of hazard functions derived from (or facilitating the derivation of) survival functions and assumes a minimum number of observations to ensure asymptotic properties, our method models all possible scenarios based on observed data. These scenarios include those in which the compared survival functions differ in the same way or even more significantly, thus allowing us to calculate the p-value directly. Individuals in these groups may experience an event of interest at specific time points or may be censored, i.e., they might experience the event outside the observed time points. Focusing on all scenarios where survival probabilities differ at least as much as observed usually requires computationally intensive calculations. Censoring is treated as a form of noise, increasing the range of scenarios that need to be calculated and evaluated. Therefore, to estimate the p-value, we compare a greedy approach that computes all possible scenarios in which the groups' survival functions differ as observed or more, with a Monte Carlo simulation of these scenarios, alongside a traditional approach based on the log-rank test. Our proposed method reduces the type I error rate, enhancing its utility in studies where robustness against false positives is critical. We also analyze the asymptotic time complexity of both proposed approaches.
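The Monte Carlo side of such a comparison can be sketched as a label-permutation simulation. This is only an illustrative stand-in for the authors' scenario enumeration; the test statistic used here (difference in mean event time) is an assumption for the sketch:

```python
import random

def mc_pvalue(times_a, times_b, n_sim=2000, seed=0):
    """Monte Carlo p-value for the difference in mean event time between
    two groups, estimated by permuting group labels and counting how
    often the permuted difference is at least as large as the observed
    one."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(times_a) - mean(times_b))
    pooled = list(times_a) + list(times_b)
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(pooled)
        a, b = pooled[:len(times_a)], pooled[len(times_a):]
        if abs(mean(a) - mean(b)) >= observed:
            hits += 1
    return hits / n_sim
```

Identical groups yield a p-value of 1 (every permutation matches the observed difference of zero), while well-separated groups yield a small p-value.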
EN
The article contains selected results of comparative research, carried out in MATLAB and Simulink, on the quality of a parametric model, corrected in selected situations with the use of an ANN, against the Day-Ahead Market system of TGE S.A. The System Identification Toolbox library was used for the identification tests and Simulink for the simulation and comparative tests. The comparative studies used such measures of model and system quality as efficiency, effectiveness, and robustness. Their waveforms, their average values, and the absolute and relative errors between the identification model (or the identification-neural model) and the system were obtained. The results of the general tests are shown for the hours 6:00, 12:00, 18:00, and 24:00 in 2019, and the detailed tests for 6:00. The sensitivity of the obtained model-quality and Day-Ahead Market system waveforms was also tested, depending on the assumed values of parameters such as the electricity volume or the volume-weighted average price of electricity.
PL
Arm-Z is a concept of a hyper-redundant robotic manipulator composed of congruent one-degree-of-freedom (1-DOF) modules, capable of realizing (almost) arbitrary motions in space. The principal advantages of Arm-Z are economization (thanks to mass production of identical elements) and fault tolerance (first, broken modules can easily be replaced; second, even when one or more modules fail, the manipulator can still perform its intended tasks, possibly to a limited degree). The main drawback of the Arm-Z system is its non-intuitive, very difficult control. In other words, combining the non-trivial module concept with the formation of practical structures and controlling their reconfiguration (transformation from state A to B) is computationally very complex. Nevertheless, the presented approach is rational, given the widespread availability of great computational power, in contrast to the high cost and "fragility" of non-standard solutions and devices. The article outlines the general concept of the Arm-Z manipulator and presents preliminary work towards building a prototype.
EN
Arm-Z is a concept of a robotic manipulator composed of linearly joined congruent modules with the possibility of relative twist (1 DOF). The advantages of Arm-Z are economization (mass production) and robustness (failed modules can be replaced, and even if some fail, the system can still perform certain tasks). Its non-intuitive and difficult control is the main disadvantage. In other words, the combination of a non-trivial module shape with the forming of practical modular structures and their control (from state A to B) is computationally expensive. However, given the availability of modern computational power, the approach proposed here is rational and competitive, especially considering the high cost and sensitivity of non-standard solutions. This paper outlines the general concept of the Arm-Z manipulator and presents preliminary work towards a proof-of-concept prototype.
EN
This paper is a practical guideline on how to analyze and evaluate the literature algorithms of singularity-robust inverse kinematics, or to construct new ones. Additive, multiplicative, and Singular Value Decomposition (SVD) based methods are examined for restoring the well-conditioning of the matrix to be inverted in the Newton algorithm of inverse kinematics. It is shown that singularity avoidance can be performed in two different, but equivalent, ways: either via a properly modified manipulability matrix, or by not allowing the minimal singular value to decrease below a given threshold. It is discussed which methods can always be used and which can only be used when some preconditions are met. Selected methods are compared with respect to the efficiency of coping with singularities, based on a theoretical analysis as well as simulation results. Also, some questions important for mathematically and/or practically oriented roboticians are stated and answered.
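The additive approach mentioned above is commonly realized as damped least squares. The following minimal sketch (diagonal Jacobian for clarity, using the standard textbook formula rather than the paper's exact algorithms) shows how the regularization keeps the inverse bounded near a singularity:

```python
def damped_pinv_diag(singular_values, damping):
    """Damped least-squares (additive) inverse for a diagonal Jacobian:
    each singular value sigma is inverted as sigma / (sigma^2 + lambda^2),
    which stays bounded by 1 / (2 * lambda) as sigma -> 0, instead of
    blowing up like 1 / sigma near a singularity."""
    return [s / (s * s + damping * damping) for s in singular_values]
```

With damping 0.1, a near-zero singular value of 1e-6 is inverted to about 1e-4 rather than 1e6, at the cost of a small tracking error away from the singularity.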
8
Testing GNSS receiver robustness for jamming
EN
Global Navigation Satellite Systems (GNSSs), providing positioning, navigation, and synchronization, have become an important element of modern systems and devices that have a crucial impact on many branches of the economy and the life of the common person. Literature analysis and reports from recent armed conflicts show that the use of techniques for jamming and spoofing GNSS signals is becoming increasingly common. This reduces the level of safety in transport and increases the risk of improper operation of GNSS-based systems, such as cellular telephony or the banking sector. This paper focuses on the methodology for testing GNSS receiver robustness against jamming. For this purpose, a broadband jamming device was developed.
PL
Global navigation satellite systems (GNSS), providing positioning, navigation, and synchronization, have become an essential element of modern systems and devices that have a crucial impact on many branches of the economy and the life of the ordinary person. Literature analysis and reports from recent armed conflicts show that the use of techniques for jamming and spoofing GNSS signals is becoming increasingly common. This reduces the level of safety in transport and increases the risk of improper operation of GNSS-based systems. This article is devoted to the methodology of testing the robustness of a GNSS receiver to jamming. For this purpose, a broadband jamming device was developed.
EN
A microgrid is an appropriate concept for urban areas with high penetration of renewable power generation; it improves the reliability and efficiency of the distribution network at the consumer premises to meet various loads, such as domestic, industrial, and agricultural types. Microgrids comprising inverter-based and synchronous-generator-based distributed generators can become unstable during islanded operation. This paper presents a study on designing stable microgrids to facilitate higher penetration of solar power generation into a distribution network. A generalized small-signal model is derived for a microgrid with static loads, dynamic loads, energy storage, solar photovoltaic (PV) systems, and diesel generators, incorporating the features of dynamic systems. The model is validated by comparing the transient curves given by the model and by a transient simulator subjected to step changes. The results show that full dynamic models of complex microgrid systems can be built accurately, and that the proposed microgrid is stable for all the considered loading situations and solar PV penetration levels according to the small-signal stability analysis.
EN
The disadvantages of the conventional model predictive current control method for the grid-connected converter (GCC) with an inductance-capacitance-inductance (LCL) filter are its large computational burden and poor parameter robustness. Once the model parameters are mismatched, the control accuracy of model predictive control (MPC) is reduced, which seriously affects the power quality of the GCC. The article intuitively analyzes the sensitivity of the conventional LCL-filtered GCC's current predictive control to parameter mismatch. To solve these issues, a model-free predictive current control (MFPCC) method for the LCL-filtered GCC is proposed in this paper. The contribution of this work is a novel current predictive robust controller for the LCL-filtered GCC, designed on the principle of the ultra-local model of a single-input single-output system. The proposed control method does not require any model parameters in the controller, which effectively suppresses disturbances from uncertain parameter variations. Compared with conventional MPC, the proposed MFPCC has smaller current total harmonic distortion (THD). When the filter parameters are mismatched, the control error of the proposed method is smaller. Finally, a comparative experimental study is carried out on the Typhoon and PE-Expert4 platform to verify the superiority and effectiveness of the proposed MFPCC method for the LCL-filtered GCC.
EN
The control system described by Urysohn type integral equation is considered where the system is nonlinear with respect to the phase vector and is affine with respect to the control vector. The control functions are chosen from the closed ball of the space Lq(Ω; ℝm), q > 1, with radius r and centered at the origin. The trajectory of the system is defined as p-integrable multivariable function from the space Lp(Ω; ℝn), (1/q) + (1/p) = 1, satisfying the system’s equation almost everywhere. It is shown that the system’s trajectories are robust with respect to the fast consumption of the remaining control resource. Applying this result it is proved that every trajectory can be approximated by the trajectory obtained by full consumption of the total control resource.
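For orientation, one common form of a Urysohn-type control system that is nonlinear in the state and affine in the control, matching the description above, is (the kernels K1, K2 below are illustrative notation, not taken from the paper):

```latex
x(\xi) = \int_{\Omega} K_1(\xi, s, x(s))\, ds + \int_{\Omega} K_2(\xi, s, x(s))\, u(s)\, ds, \qquad \xi \in \Omega,
```

with controls $u \in L_q(\Omega; \mathbb{R}^m)$, $\|u\|_q \le r$, and trajectories $x \in L_p(\Omega; \mathbb{R}^n)$, $1/p + 1/q = 1$, as in the abstract.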
EN
The fractional-order proportional, integral, derivative, and acceleration (PIλDµA) controller is an extension of the classical PIDA controller with real, rather than integer, integration order λ and differentiation order µ. Because the orders λ and µ are real numbers, they provide more flexibility in feedback control design for a large range of control systems. Bode's ideal transfer function is widely adopted in fractional control systems because of its iso-damping property, which is an essential robustness factor. In this paper, an analytical design technique for a fractional-order PIλDµA controller is presented to achieve a desired closed-loop system whose transfer function is Bode's ideal function. In this design method, the values of the six parameters of the fractional-order PIλDµA controller are calculated using only the measured step response of the process to be controlled. Simulation examples for different third-order motor models are presented to illustrate the benefits, effectiveness, and usefulness of the proposed tuning technique. The simulation results of the closed-loop system obtained with the fractional-order PIλDµA controller are compared to those obtained with the classical PIDA controller using different design methods found in the literature. The results show a significant improvement in closed-loop performance and robustness with the proposed fractional-order PIλDµA controller design.
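Bode's ideal (loop) transfer function referred to above has the standard form

```latex
L(s) = \left( \frac{\omega_c}{s} \right)^{\gamma}, \qquad 1 < \gamma < 2,
```

whose phase, $\arg L(j\omega) = -\gamma\pi/2$, is constant in frequency; the phase margin $\varphi_m = \pi(1 - \gamma/2)$ is therefore insensitive to gain variations, which is the iso-damping property mentioned in the abstract.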
13
When to Trust AI: Advances and Challenges for Certification of Neural Networks
EN
Artificial intelligence (AI) has been advancing at a fast pace and it is now poised for deployment in a wide range of applications, such as autonomous systems, medical diagnosis and natural language processing. Early adoption of AI technology for real-world applications has not been without problems, particularly for neural networks, which may be unstable and susceptible to adversarial examples. In the longer term, appropriate safety assurance techniques need to be developed to reduce potential harm due to avoidable system failures and ensure trustworthiness. Focusing on certification and explainability, this paper provides an overview of techniques that have been developed to ensure safety of AI decisions and discusses future challenges.
14
L1-Norm Principal Component Analysis Using Quaternion Rotations
EN
Principal component analysis (PCA) based on the L1-norm has drawn growing interest in recent years. It is especially popular in the machine learning and pattern recognition communities for its robustness to outliers. Although optimal algorithms for L1-norm maximization exist, they have very high computational complexity and can be used for evaluation purposes only. In practice, only approximate techniques have been considered so far. Currently, the most popular method is the bit-flipping technique, in which L1-norm maximization is viewed as a combinatorial problem over the binary field. Recently, we proposed an exhaustive but faster algorithm, based on two-dimensional Jacobi rotations, that also offers high accuracy. In this paper, we develop a novel variant of this method that uses three-dimensional rotations and quaternion algebra. Our experiments show that the proposed approach offers higher accuracy than other approximate algorithms, but at the expense of additional computational cost. However, for large datasets, the cost is still lower than that of the bit-flipping technique.
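The combinatorial (binary-field) view mentioned above can be sketched with a greedy bit-flipping search for the first L1 principal component. This is an illustrative sketch of the baseline technique, not the quaternion-rotation algorithm of the paper:

```python
import math

def l1_pca_bitflip(X):
    """Greedy bit-flipping for the first L1 principal component.
    X is a list of d-dimensional samples. We search over sign vectors
    b in {-1, +1}^N for one maximizing ||sum_i b_i * x_i||_2; the
    component is the normalized weighted sum."""
    n, d = len(X), len(X[0])
    b = [1] * n

    def score(bits):
        s = [sum(bits[i] * X[i][j] for i in range(n)) for j in range(d)]
        return math.sqrt(sum(v * v for v in s))

    improved = True
    best = score(b)
    while improved:
        improved = False
        for i in range(n):
            b[i] = -b[i]          # tentatively flip one sign
            s = score(b)
            if s > best + 1e-12:  # keep the flip only if it helps
                best = s
                improved = True
            else:
                b[i] = -b[i]      # revert
    comp = [sum(b[i] * X[i][j] for i in range(n)) for j in range(d)]
    return [v / best for v in comp]
```

On the toy data below, the search correctly aligns the signs so that the component points along the dominant axis.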
15
An Innovative Drastic Metric for Ranking Similarity in Decision-Making Problems
EN
In this paper, we propose a novel approach to distance measurement for rankings, introducing a new metric that exhibits exceptional properties. Our proposed distance metric is defined within the interval of 0 to 1, ensuring a compact and standardized representation. Importantly, we demonstrate that this distance metric satisfies all the essential criteria to be classified as a true metric. By adhering to properties such as non-negativity, identity of indiscernibles, symmetry, and the crucial triangle inequality, our proposed distance metric provides a robust and reliable approach for comparing rankings in a rigorous and mathematically sound manner. Finally, we compare our new metric with distances such as Hamming distance, Canberra distance, Bray-Curtis distance, Euclidean distance, Manhattan distance, and Chebyshev distance. By conducting simple experiments, we assess the performance and advantages of our proposed metric in comparison to these established distance measures. Through these comparisons, we demonstrate the superior properties and capabilities of our new drastic weighted similarity distance for accurately capturing the dissimilarities and similarities between rankings in the decision-making domain.
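Two of the baseline distances mentioned above can be normalized to the [0, 1] interval for rankings as follows (a minimal sketch of the comparison baselines; the authors' drastic metric itself is not reproduced here):

```python
def hamming(r1, r2):
    """Fraction of positions at which two rankings disagree (in [0, 1])."""
    return sum(a != b for a, b in zip(r1, r2)) / len(r1)

def manhattan_norm(r1, r2):
    """Manhattan distance between two permutations of 1..n, divided by
    its maximum possible value floor(n^2 / 2) so the result lies in [0, 1]."""
    n = len(r1)
    return sum(abs(a - b) for a, b in zip(r1, r2)) / ((n * n) // 2)
```

For example, a fully reversed ranking attains the maximum normalized Manhattan distance of 1, while identical rankings score 0 under both measures.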
16
Algorithmic Handling of Time Expanded Networks
EN
Time Expanded Networks, built by considering the nodes of a base network over some time space, are powerful tools for the formulation of problems involving synchronization mechanisms. Those mechanisms may, for instance, be related to the interaction between resource production and consumption, or between routing and scheduling. Still, in most cases, deriving algorithms from those formulations is difficult, due both to the size of the resulting network structure and to the fact that reducing this size through rounding techniques tends to induce uncontrolled error propagation. We address this algorithmic issue here, proposing a generic decomposition scheme which works by first skipping the temporal dimension of the problem and then expanding the resulting projected solution into a full solution of the problem set on the time-expanded network.
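The basic time-expansion construction underlying such formulations can be sketched as follows (a generic textbook construction, assumed for illustration rather than taken from the paper):

```python
def time_expand(nodes, arcs, horizon):
    """Build a time-expanded network from a base network: one copy
    (v, t) of each node per time step t = 0..horizon, a movement arc
    ((u, t), (v, t + d)) for each base arc (u, v) with traversal time d,
    plus waiting arcs ((v, t), (v, t + 1))."""
    t_nodes = [(v, t) for v in nodes for t in range(horizon + 1)]
    t_arcs = [((u, t), (v, t + d))
              for (u, v, d) in arcs
              for t in range(horizon + 1 - d)]
    t_arcs += [((v, t), (v, t + 1)) for v in nodes for t in range(horizon)]
    return t_nodes, t_arcs
```

The size blow-up the abstract refers to is visible directly: the node count is multiplied by the number of time steps, which is why projection and re-expansion schemes are attractive.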
EN
A new precise, selective, and reliable reversed-phase high-performance liquid chromatographic (RP-HPLC) method has been developed and validated for the determination of methyl paraben sodium (MPS) and propyl paraben sodium (PPS) (preservatives) in iron protein succinylate syrup. The optimized conditions were: methanol–water (65:35) as the mobile phase, a UV/Vis detector at a wavelength of 254 nm, and a flow rate of 1.3 ml min⁻¹. Under these conditions, separation of the components was achieved in less than 7 min for both analytes. The method was validated according to International Conference on Harmonisation (ICH) guidelines, and the analytical validation parameters, including specificity, limit of detection (LOD), limit of quantification, linearity, accuracy, precision, and robustness, were evaluated. The calibration curve was found to be linear in the range of 0.045 mg mL⁻¹ to 0.075 mg mL⁻¹ for methyl paraben sodium and 0.015 mg mL⁻¹ to 0.025 mg mL⁻¹ for propyl paraben sodium, with a correlation coefficient r² > 0.999. Accuracy, reported as percentage recovery, was found to be in the range of 98.71%–101.64% for methyl paraben sodium and 99.85%–101.47% for propyl paraben sodium at 80%, 100%, and 120% concentration for both analytes. The proposed method was found to be precise and robust when evaluated by variations in wavelength, mobile phase composition, temperature, and analyst. The limit of detection (LOD) was found to be 0.001 mg mL⁻¹ (3 ppm) for methyl paraben sodium and 0.001 mg mL⁻¹ (1 ppm) for propyl paraben sodium.
18
Robust estimation of the spherical normal distribution
EN
This paper develops a new family of estimators, the minimum density power divergence estimators, for the parameters of the Spherical Normal Distribution. This family contains the maximum likelihood estimator as a particular case. The robustness is empirically illustrated through a Monte Carlo simulation study and two biological numerical examples. Tools needed to implement these methods are also provided.
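The density power divergence underlying this family of estimators has the following standard form in the robust statistics literature (with g the data-generating density and f the model density; notation assumed, not taken from the paper):

```latex
d_{\alpha}(g, f) = \int \left\{ f^{1+\alpha}(x) - \left(1 + \tfrac{1}{\alpha}\right) g(x)\, f^{\alpha}(x) + \tfrac{1}{\alpha}\, g^{1+\alpha}(x) \right\} dx, \qquad \alpha > 0.
```

As $\alpha \to 0$ this divergence tends to the Kullback–Leibler divergence, whose minimizer is the maximum likelihood estimator; larger $\alpha$ trades efficiency for robustness, consistent with the abstract's statement that the MLE is a particular case.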
PL
The article presents a new family of minimum density power divergence estimators for the parameters of the spherical normal distribution. This family includes the maximum likelihood estimator as a special case. The robustness of these estimators is illustrated empirically through a Monte Carlo simulation study. The included real-data examples concern problems from biology. Tools needed to implement these methods are also presented.
EN
The paper experimentally and theoretically considers the assessment of the robustness and resistance to progressive collapse of a flat slab under sudden removal of the central support. The results of testing two scale models of a flat slab fragment with the central support removed, under static (specimen FS-1) and dynamic (specimen FS-2) loading, are presented and analyzed. A theoretical approach to the quantitative assessment of robustness, based on the energy balance of a damaged structural system in an accidental design situation, was tested.
PL
Treatment planning in proton radiotherapy will differ in some respects from planning with photon beams, due to differences in the physical properties of the two beam types. The Bragg peak occurring for proton beams makes it possible to limit the dose behind the tumor, but it also affects the beam-selection strategy that will be safe for the patient. Other methods of dose calculation, patient irradiation techniques, and treatment plan optimization are also emerging. Dose distribution calculation tools used in clinical practice must ensure high accuracy and agreement of calculations with experimental data in order to minimize proton beam range uncertainty. Current treatment planning systems are mostly based on analytical algorithms, but systems offering particle transport simulations based on Monte Carlo methods are also appearing. In addition, radiation transport codes make it possible to account for the influence of other physical quantities on the dose distribution, including the relative biological effectiveness of the proton beam. This paper presents the most important aspects of proton beam treatment planning, along with a discussion of current problems and strategies for solving them.
EN
Treatment planning in proton radiotherapy, in some aspects, will differ from planning with photon beams, due to differences in the physical properties of these two beams. The Bragg peak that occurs for proton beams provides an opportunity to reduce the dose behind the tumor, but it also affects the strategy for selecting beams that will be safe for the patient. Other methods of dose calculations, patient irradiation techniques and optimization of treatment plans are also appearing. Dose distribution calculation tools used in clinical practice must ensure high accuracy and compatibility of calculations with experimental data to minimize proton beam range uncertainty. Current treatment planning systems are mostly based on analytical algorithms, but systems offering particle transport simulations based on Monte Carlo methods are also being developed. In addition, radiation transport codes make it possible to take into account the influence of other physical quantities on dose distribution, including the relative biological effectiveness of the proton beam. This review will present the most important aspects in proton treatment planning, along with a discussion of current problems and strategies for solving them.