Search results
Searched for: keyword "complexity"
Results found: 106
EN
Purpose: identification of effective mechanisms of organizational adaptation under high uncertainty in light of the assumptions of complexity theory. Design/methodology/approach: The approach adopted involves a literature review drawing on the interdisciplinary grounding of complexity theory. Findings: The article conceptualizes patterns as schemata at the level of an organization, recognizes the relationship between schemata and organizational routines as an area that can be shaped, and identifies the dynamics of routines depending on the degree of uncertainty. Research limitations/implications: The results of the study are presented as a theoretical framework enabling further testing. Originality/value: Conceptualization of the contextual nature of routines, depending on the degree of uncertainty, as a mechanism of change and stability in the organization.
EN
Previous studies on the relationship between environmental regulation (ER) and the technological innovation efficiency of the defence industry (TIE-DI) have mainly focused on variability and complexity; few empirical studies have incorporated environmental flexibility into their models, and most rely on questionnaires. This paper therefore takes environmental regulation and environmental policy complexity (EPC) as its entry point and empirically tests the mechanism by which ER and EPC improve the applied technological innovation efficiency of China's defence industry enterprises (CDI). The conclusions provide a theoretical basis and empirical support for strengthening support for technological innovation efficiency, standardising market order and market leadership, establishing information disclosure mechanisms, and improving the internal control of defence industry enterprises.
EN
The main objective of the research was to identify the dimensions of complexity and to study the relationships between these dimensions in the industrial automation sector. The study assumed the following major hypothesis: as the dynamic cross-section of complexity grows in importance, so does the relationship dimension for competitive advantage. Four dimensions of complexity were diagnosed. The relationships between the four identified dimensions were tested with Fisher's exact test, a variant of the χ2 test of independence. Cramér's V coefficients were then calculated to estimate the strength of these relationships. The research found that three of the four dimensions (the number of elements, the variety of elements, and uncertainty) depend on the fourth dimension, the relationships between elements. In a turbulent environment the relationship dimension grows in importance: it forms competitive advantage and is a key condition of success in creating the new type of modern enterprise strategy that emerges within complexity management in the industrial automation sector.
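The two statistics named above are standard and easy to reproduce. Below is a minimal sketch (our own illustration with made-up example tables, not the study's data or code):

```python
import math

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables at least as extreme
    (no more probable) than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):
        # Probability of observing x in the top-left cell, margins fixed.
        return (math.comb(col1, x) * math.comb(n - col1, row1 - x)
                / math.comb(n, row1))
    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs + 1e-12)

def cramers_v(table):
    """Cramér's V for a 2D contingency table (list of rows).
    Assumes all row and column margins are positive."""
    n = sum(sum(row) for row in table)
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    chi2 = sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(len(rows)) for j in range(len(cols)))
    k = min(len(rows), len(cols)) - 1
    return math.sqrt(chi2 / (n * k))
```

Cramér's V ranges from 0 (independence) to 1 (perfect association), which is what makes it usable as an intensity measure alongside the exact test.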
4
Structural Liveness of Immediate Observation Petri Nets
EN
We look in detail at the structural liveness problem (SLP) for subclasses of Petri nets, namely immediate observation nets (IO nets) and their generalized variant called branching immediate multi-observation nets (BIMO nets), which were recently introduced by Esparza, Raskin, and Weil-Kennedy. We show that SLP is PSPACE-hard for IO nets and in PSPACE for BIMO nets. In particular, we discuss the (small) bounds on the token numbers in net places that are decisive for a marking to be (non-)live.
EN
Background: The central aim of this paper is to analyse increases of system complexity in the context of modern industrial information systems. Relevant theoretical frameworks are investigated and explored, culminating in a set of hypotheses as an explanatory approach towards a definition of system complexity based on information growth in industrial information systems. Several interconnected sources of technological information are investigated in their role as information-transferring agents, and their practical relevance is underlined by applying the concepts of Big Data and cyber-physical, cyber-human and cyber-physical-cyber-human systems. Methods: A systematic review of relevant literature was conducted. In total, 85 sources matching the scope of this article (academic journals and books from the mentioned fields, published between 2012 and 2019) were selected, individually read and reviewed by the authors, and narrowed by careful selection to 17 key sources which served as the basis for theory synthesis. Results: Four hypotheses (H1-H4) concerning exponential surges of system complexity in industrial information systems are introduced. Furthermore, first foundational ideas are presented for describing, modelling and simulating complex industrial information systems based on network and agent-based approaches and the concept of Shannon entropy. Conclusion: Based on the introduced hypotheses, it can be theoretically indicated that the amount of information aggregated and transferred in a system can serve as an indicator of the development of system complexity and as a possible explanatory concept for the exponential surges of system complexity in industrial information systems.
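As a pointer to how the Shannon-entropy ingredient could be operationalized, here is a minimal sketch (our own illustration, not the authors' model) measuring the entropy of a stream of discrete messages exchanged between agents in a system:

```python
import math
from collections import Counter

def shannon_entropy(messages):
    """Shannon entropy, in bits per message, of a stream of discrete
    messages (any hashable symbols). Higher values indicate a more
    diverse, less predictable information flow."""
    counts = Counter(messages)
    n = len(messages)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Tracking this quantity over time for the traffic of an information-transferring agent is one simple way to turn "information growth" into a measurable signal.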
EN
Background: Complexity has been an interesting research area for academics and business practitioners due to its relevance in determining best practices and impacts on the supply network. This research extends the literature and puts forward solutions for industry, since previous studies neglect whole-network relations, which are highlighted as a source of supply network complexity (SNC). Specifically, it examines two important elements of inter-firm relations (IFR): formal inter-firm relations (FIFR) and informal inter-firm relations (IIFR). The Social Network Analysis (SNA) method was adopted to develop valid attributes for the measurement process, and embeddedness theory was used to evaluate the interrelationships among the proposed attributes. Methods: Traditional statistical tools focus on attributes of a phenomenon as determinants of economic payoff, and are therefore not suitable for measuring how relations among network members contribute to network complexity. For this research, the SNA methodology was adopted to collect, analyse and interpret network data. A network survey was conducted to collect relational data among members of the maritime industry supply network. The data were analysed and interpreted using the specialized social network programs UCINET and NETDRAW. Statistical network measures such as centralization and density were applied to determine the relations between network complexity and network relations.
Results: The findings indicate that formal inter-firm relations (FIFR) and informal inter-firm relations (IIFR) have different effects on the formation of the supply network structure and consequently on SNC. Conclusion: The results of the statistical network analysis indicate that network complexity exists in different forms and structures, depending on the type of relations that formed the network in the first place. Consequently, managing a network requires different types of resources and strategies, as the level of network complexity differs at different states of connectivity.
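The two network measures named in the Methods, density and (Freeman) degree centralization, can be computed without specialized software. A minimal sketch with a toy graph of our own (the study itself used UCINET/NETDRAW):

```python
def density(n, edges):
    """Density of an undirected simple graph on vertices 0..n-1:
    fraction of possible ties that are present."""
    return 2 * len(edges) / (n * (n - 1))

def degree_centralization(n, edges):
    """Freeman degree centralization for an undirected graph on vertices
    0..n-1 (n >= 3): how strongly the network is dominated by its most
    connected node (0 for a regular graph, 1 for a star)."""
    deg = {v: 0 for v in range(n)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    dmax = max(deg.values())
    # Normalise by the maximum possible sum of differences (a star graph).
    return sum(dmax - d for d in deg.values()) / ((n - 1) * (n - 2))
```

A formal-contract network and an informal-advice network over the same firms can then be compared directly on these two numbers, which is the spirit of the FIFR/IIFR comparison above.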
EN
We derive well-understood and well-studied subregular classes of formal languages purely from the computational perspective of algorithmic learning problems. We parameterise the learning problem along dimensions of representation and inference strategy. Of special interest are those classes of languages whose learning algorithms are necessarily not prohibitively expensive in space and time, since learners are often exposed to adverse conditions and sparse data. Learned natural language patterns are expected to be most like the patterns in these classes, an expectation supported by previous typological and linguistic research in phonology. A second result is that the learning algorithms presented here are completely agnostic to choice of linguistic representation. In the case of the subregular classes, the results fall out from traditional model-theoretic treatments of words and strings. The same learning algorithms, however, can be applied to model-theoretic treatments of other linguistic representations such as syntactic trees or autosegmental graphs, which opens a useful direction for future research.
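As an illustration of how cheap such learners can be, here is a string-extension learner for the Strictly 2-Local (SL-2) class, one of the subregular classes commonly discussed in this literature (a minimal sketch of the standard construction, not code from the paper):

```python
def learn_sl2(sample):
    """String-extension learner for the Strictly 2-Local class: the
    grammar is simply the set of attested bigrams, with the word
    boundary marked as '#'. Learning is linear in the sample size."""
    grammar = set()
    for word in sample:
        padded = "#" + word + "#"
        grammar.update(padded[i:i + 2] for i in range(len(padded) - 1))
    return grammar

def generates(grammar, word):
    """A word is grammatical iff every bigram it contains (boundaries
    included) appears in the learned grammar."""
    padded = "#" + word + "#"
    return all(padded[i:i + 2] in grammar for i in range(len(padded) - 1))
```

The same extension-learning scheme works for any representation whose "factors" can be enumerated, which is one way to read the abstract's point about representation-agnostic learners.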
EN
Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman and Malouf (2013) argue that there is one dimension on which languages do not differ widely: predictive structure. Predictive structure in a paradigm describes the extent to which forms predict each other, called i-complexity. Ackerman and Malouf (2013) show that although languages differ according to measures of surface paradigm complexity, called e-complexity, they tend to have low i-complexity. They conclude that morphological paradigms have evolved under a pressure for low i-complexity. Here, we evaluate the hypothesis that language learners are more sensitive to i-complexity than e-complexity by testing how well paradigms which differ on only these dimensions are learned. This could result in the typological findings Ackerman and Malouf (2013) report if even paradigms with very high e-complexity are relatively easy to learn, so long as they have low i-complexity. First, we summarize recent work by Johnson et al. (2020) suggesting that both neural networks and human learners may actually be more sensitive to e-complexity than i-complexity. Then we build on this work, reporting a series of experiments which confirm that, across a range of paradigms that vary in either e- or i-complexity, neural networks (LSTMs) are sensitive to both, but show a larger effect of e-complexity (and of other measures associated with the size and diversity of forms). In human learners, we fail to find any effect of i-complexity on learning at all. Finally, we analyse a large number of randomly generated paradigms and show that e- and i-complexity are negatively correlated: paradigms with high e-complexity necessarily show low i-complexity.
We discuss what these findings might mean for Ackerman and Malouf's hypothesis, as well as the role of ease of learning versus generalization to novel forms in the evolution of paradigms.
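The i-complexity measure can be made concrete on toy paradigms. Below is a minimal sketch (our own illustration, not the authors' code) of i-complexity as the average conditional entropy between paradigm cells, following the general idea in Ackerman and Malouf (2013); inflection classes are assumed equiprobable:

```python
import math
from collections import Counter, defaultdict

def cond_entropy(pairs):
    """H(Y|X) in bits for a list of (x, y) observations."""
    by_x = defaultdict(list)
    for x, y in pairs:
        by_x[x].append(y)
    n = len(pairs)
    h = 0.0
    for ys in by_x.values():
        px = len(ys) / n
        counts = Counter(ys)
        h += px * -sum((c / len(ys)) * math.log2(c / len(ys))
                       for c in counts.values())
    return h

def i_complexity(paradigms):
    """Average conditional entropy over ordered pairs of distinct cells.
    `paradigms` maps inflection class -> tuple of exponents, one per cell.
    0 means every cell fully predicts every other cell."""
    cells = range(len(next(iter(paradigms.values()))))
    rows = list(paradigms.values())
    hs = [cond_entropy([(row[i], row[j]) for row in rows])
          for i in cells for j in cells if i != j]
    return sum(hs) / len(hs)
```

E-complexity, by contrast, is read off the surface inventory (e.g. the number of distinct exponents and classes), which is why the two measures can be varied independently in artificial-paradigm experiments.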
9
Coverability, Termination, and Finiteness in Recursive Petri Nets
EN
In the early 2000s, Recursive Petri nets were introduced to model distributed planning in multi-agent systems for which counters and recursion were necessary. Although Recursive Petri nets strictly extend Petri nets and context-free grammars, most of the usual problems (reachability, coverability, finiteness, boundedness and termination) were known to be solvable by non-primitive recursive algorithms. For almost all other extended Petri net models containing a stack, the complexity of coverability and termination is unknown or strictly larger than EXPSPACE. In contrast, we establish here that for Recursive Petri nets the coverability, termination, boundedness and finiteness problems are EXPSPACE-complete, as for Petri nets. From an expressiveness point of view, we show that the coverability languages of Recursive Petri nets strictly include the union of the coverability languages of Petri nets and the context-free languages. Thus we get a more powerful model than Petri nets for free.
EN
The state detection problem and the fault diagnosis/prediction problem are fundamental topics in many areas. In this paper, we consider discrete-event systems (DESs) modeled by finite-state automata (FSAs). There are plenty of results on decentralized versions of the latter problem, but almost none on a decentralized version of the former. We propose a decentralized version of strong detectability called co-detectability: if a system satisfies this property, then for each generated infinite-length event sequence, in at least one location the current and subsequent states can be determined from the observations in that location after a common observation time delay. We prove that verifying co-detectability of deterministic FSAs is coNP-hard. Moreover, we use a unified concurrent-composition method to give PSPACE verification algorithms for co-detectability, co-diagnosability, and co-predictability of FSAs, without any assumption on, or modification of, the FSAs under consideration; co-diagnosability was first studied by Debouk, Lafortune and Teneketzis (2000), and co-predictability by Kumar and Takai (2010). The unified method shows that verifying co-detectability raises more technical difficulties than verifying the other two properties, because co-detectability counts generated outputs, whereas the other two properties count only occurrences of events: for example, when one output was generated, any number of unobservable events could have occurred. PSPACE-hardness of verifying co-diagnosability is already known in the literature; in this paper, we prove PSPACE-hardness of verifying co-predictability.
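The current-state estimation underlying detectability can be sketched with a small observer over one location's view of the system. This is our own simplified illustration, not the paper's construction: every event in the transition relation is assumed observable in the location, so no unobservable-reachability closure step is shown.

```python
def observer_step(states, transitions, obs):
    """One step of current-state estimation: given the set of states the
    system may be in, a labelled transition relation mapping
    (state, label) -> set of successor states, and one observed label,
    return the updated estimate."""
    nxt = set()
    for q in states:
        nxt |= transitions.get((q, obs), set())
    return nxt

def estimate(initial, transitions, observation):
    """Run the observer over a whole observation sequence. Detectability
    asks whether this estimate eventually stays a singleton."""
    states = set(initial)
    for obs in observation:
        states = observer_step(states, transitions, obs)
    return states

# Example: a hypothetical 4-state automaton observed through labels 'a', 'b'.
T = {(0, "a"): {1, 2}, (1, "b"): {3}, (2, "b"): {3}}
```

The exponential blow-up of this subset construction is one intuition for why the decentralized verification problems land in PSPACE.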
EN
Complexity and diversity ("mixed-use" in today's terms) of urban space are a value that decides its quality, especially in downtown and central areas. Only where complexity, complementarity and diversity are established in urban space can effects of synergy and symbiosis be obtained. Successful urban design depends on a full and verifiable quantification and valorization of complexity and diversity, the preconditions for symbiosis and for a synergy effect. A graphic record of spatial relations using graph notation, typical of urban design work, seems promising for objectivizing these effects.
EN
The material of a part or component has a decisive impact on the complexity and laboriousness of the assembly process: solid, fragile and flexible parts are each handled differently. This paper classifies some of the most common types of materials parts are made of and analyses their impact on the assembly process. In mechanical engineering, solid parts are used most often, since they do not require special measures in known assembly orientation and handling techniques. However, solid materials pose challenges as well, mainly due to their many specific shapes, which create particular problems in gripping and especially in orientation during assembly. In this regard, the paper also addresses one of the most important and most assembly-troublesome properties of solid parts: the degree of symmetry. The degree of symmetry of a solid part has a direct impact on the complexity and laboriousness of its orientation in assembly. The last part of the paper focuses on the theoretical basis for calculating complexity and laboriousness in the assembly of parts.
13
An efficient algorithm for 2-dimensional pattern matching problem
EN
Pattern matching is the area of computer science which deals with the security and analysis of data. This work proposes two 2D pattern matching algorithms for two different input domains. The first algorithm handles patterns containing only two symbols, the binary symbols 0 and 1. The second handles patterns containing decimal digits, i.e. the symbols 0 to 9. The proposed algorithms convert the given pattern into an equivalent binary or decimal number, find the cofactors of the same dimension in the text and convert these cofactors into numbers; if a cofactor's number matches, a match of the pattern is indicated. The algorithm is further enhanced for decimal numbers: each row of the pattern is converted to its decimal equivalent and then reduced modulo a suitable prime number to a value smaller than that prime. The complexity of the proposed algorithm is very low compared with traditional algorithms.
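The row-to-number idea with a modulo prime can be sketched as follows. This is our reconstruction of the general Rabin-Karp-style approach the abstract describes, not the authors' exact algorithm; a rolling hash would remove the per-window rehashing and lower the cost further.

```python
def match_2d(text, pattern, prime=1_000_000_007):
    """Find all occurrences of a 2D decimal-digit pattern in a 2D digit
    grid (lists of equal-length digit strings). Each row window is hashed
    to a number modulo a prime; candidate positions are then verified to
    rule out modular collisions."""
    R, C = len(text), len(text[0])
    r, c = len(pattern), len(pattern[0])

    def row_hash(row, start):
        h = 0
        for k in range(c):
            h = (h * 10 + int(row[start + k])) % prime
        return h

    pat_hashes = [row_hash(row, 0) for row in pattern]
    hits = []
    for i in range(R - r + 1):
        for j in range(C - c + 1):
            if all(row_hash(text[i + d], j) == pat_hashes[d] for d in range(r)):
                # Verify the actual substrings to guard against collisions.
                if all(text[i + d][j:j + c] == pattern[d] for d in range(r)):
                    hits.append((i, j))
    return hits
```

Comparing one number per row window, instead of comparing the window character by character, is exactly where the claimed speed-up over naive 2D matching comes from.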
EN
Awareness of the growing importance of complexity in creating a new type of modern enterprise strategy, and in introducing changes to planning, control and organizational structures, motivated this study of the relationships between the complexity of a modern enterprise and its flexibility in the industrial automation sector, and of the cognitive gap concerning the impact of poor complexity management on company flexibility. The main objective of the research is to check whether there is a significant relationship between the complexity of a business and its flexibility in the industrial automation sector. The relationship between the two quantities, complexity and flexibility, was quantified using Multidimensional Correspondence Analysis (MCA) and perceptual maps. The study indicated that the roles of flexibility and complexity in enterprise management are growing, yet knowledge of these issues is highly insufficient. The research found that the obstacles which hamper striking a balance between flexibility and complexity exert, in their advanced stages, a devastating impact on the quality of process management. Reducing flexibility at its higher levels creates a context in which market risk is heightened. Companies characterised by improper flexibility management bear higher workforce costs, and their decision-making processes last longer. Methodical and systematized study of flexibility and complexity will decrease the destructive influence of the interaction between these two categories.
EN
Yoga is known as a type of exercise that combines physical, mental and spiritual aspects. There has not been much research on postural control in various yoga poses. The aim of this study was to examine CoP regularity in both yoga instructors and novices during the performance of four yoga poses, and to verify the sensitivity of linear and nonlinear methods for assessing postural stability. Methods: The dynamic characteristics of CoP fluctuations were examined using linear and nonlinear methods in a group of 22 yoga instructors (Y) and 18 age-matched non-practitioners of yoga (NY). The task involved maintaining balance for 20 seconds while performing four yoga poses. Results: Conventional analysis of CoP trajectories showed that NY and Y exhibited similar control of postural sway, as indicated by similar CoP path-length and area values in both groups. These results suggest that the special balance training received by Y may not have an impact under less challenging balance conditions, such as the poses used in this experiment. Interestingly, nonlinear dynamical analysis showed that Y exhibited less CoP regularity and a more complex signal than NY, as evidenced by higher values of sample entropy and fractal dimension. Conclusions: The results highlight the added value of nonlinear dynamical analysis of CoP trajectories for gaining further insight into the mechanisms involved in posture control.
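Sample entropy, one of the nonlinear measures used, can be sketched as follows. This is a minimal reference implementation of the usual definition; the parameter defaults m = 2 and r = 0.2 times the signal's standard deviation are common conventions, not necessarily those of the study.

```python
import math

def sample_entropy(signal, m=2, r=None):
    """Sample entropy of a 1D signal: -ln(A/B), where B counts pairs of
    length-m templates whose pointwise distance stays within tolerance r,
    and A counts the same for length m+1. Lower values indicate a more
    regular, self-similar signal."""
    if r is None:
        mean = sum(signal) / len(signal)
        sd = math.sqrt(sum((x - mean) ** 2 for x in signal) / len(signal))
        r = 0.2 * sd

    def count(mm):
        n = len(signal) - mm
        total = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(signal[i + k] - signal[j + k])
                       for k in range(mm)) <= r:
                    total += 1
        return total

    b, a = count(m), count(m + 1)
    return -math.log(a / b)
```

A perfectly periodic CoP-like signal scores near zero, while a more complex sway signal of the kind reported for the instructors scores higher.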
EN
In this work, we consider two families of incidence problems, C1 and C2, which are related to real numbers and countable subsets of the real line. Instances of problems of C1 are as follows: given a real number x, pick randomly a countable set of reals A, hoping that x ∈ A. Instances of problems of C2 are as follows: given a countable set of reals A, pick randomly a real number x, hoping that x ∉ A. One could arguably defend that, at least intuitively, problems of C2 are easier to solve than problems of C1. After some suitable formalization, we prove (within ZFC) that, on the one hand, problems of C2 are indeed at least as easy to solve as problems of C1. On the other hand, the statement "Problems of C1 have exactly the same complexity as problems of C2" is shown to be equivalent to the Continuum Hypothesis.
17
Zagadnienie złożoności w teorii architektury końca XX wieku
EN
Postmodernism established the term 'complexity' as crucial for twentieth-century theory of architecture. Interdisciplinary complexity sciences developed independently over the same half century. The article traces the relationships between the two fields, highlighting, among other things, their common genesis (Herbert A. Simon's theory of complex systems, drawn on by Robert Venturi and Christopher Alexander) and theorists drawing heavily on the complexity sciences (e.g. Lucien Kroll and Nikos Salingaros).
EN
Let G be a graph with vertex set V(G), minimum degree δ(G), and [formula]. Given a nonempty set M ⊆ V(G), a vertex v of G is said to be k-controlled by M if [formula], where δM(v) denotes the number of neighbors of v in M. The set M is called an open k-monopoly for G if it k-controls every vertex v of G. In this short note we prove that the problem of computing the minimum cardinality of an open k-monopoly in a graph, for a negative integer k, is NP-complete even when restricted to chordal graphs.
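A sketch of the definition and a brute-force solver, assuming the standard k-control condition δM(v) ≥ δ(v)/2 + k from the open k-monopoly literature (an assumption on our part, since the abstract's formulas are abbreviated). The exhaustive search is exponential, consistent with the NP-completeness result; it is only meant for toy graphs.

```python
from itertools import combinations

def is_open_k_monopoly(adj, M, k):
    """Check whether M is an open k-monopoly of the graph given as an
    adjacency dict {vertex: set of neighbours}: every vertex v must have
    at least deg(v)/2 + k neighbours inside M."""
    return all(len(adj[v] & M) >= len(adj[v]) / 2 + k for v in adj)

def min_open_k_monopoly(adj, k):
    """Smallest open k-monopoly by brute force, or None if none exists."""
    verts = list(adj)
    for size in range(1, len(verts) + 1):
        for cand in combinations(verts, size):
            if is_open_k_monopoly(adj, set(cand), k):
                return set(cand)
    return None
```

On the 4-cycle, for instance, k = 1 forces the whole vertex set, while k = 0 is satisfied by a single edge's endpoints.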
EN
Maintenance of process plants requires the application of good maintenance practice due to their great complexity. From a plant maintenance point of view, the most significant activity is the turnaround: a project task with a long planning period and a very short execution period, which makes it one of the most complex maintenance projects in general. This kind of maintenance is based on multidisciplinarity, which has to be implemented through a quality management system at all levels of maintenance management. This paper defines the most significant factors determining the quality management process of turnaround projects and its efficiency. The relation is observed through the moderating influence of complexity on the efficiency of process management in the turnaround project. The empirical research was based on a survey of turnaround project participants in five refineries in Croatia, Italy, Slovakia and Hungary. The target relations between the research variables were tested using logistic regression. The results confirm the significance of complexity as a variable that contributes substantially to project performance through its moderating influence on project success, as well as the influence of efficient management on the key results of a plant turnaround project. Besides theoretical contributions, the practical implications of this study mainly concern the management of industrial plant maintenance projects.
EN
This paper presents a novel low-complexity soft demapping algorithm for two-dimensional non-uniformly spaced constellations (2D-NUCs) and massive-order one-dimensional NUCs (1D-NUCs). NUCs have been adopted in a wide range of new broadcasting systems, such as DVB-NGH, ATSC 3.0 and NGB-W, to approach the Shannon limit more closely. However, soft demapping complexity is extreme due to the substantial number of distance calculations. In the proposed scheme, the demapping process is classified into four cases based on the quadrant of the received symbol. To deal with the complexity problem, four groups of reduced subsets, in terms of the quadrant for each bit, are calculated and stored in advance. Analysis and simulation show that the proposed demapper introduces only a small penalty, under 0.02 dB, with respect to the Max-Log-MAP demapper, while achieving a significant complexity reduction ranging from 68.75% to 88.54%.
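Max-Log-MAP demapping, the baseline the proposed scheme approximates, can be sketched as follows. This is a generic illustration with a toy Gray-labelled QPSK constellation of our own choosing, not the paper's NUC sets; the paper's speed-up comes from restricting each per-bit minimum to a precomputed quadrant subset instead of the full constellation.

```python
def maxlog_llrs(y, constellation, labels, noise_var):
    """Max-Log-MAP soft demapping: for each bit position, the LLR is the
    difference of squared distances from the received symbol y to the
    nearest constellation point whose label has that bit equal to 1 vs 0.
    `constellation` is a list of complex points; `labels` the matching
    bit tuples."""
    nbits = len(labels[0])
    llrs = []
    for b in range(nbits):
        d0 = min(abs(y - x) ** 2
                 for x, lab in zip(constellation, labels) if lab[b] == 0)
        d1 = min(abs(y - x) ** 2
                 for x, lab in zip(constellation, labels) if lab[b] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

# Toy Gray-labelled QPSK (illustrative, not from the paper).
QPSK = [1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]
QPSK_LABELS = [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Each of the two `min` scans above runs over all points whose label has the given bit value; shrinking those candidate sets per quadrant is exactly the complexity lever the abstract describes.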