Search results
Searched for keyword: software testing
Results found: 28
EN
In an organization applying agile software development, where software life cycles are very short (e.g., two weeks), changes to the software are very frequent. Resources are usually scarce: power is expensive, test lines are constantly occupied, and hardware parts must be booked solely for regression testing. From this perspective, regression testing may introduce a lot of unnecessary overhead. By comparing statistical methods with unsupervised machine learning methods, we discovered that, due to the uniform nature of code changes, one can easily achieve 90% bug prediction accuracy while reducing the original testing queue by 25%.
PL
In an organization operating with an agile approach to software development, where software life cycles are very short (e.g., two weeks), changes to the software are very frequent. Resources are usually limited: power is expensive, test lines are constantly occupied, and hardware parts have to be reserved solely for regression testing. From this perspective, regression testing can introduce a great deal of unnecessary overhead. By comparing statistical methods and unsupervised machine learning methods, we found that, thanks to the uniform nature of code changes, 90% bug prediction accuracy can easily be achieved while reducing the original test queue by 25%.
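The entry above gives no implementation details, but the general idea of grouping similar code changes and trimming the regression queue can be sketched in a few lines; the following Python illustration with scikit-learn uses assumed change metrics and an assumed cluster count, not the paper's setup:

# Illustrative sketch: cluster code changes by simple change metrics and keep
# one representative change per cluster for full regression testing.
# Feature choice and cluster count are assumptions, not the paper's data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row describes one pending change: [files touched, lines added, lines deleted]
changes = np.array([
    [1, 10, 2], [2, 15, 5], [1, 8, 1],
    [7, 120, 40], [6, 90, 35], [8, 150, 60],
    [3, 30, 10], [2, 25, 12],
])

X = StandardScaler().fit_transform(changes)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Run the full regression suite only for one representative change per cluster;
# the remaining changes get a reduced (smoke) suite.
representatives = {label: idx for idx, label in enumerate(labels)}
print("Full regression for changes:", sorted(representatives.values()))
print("Reduced suite for the rest:", [i for i in range(len(changes))
                                      if i not in representatives.values()])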
EN
This paper describes a modification of Mike Cohn's test pyramid adapted to testing in distributed information processing systems, which expands the possibilities of testing and exploits the features of such systems. Recommendations for further use of the mechanisms of the modified pyramid are developed. The method of testing the user interface software of the nodes of a distributed system is improved; it differs from existing techniques by including a mechanism that simulates the interface's operation, which allows individual components of the system interface to be tested. It is shown that, compared with end-to-end testing of user interfaces, the mechanisms of user interface test simulators reduce the time spent on testing any UI service; the time is reduced by decreasing the number of simultaneously running user interface services. With a small number of nodes, end-to-end testing of user interfaces is faster than simulation testing of the same interfaces. As the number of nodes increases, the time required to test the services of a distributed system with simulation tests becomes shorter than the time required to test the same system with the traditional method.
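A minimal sketch of the simulator idea described above: the user interface component is tested against a local simulator of a remote node's service instead of a fully deployed distributed system. The class and function names (NodeUISimulator, render_status_panel) are hypothetical, not taken from the paper:

# Illustrative sketch: test a UI component against a local simulator of a remote
# node's status service instead of the full end-to-end deployment.
class NodeUISimulator:
    """Simulates the responses of a remote node's status service."""
    def __init__(self, node_id: str, healthy: bool = True):
        self.node_id = node_id
        self.healthy = healthy

    def get_status(self) -> dict:
        return {"node": self.node_id, "state": "UP" if self.healthy else "DOWN"}


def render_status_panel(service) -> str:
    """The UI component under test: renders one node's status line."""
    status = service.get_status()
    return f"[{status['node']}] {status['state']}"


def test_panel_shows_down_state():
    # No real cluster needed: the simulator stands in for the remote node.
    sim = NodeUISimulator("node-7", healthy=False)
    assert render_status_panel(sim) == "[node-7] DOWN"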
EN
Context: Automated acceptance testing validates a product’s functionality from the customer’s perspective. Text-based automated acceptance tests (AATs) have gained popularity because they link requirements and testing. Objective: To propose and evaluate a cost-effective systematic reuse process for automated acceptance tests. Method: A systematic approach, method engineering, is used to construct a systematic reuse process for automated acceptance tests. Techniques to support searching, assessing, and adapting the reusable tests are proposed and evaluated. The constructed process is evaluated using (i) qualitative feedback from software practitioners and (ii) a demonstration of the process in an industry setting. The process was evaluated with respect to performance expectancy, effort expectancy, and facilitating conditions. Results: The process consists of eleven activities that support development for reuse, development with reuse, and assessment of the costs and benefits of reuse. During the evaluation, practitioners found the process a useful method to support reuse. In the industrial demonstration, it was noted that the activities in the solution, i.e., the searching, assessment and adaptation parts, helped develop an automated acceptance test with reuse faster than creating a test from scratch. Conclusion: The process is found to be useful and relevant to the industry during the preliminary investigation.
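A minimal sketch of "development with reuse" for text-based acceptance tests: an existing test body is generalized with parameters so that new scenarios reuse it instead of being written from scratch. The login example and the authenticate function are assumed for illustration only:

# Illustrative sketch of acceptance-test reuse: one reusable test body,
# new scenarios added as data rows instead of new tests written from scratch.
import pytest


def authenticate(username: str, password: str) -> bool:
    """Stand-in for the system under test."""
    return username == "alice" and password == "s3cret"


@pytest.mark.parametrize("username,password,expected", [
    ("alice", "s3cret", True),     # original scenario
    ("alice", "wrong", False),     # reused for a new negative scenario
    ("mallory", "s3cret", False),  # reused for another actor
])
def test_login_acceptance(username, password, expected):
    assert authenticate(username, password) is expected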
EN
Purpose: The main purpose of the research is to examine the suitability of exploratory tests in the software testing process. Design/methodology/approach: An experiment carried out for the sake of this study consisted of two parts. First, a test was performed, and in the second part a survey was conducted, which allowed exploratory testing to be compared with testing based on test cases. Findings: The results of the tests indicated a slightly lower effectiveness of the exploratory approach, which may have been caused by the conditions of the experiment: the choice of the tested software, the short duration of the test sessions, and participants lacking knowledge about the investigated software and experience in performing exploratory tests. Originality/value: Despite the weaker results obtained, the exploratory tests proved useful, as evidenced by the detection of distinctive errors not found during tests based on test cases. In the survey, 90% of respondents confirmed using a formalized test approach based on test cases, while just over a half (57%) indicated having experience in conducting exploratory tests. Testers considered both approaches useful, while indicating a greater need for conducting formalized tests based on test cases. The results of the research made it possible to identify the strengths and shortcomings of the exploratory approach to software testing.
5
Content available: Finding Minimum Locating Arrays Using a CSP Solver
EN
Combinatorial interaction testing is an efficient software testing strategy. If all interactions among test parameters or factors needed to be covered, the size of the required test suite would be prohibitively large. In contrast, this strategy only requires covering t-wise interactions, where t is typically very small. As a result, it becomes possible to significantly reduce the test suite size. Locating arrays aim to enhance the ability of combinatorial interaction testing. In particular, (1̄, t)-locating arrays can not only exercise all t-way interactions but also identify, if any, which of the interactions causes a failure. In spite of this useful property, there is only limited research either on how to generate locating arrays or on their minimum sizes. In this paper, we propose an approach to generating minimum locating arrays. In the approach, the problem of finding a locating array consisting of N tests is represented as a Constraint Satisfaction Problem (CSP) instance, which is in turn solved by a modern CSP solver. The results of using the proposed approach reveal many (1̄, t)-locating arrays that are the smallest known so far. In addition, some of these arrays are proved to be minimum.
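The paper encodes the search for locating arrays as a CSP; that encoding is not reproduced here. As a smaller illustration, the following brute-force checker tests whether a given array is (1̄, t)-locating under a commonly used definition: every t-way interaction is covered, and no two distinct interactions are covered by exactly the same set of rows. The example array is made up:

# Illustrative brute-force check of the (1-bar, t)-locating property.
from itertools import combinations, product


def is_locating(array, levels, t):
    """Enumerate all t-way interactions and the sets of rows covering them."""
    rows_of = {}
    for factors in combinations(range(len(levels)), t):
        for values in product(*(range(levels[f]) for f in factors)):
            interaction = tuple(zip(factors, values))
            rows_of[interaction] = frozenset(
                i for i, row in enumerate(array)
                if all(row[f] == v for f, v in interaction))
    coverings = list(rows_of.values())
    # every interaction covered at least once, and all covering row sets distinct
    return all(coverings) and len(set(coverings)) == len(coverings)


# Three binary factors, five tests (rows).
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
print(is_locating(tests, levels=[2, 2, 2], t=1))   # True for this small example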
EN
Test case prioritization (TCP) has been widely utilized to arrange the execution order of test cases, which contributes to improving the efficiency and resource allocation of software regression testing. Traditional coverage-based TCP techniques, whether statement-level, method/function-level or class-level, leverage only program code coverage to prioritize test cases, without considering the probable distribution of defects. However, software defect data tends to be imbalanced, following the Pareto principle. Intuitively, the more vulnerable the code covered by a test case is, the higher its priority should be. Besides, statement-level coverage is a more fine-grained method than function-level or class-level coverage, and allows test strategies to be formulated more accurately. Therefore, in this paper we present a test case prioritization approach based on statement-level software defect prediction to address the limitations of current coverage-based techniques. Statement metrics are extracted from the source code and data pre-processing is applied to train the defect predictor. Then the defect detection rate of test cases is calculated by combining the prioritization strategy with the prediction results. Finally, the prioritization performance is evaluated in terms of the average percentage of faults detected (APFD) on four open-source datasets. We comprehensively compare the performance of the proposed method under different prioritization strategies and predictors. The experimental results show that it is a promising technique to improve the prevailing coverage-based TCP methods by incorporating statement-level defect-proneness. Moreover, it is also concluded that the performance of the additional strategy is better than that of the max and total strategies, and that the choice of the defect predictor affects the efficiency of the strategy.
PL
Test case prioritization (TCP) is widely used to determine the execution order of test cases, which helps improve the efficiency and resource allocation of software regression testing. Traditional coverage-based TCP techniques, at the statement, method/function and class level, use program code coverage alone to prioritize test cases, without taking the likely distribution of defects into account. However, software defect data tends to be imbalanced, following the Pareto principle. Intuitively, the more vulnerable the code covered by a test case, the higher its priority. Moreover, statement-level coverage is a more fine-grained method than function-level or class-level coverage and allows test strategies to be formulated more accurately. Therefore, the article presents a test case prioritization approach based on statement-level software defect prediction, which reduces the limitations of current coverage-based techniques. Statement metrics are extracted from the source code and data pre-processing is implemented in order to train the defect predictor. The defect detection rate of test cases is then calculated by combining the prioritization strategy with the prediction results. Finally, prioritization performance is evaluated in terms of the average percentage of faults detected on four open-source datasets. The performance of the proposed method is compared comprehensively under different prioritization strategies and predictors. The experimental results show that it is a promising technique for improving the prevailing coverage-based TCP methods by incorporating statement-level defect-proneness. It is also concluded that the additional strategy performs better than the max and total strategies, and that the choice of defect predictor affects the efficiency of the strategy.
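A minimal sketch of the two ingredients described above: a greedy "additional"-style prioritization in which statements are weighted by predicted defect-proneness, and the APFD metric used for evaluation. The coverage data, weights and fault mapping are invented for illustration:

# Illustrative sketch of defect-weighted "additional" prioritization and APFD.

def prioritize_additional(coverage, weights):
    """Order tests by the extra defect-weighted coverage each one adds."""
    remaining = dict(coverage)
    uncovered = set(weights)
    order = []
    while remaining:
        best = max(remaining, key=lambda t: sum(weights[s]
                                                for s in remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
    return order


def apfd(order, faults_detected_by, n_faults):
    """Average Percentage of Faults Detected for a given test order."""
    n = len(order)
    first_pos = [next(i + 1 for i, t in enumerate(order)
                      if fault in faults_detected_by[t])
                 for fault in range(n_faults)]
    return 1 - sum(first_pos) / (n * n_faults) + 1 / (2 * n)


coverage = {"t1": {1, 2}, "t2": {2, 3, 4}, "t3": {4, 5}}
weights = {1: 0.1, 2: 0.2, 3: 0.9, 4: 0.8, 5: 0.1}   # predicted defect-proneness
faults = {"t1": set(), "t2": {0}, "t3": {1}}          # which faults each test finds
order = prioritize_additional(coverage, weights)
print(order, apfd(order, faults, n_faults=2))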
EN
Software testing is a very broad term that covers a wide variety of topics, ranging from technical ones, such as testing techniques and measurements, to more organizational ones, such as planning and management of testing. The ability to plan, design and create efficient tests is the most critical skill for any good tester. The paper presents Kungfu Testing, a testing approach based on advice and best practices advocated by experts in the field of testing. The method is intended to provide step-by-step instructions for managing testing activities in a project environment. The presented approach was designed to work with and complement agile development methodologies due to their widespread use and popularity.
EN
Description: Software testing benefits from the usage of Knowledge Management (KM) methods and principles. Thus, there is a need to adopt KM in the core software testing processes and attain the benefits it provides in terms of cost, quality, etc. Aim: To investigate the usage and implementation of KM for software testing. The major objectives are: (1) to identify the software testing aspects that receive more attention when applying KM; (2) to analyse multiple software testing techniques, i.e. test design, test execution and test result analysis, and highlight KM involvement in these; (3) to gather the challenges faced by industry due to the lack of KM initiatives in software testing. Method: A Systematic Literature Review (SLR) was conducted following the guidelines for snowballing reviews by Wohlin. The identified studies were analysed with respect to their rigor and relevance to assess the quality of the results. Results: The initial search yielded 4832 studies. From these, 35 peer-reviewed papers were chosen, of which 31 are primary and 4 are secondary studies. The literature review results indicated nine testing aspects being in focus when applying KM within various adaptation contexts, as well as some benefits of KM application. Several challenges were identified, e.g., improper selection and application of better-suited techniques, a low reuse rate of software testing knowledge, barriers in software testing knowledge transfer, and no possibility of quickly achieving an optimal distribution of human resources during testing. Conclusions: The study brings supporting evidence that the application of KM in software testing is necessary, e.g., to increase test effectiveness and to select and apply testing techniques. The study outlines the testing aspects and testing techniques that benefit their users.
EN
Mutation testing, a fault-based technique for software testing, is a computationally expensive approach. One of the powerful methods to improve the performance of mutation without reducing effectiveness is to employ parallel processing, where mutants and tests are executed in parallel. This approach reduces the total time needed to accomplish the mutation analysis. This paper proposes three strategies for parallel execution of mutants on multicore machines using the Parallel Computing Toolbox (PCT) with the Matlab Distributed Computing Server. It aims to demonstrate that computationally intensive software testing schemes, such as mutation, can be facilitated by using parallel processing. The experiments were carried out on eight different Simulink models. The results demonstrate the efficiency of the proposed approaches in terms of execution time during the testing process.
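The paper's strategies rely on the Matlab Parallel Computing Toolbox and Simulink models, which are not reproduced here; the sketch below only illustrates the underlying idea of distributing mutant executions across worker processes, using Python's multiprocessing. The toy mutants and tests are assumptions:

# Illustrative sketch of parallel mutant execution with worker processes.
from multiprocessing import Pool

MUTANTS = {
    "original":   lambda a, b: a + b,   # unmutated version, should never be "killed"
    "mutant_sub": lambda a, b: a - b,   # '+' replaced by '-'
    "mutant_one": lambda a, b: a + 1,   # operand replaced by a constant
}

TESTS = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]


def run_mutant(name):
    """Return (mutant, killed?) by running the whole test suite on it."""
    func = MUTANTS[name]
    killed = any(func(*args) != expected for args, expected in TESTS)
    return name, killed


if __name__ == "__main__":
    with Pool(processes=3) as pool:
        for name, killed in pool.map(run_mutant, MUTANTS):
            print(f"{name}: {'killed' if killed else 'survived'}")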
EN
Search-based techniques have been widely applied in the domain of software testing. This Systematic Literature Review aims to present the research carried out in the field of search-based approaches applied particularly to mutation testing. In the course of the literature review, renowned databases were searched for relevant publications in the field, covering studies up to the year 2014. A few studies from 2015-16, gathered by performing a snowball search, have also been included. To review the literature in the field, 43 studies were evaluated, out of which 18 were thoroughly studied and analysed. The results of this SLR show that search-based techniques were applied to mutation testing primarily for two purposes: either for mutant optimisation or for test case optimisation. As future directions, this SLR suggests the application of search-based techniques to other issues related to mutation testing, such as the equivalent mutant problem, the generation of non-trivial mutants, multi-objective test data generation and non-functional testing.
EN
Background. Common approaches to software verification include static testing techniques, such as code reading, and dynamic testing techniques, such as black-box and white-box testing. Objective. With the aim of gaining a better understanding of software testing techniques, a controlled experiment replication and a synthesis of previous experiments examining the efficiency of code reading, black-box and white-box testing techniques were conducted. Method. The replication reported here is composed of four experiments in which instrumented programs were used. Participants randomly applied one of the techniques to one of the instrumented programs. The outcomes were synthesized with seven experiments using the method of network meta-analysis (NMA). Results. No significant differences in the efficiency of the techniques were observed. However, it was discovered that the instrumented programs had a significant effect on efficiency. The NMA results suggest that the black-box and white-box techniques behave alike, and that the efficiency of code reading seems to be sensitive to other factors. Conclusion. Taking these findings into account, the authors suggest that prior to carrying out software verification activities, software engineers should have a clear understanding of the software product to be verified; they can apply either black-box or white-box testing techniques, as both yield similar defect detection rates.
12
Content available: Process of finding defects in software testing
EN
Software testing is the most significant stage of the software development life cycle. Projects under test go through different stages, such as test analysis, test planning, test case design, the test case review process, test execution, the requirement traceability matrix (RTM), defect tracking (bug logging and tracking), test execution reporting and closure. In software terms, a defect occurs whenever the actual results do not match the expected results; a defect is generally known as a bug. The paper discusses the complete life cycle of a bug, from the stage at which it is found, through fixing and re-testing, to closure, and deals with how to avoid bugs. To avoid bugs, the test engineer should prepare a bug report template consisting of various steps.
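A minimal sketch of such a bug report record and the life cycle it passes through (found, fixed, re-tested, closed); all field names and states are illustrative assumptions, not a standard template:

# Illustrative sketch of a bug report record and its life cycle.
from dataclasses import dataclass, field
from enum import Enum


class BugState(Enum):
    NEW = "new"
    FIXED = "fixed"
    RETESTED = "re-tested"
    CLOSED = "closed"


@dataclass
class BugReport:
    summary: str
    steps_to_reproduce: list
    expected_result: str
    actual_result: str
    severity: str = "medium"
    state: BugState = BugState.NEW
    history: list = field(default_factory=list)

    def transition(self, new_state: BugState):
        self.history.append((self.state, new_state))
        self.state = new_state


bug = BugReport(
    summary="Login button unresponsive on second click",
    steps_to_reproduce=["Open login page", "Click 'Login' twice"],
    expected_result="Second click is ignored or a message is shown",
    actual_result="Application freezes",
)
bug.transition(BugState.FIXED)
bug.transition(BugState.RETESTED)
bug.transition(BugState.CLOSED)
print(bug.state, bug.history)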
13
Content available: Mutation Churn Model
EN
Mutation testing is considered one of the most effective quality improvement techniques, as it assesses the strength of the actual test suite. If no test is able to kill a given mutant, the tests are not strong enough and an additional test that is able to kill this mutant needs to be written. However, mutation testing is very time consuming. In this paper we investigate whether it is possible to reduce the scope of the mutation analysis by running it only on the new or changed part of the code. Using data from real open-source projects, we analyze whether there is a relation between mutation scope reduction and the effectiveness of the mutation analysis.
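A minimal sketch of the scope reduction idea: given the lines touched by a change, only mutants located on those lines are kept for analysis. The mutant records and diff hunks below are invented; a real tool would derive the changed lines from the version control diff:

# Illustrative sketch of reducing mutation scope to changed code.

def changed_lines_from_diff(hunks):
    """hunks: list of (start_line, length) of added or modified regions."""
    lines = set()
    for start, length in hunks:
        lines.update(range(start, start + length))
    return lines


def select_mutants(mutants, changed):
    """Keep only mutants whose location falls inside the changed lines."""
    return [m for m in mutants if m["line"] in changed]


mutants = [
    {"id": 1, "file": "calc.py", "line": 10, "op": "AOR"},
    {"id": 2, "file": "calc.py", "line": 42, "op": "ROR"},
    {"id": 3, "file": "calc.py", "line": 43, "op": "COR"},
]
changed = changed_lines_from_diff([(40, 5)])   # lines 40-44 were modified
print(select_mutants(mutants, changed))        # mutants 2 and 3 only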
EN
The aim of the article is to present a mathematical definition of the object model known in computer science as TreeList and to show how this model can be applied to design an evolutionary algorithm whose purpose is to generate structures based on this object. The first chapter introduces the reader to the problem of presenting data using the TreeList object. The second chapter describes the problem of testing data structures based on TreeList. The third chapter presents the mathematical model of the TreeList object and the parameters used to determine the utility of structures created with this model and used in the evolutionary strategy that generates these structures for testing purposes. The last chapter provides a brief summary and plans for future research related to the algorithm presented in the article.
PL
The aim of the article is to present the definition of a mathematical model of the object known in computer science as TreeList, and to use this model to design an evolutionary algorithm whose task is to generate structures based on the TreeList object. The first chapter introduces the reader to the problem of presenting data with the TreeList object. The second chapter describes the problem of testing data structures based on TreeList. The third chapter presents the mathematical model of the TreeList object and the measures that can be used to determine the utility of structures created with these objects and in the evolutionary strategy that generates such structures for testing purposes. The last chapter contains a brief summary and plans for future research related to the algorithm presented in the article.
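The article's TreeList model and utility measures are not reproduced here; the following (1+1)-style evolutionary sketch only illustrates the general mechanism of generating nested structures for testing, with a placeholder fitness (distance from a target node count) standing in for the article's measures:

# Illustrative (1+1) evolutionary sketch that grows random tree structures.
import random

random.seed(1)


def random_tree(max_children=3, depth=3):
    if depth == 0:
        return []
    return [random_tree(max_children, depth - 1)
            for _ in range(random.randint(0, max_children))]


def size(tree):
    return 1 + sum(size(child) for child in tree)


def mutate(tree, depth=3):
    clone = [mutate(c, depth - 1) for c in tree]
    if depth > 0 and random.random() < 0.3:
        clone.append(random_tree(depth=depth - 1))   # sometimes add a subtree
    if clone and random.random() < 0.3:
        clone.pop(random.randrange(len(clone)))      # sometimes drop a subtree
    return clone


def evolve(target_size=20, generations=200):
    parent = random_tree()
    for _ in range(generations):
        child = mutate(parent)
        # keep the child if it is at least as close to the target size
        if abs(size(child) - target_size) <= abs(size(parent) - target_size):
            parent = child
    return parent


best = evolve()
print("generated structure size:", size(best))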
EN
The paper discusses two formal models of software testing based on the black-box concept. The first model assumes a non-zero probability of not removing a detected error. The second model additionally assumes a non-zero probability of introducing an additional error, a so-called secondary error. In both cases, systems of Chapman-Kolmogorov differential equations were formulated. Solving them yielded formulas that allow the expected number of errors remaining in the software after testing, as well as the expected duration of the complete software testing process, to be estimated.
16
Content available: Modelling the software testing process
EN
An approach to formal modelling of the program testing process is proposed in the paper. The considerations are based on a program reliability-growth model constructed for an assumed scheme of the program testing process. In this model, the program under test is characterized by means of a so-called characteristic matrix, and the program testing process is determined by means of a so-called testing strategy. A formula for the mean value of the predicted number of errors encountered during program testing is obtained. This formula can be used if the characteristic matrix and the testing strategy are known. Formulae for evaluating this value when the program characteristic matrix is not known are also proposed in the paper.
PL
The article deals with modelling the program testing process, with particular emphasis on modelling the growth of program reliability during testing. In the considered model, the program under test is characterized by a so-called program characteristic matrix. On the basis of the constructed model, a formula is derived for the expected number of errors whose detection is anticipated as a result of a testing process carried out according to an assumed testing strategy. The obtained formula can be used in practice if the program characteristic matrix is known. For the case when this matrix is not known, a two-sided estimate of this expected value is constructed in the article.
EN
Analysis of software reliability plays an important role in realizing the quality assurance plan during software development. By monitoring changes in the evaluated reliability in relation to quality objectives, it is possible to analyze the current situation with respect to the agreed requirements and to initiate appropriate actions, when needed, to secure fulfilment of the goals. The use of software reliability growth models as the only method of reliability evaluation seems to be an oversimplified approach. Such an approach, based solely on the fault detection history, may in some circumstances be risky and lead to significantly wrong decisions concerning the software validation process. Taking the possible pros and cons into account, the model described in this paper uses a number of additional pieces of information concerning the software being tested and the validation process itself to produce more accurate outcomes from the reliability analysis. The produced outcome gives appropriate feedback to decision makers, taking into account the assumed characteristics of the software development process. An integral part of the presented approach is devoted to the reliability characteristics of a system tested in parallel by several independent teams.
PL
Software reliability analysis is an important part of implementing the quality plan in the software development process. By monitoring changes in the predicted reliability of the software against the assumed quality objectives, the current situation can be analysed and, if necessary, steps can be taken to support the realization of the plan. Using only software reliability growth models, based on the history of fault detection in the software under test, for reliability prediction seems to be an oversimplified approach. In certain circumstances of the software validation process, this approach may carry a large error and lead the decision maker to wrong decisions. Therefore, the proposed model uses a number of additional pieces of information about the software under test and the validation process itself in order to obtain more credible results of the reliability analysis, which at the same time constitute appropriate feedback for the decision maker from the point of view of the assumed realities of the software project. An integral part of the presented approach is the determination of the reliability characteristics of a system tested in parallel by several independent teams.
EN
This article provides a short state-of-the-art review of robotic devices for mobile software testing. It compares all devices known to the author, one of which was built by the author.
EN
Random but visually pleasing shapes are often needed for cognitive experiments and processes. This study describes a heuristic for generating random but nice shapes, which we call placated shapes. These shapes are produced by applying a Gaussian blur to randomly generated polygons. Subsequently, a threshold is applied to transform pixels from different shades of gray to black and white. This transformation produces placated shapes that make area estimation easier. Randomly generated placated shapes are used for testing the accuracy of cognitive processes by pairwise comparisons. They can also be used in many other areas, such as computer games or software testing. Such shapes could also be used for camouflaging heavy army equipment.
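A minimal sketch of the described pipeline, assuming the Pillow and NumPy libraries: draw a random polygon, apply a Gaussian blur, then threshold the gray levels back to black and white. Canvas size, blur radius and threshold are arbitrary choices, not the study's parameters:

# Illustrative sketch: random polygon -> Gaussian blur -> threshold.
import random
import numpy as np
from PIL import Image, ImageDraw, ImageFilter

random.seed(42)
SIZE, BLUR_RADIUS, THRESHOLD = 256, 12, 128

# 1. Random polygon on a white canvas.
img = Image.new("L", (SIZE, SIZE), color=255)
points = [(random.randint(20, SIZE - 20), random.randint(20, SIZE - 20))
          for _ in range(8)]
ImageDraw.Draw(img).polygon(points, fill=0)

# 2. Gaussian blur smooths the jagged outline.
blurred = img.filter(ImageFilter.GaussianBlur(radius=BLUR_RADIUS))

# 3. Threshold back to pure black and white, giving a "placated" shape.
placated = Image.fromarray(
    np.where(np.array(blurred) < THRESHOLD, 0, 255).astype(np.uint8))
placated.save("placated_shape.png")
print("black area (pixels):", int((np.array(placated) == 0).sum()))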
20
Content available: A Testing Environment for Distributed Systems
EN
The article presents the basics of modern software testing theory. Testing automation and the integration of testing into code writing are examined in detail, and the concept of a testing environment for distributed systems is introduced.