Search results
Searched in keywords: reconstruction algorithm
Results found: 8
EN
Background: The aim of the study is to optimise the value of the β parameter used in Q.Clear reconstruction for the imaging of neuroendocrine tumours. The study is divided into two parts: an analysis of phantom data aimed at selecting an appropriate β for small lesions, followed by an assessment of its impact on the quality of patient images. Literature data on the optimal β value are inconclusive; moreover, the suggested values are not the result of a semi-quantitative assessment of the Standardized Uptake Value (SUV) or of proper verification based on, for example, phantom studies with known activity. Results: The obtained results show that increasing β improves image uniformity in the Q.Clear reconstruction algorithm and, consistent with published reports, increases the signal-to-noise ratio of the image. The effect of changing β on the mean SUV and the Contrast Recovery Coefficient (CRC) is greatest for the smallest objects, and the decrease in these parameters is much larger at lower activities (lower count statistics in the PET system). Conclusions: Increasing β has an adverse effect on the semi-quantitative assessment of SUV: as the parameter increases, the SUV and CRC values decrease. In visual assessment, satisfactory image quality is obtained at β = 450. Based on the analysis of SUV and CRC, an appropriate range of β values was selected as 350-450. A retrospective analysis of clinical images of neuroendocrine tumours over this range will be performed in the future, and the impact of the change on the semi-quantitative analysis of pathological lesions will be verified.
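As a concrete illustration of the quantities analysed above, the standard definitions of SUV and the hot-sphere Contrast Recovery Coefficient can be sketched in a few lines. The formulas follow the common NEMA-style conventions; all function names and example numbers below are illustrative, not taken from the study.

```python
def suv(concentration_bq_ml: float, injected_dose_bq: float,
        weight_g: float) -> float:
    """Standardized Uptake Value: measured activity concentration
    normalized by injected dose per unit body weight."""
    return concentration_bq_ml / (injected_dose_bq / weight_g)

def crc_hot(mean_sphere: float, mean_background: float,
            true_ratio: float) -> float:
    """Contrast Recovery Coefficient for a hot sphere:
    measured contrast divided by the true (known-activity) contrast."""
    return (mean_sphere / mean_background - 1.0) / (true_ratio - 1.0)
```

For example, a sphere reconstructed at 3.2 times background when the true activity ratio is 4:1 recovers about 73% of the contrast; lowering β in a reconstruction would typically raise this value for small spheres at the cost of uniformity.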
EN
This paper describes a new practical approach to the reconstruction problem in 3D spiral X-ray tomography. The proposed concept is based on a continuous-to-continuous data model, and the reconstruction problem is formulated as a shift-invariant system. This original reconstruction method takes into consideration the statistical properties of the signals obtained in the 3D geometry of a CT scanner. It belongs to the class of nutating reconstruction methods and is based on the advanced single-slice rebinning (ASSR) methodology. The concept significantly improves the quality of the reconstructed images and reduces the complexity of the reconstruction problem compared with other approaches. Computer simulations confirm that the reconstruction algorithm described here significantly outperforms conventional analytical methods in the quality of the images obtained.
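The core idea of treating reconstruction as a shift-invariant system is easiest to see in textbook parallel-beam filtered back-projection, where the filtering step is a convolution. The sketch below illustrates only that general principle, not the paper's ASSR-based nutating method; all names and sizes are illustrative.

```python
import numpy as np

def ramp_filter(sinogram: np.ndarray) -> np.ndarray:
    """Apply the ramp filter |f| to each projection (rows = angles).
    Multiplication in frequency space = shift-invariant convolution."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def backproject(filtered: np.ndarray, angles: np.ndarray,
                size: int) -> np.ndarray:
    """Smear each filtered projection back across the image grid."""
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    n = filtered.shape[1]
    img = np.zeros((size, size))
    for proj, th in zip(filtered, angles):
        # detector coordinate of every pixel for this view angle
        t = X * np.cos(th) + Y * np.sin(th) + n / 2
        img += np.interp(t.ravel(), np.arange(n), proj).reshape(size, size)
    return img * np.pi / len(angles)
```

Reconstructing a sinogram of a point source with this pair of functions places the peak of the reconstructed image back at the source position, which is the sanity check any such implementation should pass.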
EN
The field of mechanical manufacturing places ever greater demands on machining accuracy, so it is essential to monitor and compensate for the deformation of the structural parts of a heavy-duty machine tool. The deformation of the base is an important factor affecting machining accuracy: the base is statically indeterminate, subjected to complex loads, and its deformation is difficult to reconstruct by traditional methods. A reconstruction algorithm for determining the bending deformation of the base of a heavy-duty machine tool using the inverse Finite Element Method (iFEM) is presented. The base is modelled as a multi-span beam divided into beam elements with the support points as nodes. The order of the deflection polynomial of each element is analysed. From the boundary conditions, the deformation compatibility conditions and the strain data measured by Fiber Bragg Grating (FBG) sensors, the coefficients of each element's deflection polynomial are determined, and a coordinate transformation yields the deflection equation of the base. Both numerical verification and an experiment were carried out: the deflection obtained by the iFEM reconstruction algorithm was compared with the actual deflection measured by laser displacement sensors, verifying the accuracy of the reconstruction algorithm.
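The strain-to-deflection step can be illustrated on a single simply supported span: Euler-Bernoulli beam theory gives the surface strain as ε(x) = -c·w''(x), with c the sensor's distance from the neutral axis, so fitting a polynomial to the measured curvature and integrating twice, with the support conditions w(0) = w(L) = 0 fixing the constants, reconstructs the deflection. This is a minimal single-element sketch, not the paper's multi-span iFEM formulation; all names and values are illustrative.

```python
import numpy as np

def reconstruct_deflection(x_meas, eps, c, L, deg=2):
    """Fit curvature w''(x) = -eps(x)/c with a polynomial, integrate twice,
    and fix the two constants via the supports w(0) = w(L) = 0.
    Returns highest-order-first polynomial coefficients of w(x)."""
    curv = np.polyfit(x_meas, -np.asarray(eps) / c, deg)
    w = np.polyint(np.polyint(curv))          # deflection up to a + b*x
    a = -np.polyval(w, 0.0)                   # enforce w(0) = 0
    b = -(np.polyval(w, L) + a) / L           # enforce w(L) = 0
    w[-1] += a                                # constant term
    w[-2] += b                                # linear term
    return w
```

With exact strains from a known curvature the fit is exact, so the recovered midspan deflection matches the analytical value; with real FBG data the polynomial fit additionally smooths the measurement noise.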
EN
This paper presents software for the comprehensive processing and visualization of 2D and 3D electrical tomography data. The system, named TomoKIS Studio, was developed within the DENIDIA international research project and improved under Polish Ministry of Science and Higher Education project no. 4664/B/T02/2010/38. The software is unique worldwide in that it integrates tomographic data acquisition, numerical FEM modelling and tomographic image reconstruction in a single system, and it can be adapted to specific industrial applications, particularly the monitoring and diagnosis of two-phase flows. The architecture is composed of independent modules whose combination offers calibration, configuration and full-duplex communication with any tomographic acquisition system that has a known, open communication protocol. Other major features are online data acquisition and processing, online and offline linear and nonlinear 2D/3D image reconstruction and visualization, and processing of raw data and tomograms. Another important capability is the construction of 2D/3D ECT sensors using FEM modelling. The software is supported by multi-core GPU technology and parallel computing using Nvidia CUDA.
PL
The authors present a computing environment for the comprehensive processing and visualization of tomographic measurement data. The TomoKIS Studio software was created at the Institute of Applied Computer Science, Lodz University of Technology, within the DENIDIA project and further developed under Polish Ministry of Science and Higher Education project no. 4664/B/T02/2010/38. The software is unique worldwide in that it integrates measurement data acquisition, numerical modelling and the construction of tomographic images, with the possibility of adaptation to various industrial applications, in particular the monitoring and diagnosis of two-phase gas-liquid flows. The application architecture is based on a set of independent modules that provide fully bidirectional communication with, and configuration and calibration of, any electrical tomography device with an open measurement protocol; online acquisition and processing of measurement data; real-time linear and nonlinear 2D and 3D image reconstruction; and visualization of raw measurement data and tomograms. An important element of the system is a finite-element module for the numerical modelling of capacitance sensors, based on the authors' own algorithms for generating FEM meshes of computer models of capacitance sensors. The architecture of the presented system was designed for parallel computation on graphics processors using Nvidia CUDA technology.
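Among the linear reconstruction methods such ECT software typically offers, the simplest is linear back-projection (LBP), which maps normalized capacitance measurements back through the transpose of the sensitivity matrix. The sketch below shows only this generic textbook method with synthetic data; the sensitivity matrix `S` and its dimensions are illustrative, not TomoKIS Studio internals.

```python
import numpy as np

def lbp(S: np.ndarray, c: np.ndarray) -> np.ndarray:
    """Linear back-projection for ECT: S is the sensitivity matrix
    (measurements x pixels), c the normalized capacitance vector.
    Per-pixel normalization makes a uniform measurement map to a
    uniform image."""
    num = S.T @ c
    den = S.T @ np.ones(S.shape[0])
    return num / den
```

LBP is fast (one matrix-vector product, hence easy to run online) but blurry; the nonlinear FEM-based methods mentioned in the abstract trade this speed for accuracy, which is what motivates the GPU acceleration.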
Probabilistic Reconstruction of hv-convex Polyominoes from Noisy Projection Data
EN
In this paper the well-known problem of reconstructing hv-convex polyominoes is considered for a set of noisy data. Unlike the usual approach of binary tomography, this leads to a probabilistic formulation of the reconstruction algorithm, in which different pixels are assigned different probabilities of belonging to the reconstructed image. An iterative algorithm is then applied which, starting from a random choice, leads to an explicit reconstruction matching the noisy data.
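The probabilistic viewpoint can be sketched in a toy form: each pixel carries a probability of belonging to the image, and the probabilities are iteratively rescaled so the expected row and column sums approach the (possibly noisy) projections, after which thresholding yields a binary image. This sketch omits the hv-convexity constraints entirely and is not the paper's algorithm; it only illustrates the probabilistic projection-matching step.

```python
import numpy as np

def fit_probabilities(row_sums, col_sums, iters=200):
    """Iteratively scale a pixel-probability matrix toward the target
    row/column projections (IPF-style), clipping to [0, 1] so entries
    remain valid probabilities."""
    r = np.asarray(row_sums, float)
    c = np.asarray(col_sums, float)
    p = np.full((len(r), len(c)), 0.5)
    for _ in range(iters):
        p *= (r / p.sum(axis=1)).reshape(-1, 1)   # match row sums
        p = np.clip(p, 0.0, 1.0)
        p *= c / p.sum(axis=0)                    # match column sums
        p = np.clip(p, 0.0, 1.0)
    return p
```

On consistent projections the probabilities polarize toward a binary solution; with noisy projections they stay fractional, which is exactly where a probabilistic interpretation of each pixel becomes useful.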
PL
This paper presents the results of an analysis of algorithms for reconstructing the conductances of rectangular resistor grids from boundary measurements. The numerical stability of the algorithm proposed by Curtis and Morrow was examined, and its behaviour in the presence of measurement errors was tested. It is shown that measurement errors significantly affect the correctness of the reconstruction even for grids of small size, and that the algorithm is numerically unstable for grids of larger size. Several modifications of the reconstruction algorithm are proposed, and the performance of the improved and original versions is compared.
EN
In this work the problem of reconstructing the conductances in rectangular resistor grids from boundary measurements is investigated. The algorithm proposed by Curtis and Morrow is studied in terms of numerical stability, and its performance in the presence of measurement errors is tested. It is shown that measurement errors can deteriorate the performance of the algorithm even for small grid sizes, and that the algorithm is numerically unstable for larger grids. Several methods for improving the algorithm are proposed, and the performance of the modified versions is tested.
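Any such stability study rests on the forward problem: given the conductances, boundary voltages for an injected current pattern are obtained by solving the Kirchhoff (weighted graph Laplacian) system. The sketch below implements only this forward map on an arbitrary resistor network; the inverse (Curtis-Morrow) step and the noise experiments of the paper are not shown, and all names are illustrative.

```python
import numpy as np

def solve_network(n_nodes, edges, conductances, current):
    """Solve L v = i for a resistor network, where L is the weighted
    Laplacian built from edge conductances. `current` is the injected
    current per node (must sum to zero); node 0 is grounded to fix
    the potential gauge. Returns the node potentials."""
    L = np.zeros((n_nodes, n_nodes))
    for (i, j), g in zip(edges, conductances):
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    v = np.zeros(n_nodes)
    v[1:] = np.linalg.solve(L[1:, 1:], np.asarray(current, float)[1:])
    return v
```

Perturbing the computed boundary voltages before feeding them to a reconstruction routine is the natural way to emulate the measurement errors whose effect the paper quantifies.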
PL
This paper presents the results of an analysis of algorithms for reconstructing the conductances of rectangular resistor grids from boundary measurements. Reconstruction algorithms based on metaheuristic methods (simulated annealing, genetic algorithms) and on optimization methods were developed and implemented. The proposed algorithms were compared in terms of numerical stability and the correctness of the obtained results. The limitations of the existing algorithms are presented and improvements are proposed.
EN
The problem of reconstructing the conductances in rectangular resistor grids from boundary measurements is studied. Several reconstruction algorithms based on metaheuristics (simulated annealing, genetic algorithms) and on optimization methods are compared in terms of numerical stability and accuracy of the results. The limitations of the algorithms are discussed and several improvements are proposed.
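A simulated-annealing reconstruction of this kind has a simple skeleton: perturb one conductance at random, accept the move with the Metropolis rule, and cool the temperature. The sketch below uses a stand-in misfit (distance to known target conductances) so it is self-contained; in the real problem the misfit would compare simulated and measured boundary data, and all parameter values here are illustrative.

```python
import numpy as np

def anneal(misfit, g0, steps=20000, t0=1.0, cooling=0.9996, seed=0):
    """Minimize `misfit` over a conductance vector by simulated
    annealing: single-coordinate Gaussian proposals, Metropolis
    acceptance, geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    g, t = np.array(g0, float), t0
    e = misfit(g)
    for _ in range(steps):
        cand = g.copy()
        k = rng.integers(len(g))
        cand[k] = abs(cand[k] + rng.normal(0.0, 0.1))  # keep g > 0
        e_cand = misfit(cand)
        if e_cand < e or rng.random() < np.exp((e - e_cand) / t):
            g, e = cand, e_cand
        t *= cooling
    return g

# toy demonstration: recover three "conductances" from a quadratic misfit
target = np.array([1.0, 2.0, 0.5])
best = anneal(lambda g: float(np.sum((g - target) ** 2)), [1.0, 1.0, 1.0])
```

Unlike the Curtis-Morrow direct algorithm, this stochastic search never inverts an ill-conditioned system, which is why metaheuristics can tolerate measurement noise better at the cost of far more forward evaluations.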
EN
With the increasing complexity and scale of industrial processes, their visualization is becoming increasingly important. Non-invasive methods, which do not interfere directly with the process, are especially popular; one of them is 3D Electrical Capacitance Tomography. It has, however, a serious drawback: obtaining a fast and accurate visualization requires computationally intensive algorithms. In particular, nonlinear reconstruction using the Finite Element Method is a multistage, complex numerical task requiring many linear-algebra operations on very large data sets; on traditional CPUs such a process can take, depending on the meshes used, up to several hours. Consequently, it is necessary to develop new solutions that use GPGPU (General-Purpose computing on Graphics Processing Units) techniques to accelerate the reconstruction algorithm. With the developed hybrid parallel computing architecture, based on sparse matrices, tomographic calculations can be performed much faster using the GPU and CPU simultaneously, with both Nvidia CUDA and OpenCL.
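The computational pattern being accelerated is dominated by repeated matrix-vector products, as in the classic Landweber iteration for linearized ECT sketched below; each iteration is two such products, exactly the operation a GPU parallelizes across matrix rows. Dense NumPy stands in here for the sparse-matrix CUDA/OpenCL implementation described in the abstract, and `S`, `c` and `alpha` are illustrative.

```python
import numpy as np

def landweber(S, c, alpha, iters=500):
    """Solve S g ~= c by gradient descent on ||S g - c||^2.
    Converges for 0 < alpha < 2 / sigma_max(S)^2; each iteration costs
    two matrix-vector products (S g and S^T r), the kernels that a
    GPGPU implementation runs in parallel."""
    g = np.zeros(S.shape[1])
    for _ in range(iters):
        g += alpha * (S.T @ (c - S @ g))
    return g
```

Because ECT sensitivity matrices are large and sparse, the production version of such a loop stores `S` in a compressed sparse format and dispatches both products to the GPU, which is where the reported speedups over CPU-only runs come from.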