Search results
Searched in keywords: image fusion
Results found: 30
EN
We propose a high-quality infrared and visible image fusion method based on a deep wavelet-dense network (WT-DenseNet). The WT-DenseNet comprises three layers: a hybrid feature extraction layer, a fusion layer, and an image reconstruction layer. The hybrid feature extraction layer is composed of a wavelet network and a dense network. The wavelet network decomposes the feature maps of the visible and infrared images into low-frequency and high-frequency components, while the dense network extracts the salient features. The fusion layer is designed to integrate the low-frequency and salient features. Finally, the fused images are output by the image reconstruction layer. The experimental results demonstrate that the proposed method achieves high-quality infrared and visible image fusion and outperforms six recently published fusion methods in terms of contrast and detail.
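As an illustration of the wavelet stage only (not the authors' WT-DenseNet), the sketch below fuses two grayscale arrays by averaging the low-frequency wavelet band and keeping the larger-magnitude detail coefficients; the maximum-magnitude rule and the db2 wavelet are illustrative assumptions.

```python
# Hypothetical sketch of wavelet-domain IR/visible fusion (a stand-in for the
# paper's WT-DenseNet); requires numpy and PyWavelets.
import numpy as np
import pywt

def wavelet_fuse(ir: np.ndarray, vis: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    """Fuse two grayscale images of equal shape in the wavelet domain."""
    (lo_a, hi_a), (lo_b, hi_b) = pywt.dwt2(ir, wavelet), pywt.dwt2(vis, wavelet)
    # Low-frequency (approximation) band: simple average.
    lo_f = 0.5 * (lo_a + lo_b)
    # High-frequency (detail) bands: keep the coefficient with larger magnitude,
    # a common salience rule in wavelet fusion.
    hi_f = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                 for da, db in zip(hi_a, hi_b))
    return pywt.idwt2((lo_f, hi_f), wavelet)

# Example with random stand-in images:
ir = np.random.rand(128, 128)
vis = np.random.rand(128, 128)
fused = wavelet_fuse(ir, vis)
```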
EN
Important information perceived by human vision comes from the low-level features of the image, which can be extracted by the Riesz transform. In this study, we propose a Riesz-transform-based approach to image fusion. The image to be fused is first decomposed using the Riesz transform. The image sequence obtained in the Riesz transform domain is then subjected to a Laplacian wavelet transform based on fractional Laplacian operators and multi-harmonic splines. After the Laplacian wavelet transform, the image representations have directional and multi-resolution characteristics. Finally, image fusion is performed, leveraging Riesz-Laplace wavelet analysis and the global coupling characteristics of a pulse coupled neural network (PCNN). The proposed approach has been tested in several application scenarios, such as multi-focus imaging, medical imaging, remote sensing full-color imaging, and multi-spectral imaging. Compared with conventional methods, it demonstrates superior performance in visual quality, contrast, clarity, and overall efficiency.
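The first step, the 2-D Riesz transform, can be sketched in the Fourier domain using its standard frequency response R_j(u) = -i u_j / |u|; the Laplacian-wavelet and PCNN stages of the approach are not reproduced here.

```python
# Hypothetical sketch of the 2-D Riesz transform via the FFT (only the first
# decomposition step of the approach described above); requires numpy.
import numpy as np

def riesz_transform(img: np.ndarray):
    """Return the two Riesz components (R1*img, R2*img) of a grayscale image."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1)   # vertical frequencies
    v = np.fft.fftfreq(cols).reshape(1, -1)   # horizontal frequencies
    norm = np.sqrt(u**2 + v**2)
    norm[0, 0] = 1.0                          # avoid division by zero at DC
    spectrum = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(-1j * (u / norm) * spectrum))
    r2 = np.real(np.fft.ifft2(-1j * (v / norm) * spectrum))
    return r1, r2

r1, r2 = riesz_transform(np.random.rand(64, 64))
```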
EN
Introduction: Based on the tumor's growth potential and aggressiveness, glioma is most often classified into low- or high-grade groups. Traditionally, tissue sampling is used to determine the glioma grade. The aim of this study is to evaluate the efficiency of the Laplacian Re-decomposition (LRD) medical image fusion algorithm for glioma grading using advanced magnetic resonance imaging (MRI) and to introduce the best image combination for glioma grading. Material and methods: Sixty-one patients (17 low-grade and 44 high-grade) underwent susceptibility-weighted imaging (SWI), apparent diffusion coefficient (ADC) mapping, and fluid-attenuated inversion recovery (FLAIR) MRI. The LRD medical image fusion algorithm was used to fuse the different MRI images. To evaluate the effectiveness of LRD in the classification of glioma grade, we compared the parameters of the receiver operating characteristic (ROC) curve. Results: The average relative signal contrast (RSC) of SWI and ADC maps in high-grade glioma is significantly lower than the RSC in low-grade glioma. No significant difference was detected between low- and high-grade glioma on FLAIR images. In our study, the areas under the curve (AUC) for low- and high-grade glioma differentiation on SWI and ADC maps were 0.871 and 0.833, respectively. Conclusions: By fusing SWI and ADC maps with the LRD medical image fusion algorithm, we can increase the AUC for low- and high-grade glioma separation to 0.978. Our work leads us to conclude that fusing SWI and ADC maps with the LRD algorithm yields the highest diagnostic accuracy for low- and high-grade glioma differentiation, and that the LRD medical image fusion algorithm can be used for glioma grading.
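The ROC/AUC evaluation step (not the LRD fusion itself) can be sketched with scikit-learn; the RSC values below are synthetic placeholders, with lower contrast assumed for high-grade tumours as reported above.

```python
# Hypothetical sketch of the ROC/AUC evaluation step; requires numpy and scikit-learn.
# The RSC values below are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0] * 17 + [1] * 44)             # 0 = low-grade, 1 = high-grade
rsc_fused = np.r_[np.random.normal(1.0, 0.1, 17),  # stand-in RSC from fused SWI+ADC
                  np.random.normal(0.6, 0.1, 44)]  # lower RSC assumed in high-grade tumours

# High-grade cases have *lower* RSC, so score with the negated value.
auc = roc_auc_score(labels, -rsc_fused)
print(f"AUC for low- vs high-grade separation: {auc:.3f}")
```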
EN
For brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Obtaining clinically meaningful information from various imaging modalities such as computerized tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is at the core of the software and advanced screening utilized by radiologists. In this paper, a universal framework for two parts of the dose control process, tumour detection and tumour area segmentation from medical images, is introduced. The framework was implemented to detect glioma tumours from CT and PET scans. Two deep learning pre-trained models, VGG19 and VGG19-BN, were investigated and utilized to fuse the results of CT and PET examinations. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the output of the model is the bounding-box coordinates of each tumour in the image. U-Net was used to perform semantic segmentation, i.e. to segment malignant cells and the tumour area. Transfer learning was used to increase the accuracy of the models given the limited dataset, and data augmentation methods were applied to generate and increase the number of training samples. The implemented framework can be utilized for other use cases that combine object detection and area segmentation from grayscale and RGB images, especially to shape computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems in the healthcare industry that facilitate and assist doctors and medical care providers.
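A minimal sketch of one ingredient, VGG19-based feature-level fusion of CT and PET slices, is given below; the element-wise maximum rule, the input sizes and the offline weights are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch of VGG19-based feature-level fusion of CT and PET slices
# (a loose stand-in for the paper's framework); requires torch and a recent torchvision.
import torch
import torchvision.models as models

# weights=None keeps the example offline; in practice pretrained weights would be
# loaded and fine-tuned (transfer learning), as the paper describes.
backbone = models.vgg19_bn(weights=None).features.eval()

def extract(x: torch.Tensor) -> torch.Tensor:
    """Run a (N, 1, H, W) grayscale batch through the VGG19 convolutional features."""
    return backbone(x.repeat(1, 3, 1, 1))     # replicate the channel to fit the RGB input

ct = torch.rand(1, 1, 224, 224)               # placeholder CT slice
pet = torch.rand(1, 1, 224, 224)              # placeholder PET slice
with torch.no_grad():
    fused = torch.maximum(extract(ct), extract(pet))   # element-wise max fusion
print(fused.shape)                            # torch.Size([1, 512, 7, 7])
```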
EN
Super-resolution image reconstruction follows two approaches: single-frame and multi-frame reconstruction. Single-frame reconstruction generally models the degradation first and then reconstructs, which suffers from insufficient characterization of the image. Multi-frame images provide additional information for reconstruction relative to single frames owing to the slight differences between sequential frames. However, existing super-resolution algorithms for multi-frame images do not take advantage of this key factor, either because of a loose structure and high complexity, or because the individual frames are restored poorly. This paper proposes a new SR reconstruction algorithm based on the Multi-grained Cascade Forest. Multi-frame image reconstruction is processed sequentially: first, a convolutional neural network registers the low-resolution image sequence; the registered images are then reconstructed by the Multi-grained Cascade Forest reconstruction algorithm; finally, the reconstructed images are fused. The optimal algorithm is selected for each step to get the most out of the details and to tightly connect the internal logic of the sequential steps. In the proposed approach, the depth of the cascade forest is generated adaptively for the recovered images rather than being constant: after training each layer, the recovered image is automatically evaluated, and new layers are constructed and trained until an optimal restored image is obtained. Experiments show that this method improves the quality of image reconstruction while preserving the details of the image.
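A minimal register-then-fuse sketch is given below, with phase cross-correlation and a median standing in for the paper's CNN registration and cascade-forest reconstruction; it only illustrates the overall flow.

```python
# Hypothetical sketch of the register-then-fuse part of a multi-frame pipeline
# (phase correlation and a median stand in for the paper's CNN registration and
# cascade-forest reconstruction); requires numpy, scipy and scikit-image.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_and_fuse(frames):
    """Align a list of grayscale frames to the first one and fuse them by median."""
    reference = frames[0]
    aligned = [reference]
    for frame in frames[1:]:
        offset, _, _ = phase_cross_correlation(reference, frame)
        aligned.append(nd_shift(frame, offset))
    return np.median(np.stack(aligned), axis=0)

base = np.random.rand(64, 64)                                   # placeholder scene
frames = [np.roll(base, shift=(i, -i), axis=(0, 1)) for i in range(5)]  # shifted LR frames
fused = register_and_fuse(frames)
```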
EN
Diabetes mellitus is a clinical syndrome caused by the interaction of genetic and environmental factors. The change of plantar pressure in diabetic patients is one of the important causes of diabetic foot, and an abnormal increase of plantar pressure is a predictor of the common occurrence of foot ulcers. Feature extraction from the plantar pressure distribution will benefit the design and manufacture of diabetic shoes for early protection of diabetes mellitus patients. In this research, the texture-based features angular second moment (ASM), moment of inertia (MI), inverse difference moment (IDM), and entropy (E) are selected and fused using the up-down algorithm. The fused features are normalized to predict comfort in a plantar pressure imaging dataset using an improved fuzzy hidden Markov model (FHMM), in which a type-I fuzzy set is proposed and a fuzzy Baum-Welch algorithm is applied to estimate the next features. The results are discussed and compared with other backward-forward algorithms and with different fusion operations in the FHMM. The improved HMM with up-down fusion and the type-I fuzzy definition predicts the comfort plantar pressure distribution in the image dataset with an accuracy of 82.2%, and the research will be applied to personalized shoe-last customization in industry.
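The four GLCM texture features named above can be sketched with scikit-image (the 'contrast' property corresponds to the moment of inertia and 'homogeneity' to the inverse difference moment); the up-down fusion and the fuzzy HMM are not reproduced.

```python
# Hypothetical sketch of the GLCM texture features named above (ASM, contrast
# a.k.a. moment of inertia, inverse difference moment, entropy); requires numpy
# and scikit-image (>= 0.19 for the graycomatrix spelling).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(img_u8: np.ndarray) -> np.ndarray:
    """img_u8: 2-D uint8 plantar-pressure image; returns [ASM, MI, IDM, E]."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    asm = graycoprops(glcm, "ASM")[0, 0]
    mi = graycoprops(glcm, "contrast")[0, 0]        # contrast == moment of inertia
    idm = graycoprops(glcm, "homogeneity")[0, 0]    # inverse difference moment
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([asm, mi, idm, entropy])

features = texture_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
```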
EN
Although unimodal biometric recognition (such as face or palmprint recognition) is more convenient, its security is relatively weak, and its accuracy is easily affected by factors such as ambient light and recognition distance. To address this issue, we present a weighted multimodal biometric recognition algorithm for face and palmprint based on a histogram of contourlet oriented gradient (HCOG) feature description. We employ the nonsubsampled contourlet transform (NSCT) to decompose the face and palmprint images, and the HOG method is adopted to extract the features, which we name the HCOG feature. Dimension reduction is then applied to the HCOG feature, and a novel weight computation method is proposed to accomplish the multimodal biometric fusion recognition. Extensive experiments illustrate that the proposed weighted fusion recognition achieves excellent recognition accuracy and outperforms unimodal biometric recognition methods.
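Since NSCT is not available in common Python libraries, the sketch below uses a plain HOG descriptor as a stand-in for the HCOG feature and shows a weighted score-level fusion of the two modalities; the weights are illustrative, whereas the paper computes them with its own method.

```python
# Hypothetical sketch of descriptor extraction and weighted score-level fusion
# (plain HOG stands in for the paper's NSCT-based HCOG feature);
# requires numpy and scikit-image.
import numpy as np
from skimage.feature import hog

def descriptor(img: np.ndarray) -> np.ndarray:
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Placeholder probe/gallery images for the two modalities.
face_probe, face_gallery = np.random.rand(64, 64), np.random.rand(64, 64)
palm_probe, palm_gallery = np.random.rand(64, 64), np.random.rand(64, 64)

w_face, w_palm = 0.6, 0.4   # illustrative weights; the paper computes them adaptively
score = (w_face * cosine(descriptor(face_probe), descriptor(face_gallery))
         + w_palm * cosine(descriptor(palm_probe), descriptor(palm_gallery)))
```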
8
Palmprint recognition based on convolutional neural network-Alexnet
EN
In classic algorithms, palmprint recognition requires extraction of palmprint features before classification and recognition, which affects the recognition rate. To solve this problem, this paper uses the convolutional neural network (CNN) structure AlexNet to realize palmprint recognition. First, according to the geometric characteristics of the palmprint, the palmprint region of interest (ROI) is cropped out. The processed ROI is then taken as the input layer of the convolutional neural network. Next, the PReLU activation function is used and the network is trained to select the best learning rate and hyperparameters. Finally, the palmprint is classified and identified. The method was applied to the PolyU Multi-Spectral Palmprint Image Database and the PolyU 2D+3D Palmprint Database, and the recognition rate for a single spectrum was up to 99.99%.
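A hedged sketch of the transfer-learning setup is given below: a torchvision AlexNet with its final layer replaced by a PReLU head sized for a hypothetical number of palm classes; the weights, input size and class count are placeholders.

```python
# Hypothetical sketch of AlexNet-based palmprint classification with a PReLU head
# (mirrors the described setup only loosely); requires torch and a recent torchvision.
import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 100                      # placeholder number of palm classes
model = models.alexnet(weights=None)   # in practice, pretrained weights + fine-tuning
model.classifier[6] = nn.Sequential(   # replace the final fully connected layer
    nn.PReLU(),                        # PReLU activation, as used in the paper
    nn.Linear(4096, num_classes),
)

roi_batch = torch.rand(8, 3, 224, 224)  # cropped palmprint ROIs, resized to 224x224
logits = model(roi_batch)
print(logits.shape)                     # torch.Size([8, 100])
```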
9
An improved feature based image fusion technique for enhancement of liver lesions
EN
This paper describes two methods for enhancement of edges and texture in medical images. In the first method, the optimal kernel size of a range filter suitable for enhancement of the liver and lesions is deduced, and the results are compared with conventional edge detection algorithms. In the second method, the feasibility of feature-based pixel-wise image fusion for enhancing abdominal images is investigated. Among the different algorithms developed for medical image fusion, pixel-level fusion is capable of retaining the maximum relevant information with better implementation and computational efficiency. Conventional image fusion includes multi-modal fusion and multi-resolution fusion. The present work attempts to fuse the texture-enhanced and edge-enhanced versions of the input image in order to obtain significant enhancement in the output image. The algorithm is tested on low-contrast medical images. The results show an improvement in contrast and sharpness of the output image, which provides a basis for better visual interpretation leading to more accurate diagnosis. Qualitative and quantitative performance evaluation is done by calculating information entropy, MSE, PSNR, SSIM and Tenengrad values.
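A minimal sketch of the two enhancement branches and a simple pixel-wise maximum fusion is shown below; the range filter is implemented as a local max-min difference, and the Sobel magnitude stands in for the edge-enhancement branch.

```python
# Hypothetical sketch of range-filter texture enhancement and a simple pixel-wise
# maximum fusion of texture- and edge-enhanced images; requires numpy and scipy.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, sobel

def range_filter(img: np.ndarray, size: int = 5) -> np.ndarray:
    """Local max minus local min over a size x size window (texture enhancement)."""
    return maximum_filter(img, size) - minimum_filter(img, size)

def edge_magnitude(img: np.ndarray) -> np.ndarray:
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

img = np.random.rand(128, 128)                 # placeholder abdominal slice
texture = range_filter(img, size=5)            # kernel size would be tuned, as in the paper
edges = edge_magnitude(img)
fused = np.maximum(texture / texture.max(), edges / edges.max())  # pixel-level fusion
```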
EN
The article evaluates selected methods of merging remote sensing data of different resolutions in terms of their suitability for mapping land use and land cover. The analysis covered the original Landsat ETM+ data (30 m), Landsat data resampled to 5 m with the IRS PAN 1D band (5 m) added to the set, and Landsat and IRS PAN data merged with four methods with distinctly different integration algorithms: IHS (intensity-hue-saturation transformation), PCA (principal component analysis), WMK (Wiemker's method) and PL (Laplacian pyramid). The six data sets prepared in this way were subjected to spectral classification using the maximum likelihood, decision tree and neural network methods. The results obtained on the data before and after integration were additionally compared with photointerpretation analyses carried out in parallel with the classification analyses. The research area was the city of Kraków with adjacent suburban areas, 10 x 20 km in size. For the research objective, 5 reference squares of 500 m x 500 m were prepared, ensuring diversity and representativeness of the entire analysis area. The reference data were based on photointerpretation of aerial photographs with a ground pixel size of 0.75 m. The tests confirmed the advantage of the photointerpretation method over spectral classification by 6-11% in the overall accuracy of the land use and land cover map, depending on the data set. Merging the data improves the overall classification accuracy by 9% between the original Landsat image (30 m) and the Landsat data integrated with IRS (5 m), allowing an accuracy of 64% to be reached. For the photointerpretation method, the increase is 6%, reaching an accuracy of 71%. The choice of the integration method is secondary: the variation in results for the photointerpretation method is 1%, and for the classification methods about 5% (best - PL, worst - IHS). The choice of the classification algorithm does matter: across all tested data sets, the best results were obtained for neural networks (64%), followed by decision trees (62%) and the maximum likelihood method (59%).
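The IHS variant can be sketched with the simple additive ('fast IHS') formulation below; the arrays are placeholders for the resampled Landsat bands and the IRS PAN band, and the other three methods (PCA, WMK, PL) are not reproduced.

```python
# Hypothetical sketch of IHS-style pan-sharpening (the simple additive "fast IHS"
# variant), one of the four fusion methods compared above; requires numpy.
import numpy as np

def ihs_fuse(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """ms: (H, W, 3) multispectral image resampled to the PAN grid; pan: (H, W)."""
    intensity = ms.mean(axis=2)                 # I component of the IHS transform
    delta = pan - intensity                     # inject the PAN spatial detail
    return np.clip(ms + delta[..., None], 0.0, 1.0)

ms = np.random.rand(200, 200, 3)                # placeholder Landsat bands resampled to 5 m
pan = np.random.rand(200, 200)                  # placeholder IRS PAN band
sharpened = ihs_fuse(ms, pan)
```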
EN
A side-scan sonar measurement platform, affected by the underwater environment and by its own motion precision, inevitably suffers posture and motion disturbances, which greatly affect the accuracy of geomorphic image formation. It is difficult to capture these underwater disturbances sensitively and accurately by relying on auxiliary navigation devices. In this paper, we propose a method to invert the motion and posture information of the measurement platform from the matching relation between strip images. The inversion algorithm is the key link in the image mosaicking framework of side-scan sonar, and the recovered motion and posture information can effectively improve the accuracy and stability of seabed topography mapping. We first analyze the influence of platform motion and posture on side-scan sonar mapping and establish a correlation model between the motion and posture information and the strip-image matching information. Then, based on this model, a neural network is established for the inversion; together with the designed network inputs and outputs and the training and test data sets, a motion and posture inversion mechanism based on strip-image matching information is obtained. The accuracy and validity of the algorithm are verified by the experimental results.
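As a loose illustration of the inversion idea, the sketch below regresses two posture disturbances from strip-image matching offsets with a small scikit-learn network; the data and the network size are synthetic assumptions, not the paper's model.

```python
# Hypothetical sketch of inverting platform posture from strip-image matching
# offsets with a small neural network; the data below are synthetic stand-ins.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Inputs: per-ping matching offsets between overlapping strip images (placeholder).
X = rng.normal(size=(500, 4))
# Targets: platform yaw and pitch disturbances (placeholder linear relation + noise).
y = X @ rng.normal(size=(4, 2)) + 0.05 * rng.normal(size=(500, 2))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X[:400], y[:400])
print("held-out R^2:", net.score(X[400:], y[400:]))
```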
EN
The article presents an overview of different approaches and methods for evaluating the spectral and spatial quality of image fusion results. Image fusion integrates the geometric detail of a high-resolution panchromatic image with the spectral information of a low-resolution multispectral image to produce a high-resolution multispectral image, which can be used for more detailed analyses. However, in order to carry out quantitative analyses, e.g. biomass estimation, it is necessary to preserve the spectral characteristics of the original multispectral image. This is, among other things, the reason for the development of new algorithms for image fusion and of methods for assessing the quality of their results.
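One of the commonly used spectral-quality measures, ERGAS, can be sketched as follows; the band count, the resolution ratio and the test arrays are placeholders.

```python
# Hypothetical sketch of one widely used spectral-quality measure, ERGAS
# ("erreur relative globale adimensionnelle de synthese"); requires numpy.
import numpy as np

def ergas(reference: np.ndarray, fused: np.ndarray, ratio: float) -> float:
    """reference, fused: (H, W, bands); ratio = PAN GSD / MS GSD (e.g. 0.25)."""
    bands = reference.shape[2]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((reference[..., k] - fused[..., k]) ** 2))
        acc += (rmse / reference[..., k].mean()) ** 2
    return 100.0 * ratio * np.sqrt(acc / bands)

ref = np.random.rand(100, 100, 4) + 0.5        # placeholder original MS image
fus = ref + 0.01 * np.random.rand(100, 100, 4) # placeholder fusion result
print(ergas(ref, fus, ratio=0.25))             # lower is better; 0 means identical
```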
13
Image fusion for travel time tomography inversion
EN
Travel-time tomography has achieved wide application; the core of tomography is the inversion algorithm, and the ray-path tracing technique has a great impact on the inversion results. In order to improve the SNR of the inversion image, inversion results obtained with different ray-tracing methods can be used jointly. We present an image fusion method based on an improved Wilkinson iteration method. First, the shortest-path method and linear travel-time interpolation are used for the forward calculation; then the improved Wilkinson iteration method is combined with an over-relaxation preconditioning method to reduce the condition number of the matrix and accelerate the iteration, and the precise integration method is used to solve the inverse matrix more precisely in the tomographic inversion process; finally, the wavelet transform is used for image fusion to obtain the final image. In this way, the ill-conditioned linear equations are transformed into a normal iterative system through two stages of treatment, and fusing images obtained with different forward algorithms reduces the influence of measurement error on imaging. Simulation results show that this method can effectively eliminate artifacts in the images and has broad practical significance.
14
Skanowanie ciała w zakresie THz i MMW - krótki przegląd i badania własne (Body scanning in the THz and MMW ranges - a short overview and own research)
EN
This paper presents state-of-the-art and emerging body scanners operating in the terahertz and millimetre-wave ranges. Against this background, a passive scanning system developed at WAT is briefly characterized, and its image processing and image fusion modules are described.
EN
The paper presents a hybrid vision method for simultaneous inspection of objects in the visible and infrared bands. In order to adapt the system to industrial machines, the cameras of both vision tracks were placed in a hybrid video head. Using the developed system, diagnostic tests of selected processes were performed. Measurements were made on an industrial bottle-washing line and at a glassworks. In the first case, the aim of the research was diagnostics of the automated bottle-cleaning process; at the glassworks, diagnostic measurements were performed on the bottom of the glass furnace. The benefits of using the hybrid vision method are presented, primarily the increase in inspection efficiency, easier interpretation of the results, and faster sequential measurements over large areas.
EN
This paper describes the basic idea of operation and the assumptions of the Trinocular Vision System (TVS) designed to support underwater exploration with an autonomous vehicle. The paper characterizes the optical properties of the inland-water environment and the process of image formation in that environment. It then presents the aim of the image fusion and the design process of the multimodal vision system, i.e. the selection of its components, confirmed by prior research in the context of underwater operation.
EN
The paper concerns research on the diagnostics of a welding process. The process state is estimated by analysing infrared images and images recorded in the visible range of electromagnetic radiation. To carry out the image analysis, it is necessary to cut out an area called the region of interest (ROI). In the case of welding, which is a highly dynamic process, this operation proved complicated; its most important step is image segmentation. An algorithm for defining the ROI was proposed and tested, and numerous image segmentation methods were verified.
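A minimal sketch of one possible ROI-definition step, Otsu thresholding followed by the bounding box of the largest bright region, is given below; it is a stand-in for the algorithm and segmentation methods examined in the paper.

```python
# Hypothetical sketch of a simple ROI-definition step (Otsu thresholding and the
# bounding box of the largest bright region); requires numpy and scikit-image.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def find_roi(frame: np.ndarray):
    """Return (min_row, min_col, max_row, max_col) of the largest bright region."""
    mask = frame > threshold_otsu(frame)
    regions = regionprops(label(mask))
    largest = max(regions, key=lambda r: r.area)
    return largest.bbox

frame = np.random.rand(240, 320)        # placeholder thermogram of the welding area
roi = find_roi(frame)
cropped = frame[roi[0]:roi[2], roi[1]:roi[3]]
```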
EN
This paper presents the results of a comprehensive evaluation of the quality of image fusion algorithms implemented in different software packages. All analyses were made for a WorldView-2 satellite image. WorldView-2 is the only very-high-resolution satellite system that acquires images in 8 spectral bands. These bands offer wider application possibilities than 4-band data, for example in the assessment of carbon stocks in forests and in inland water investigations. Because the results of quantitative analyses depend on the applied image fusion algorithm, it is important to assess the quality of the results obtained with the algorithms implemented in various commercial software packages (ERDAS Imagine, PCI Geomatica, ENVI). The quality of the PAN and MS image fusion results was evaluated in terms of both spectral and spatial quality. To determine the spectral and spatial quality of the processed images, the correlation coefficient, RMSE, the quality index Q, the new quality index nQ%, ERGAS, the Deviation Index (DI) and the Deviation Per Pixel (DPP) were used. The best results, in terms of both spectral and spatial quality, were given by three methods: the Zhang algorithm (PCI Geomatica), the Gram-Schmidt transformation (ENVI) and the Ehlers algorithm (ERDAS Imagine).
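The quality index Q listed above (the Wang-Bovik universal image quality index) can be sketched in its global form as follows; the sliding-window version used in practice averages this value over local windows.

```python
# Hypothetical sketch of the Wang-Bovik universal image quality index Q
# (global version, without the sliding window); requires numpy.
import numpy as np

def quality_index_q(x: np.ndarray, y: np.ndarray) -> float:
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2)))

band_ms = np.random.rand(256, 256) + 0.5       # placeholder original MS band
band_fused = band_ms + 0.02 * np.random.rand(256, 256)
print(quality_index_q(band_ms, band_fused))    # 1.0 means a perfect match
```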
EN
Sophisticated video surveillance systems use many cameras for watching over the same area. Image fusion allows combining two or more images into a single image containing the most relevant information, and one of its most important phases is image registration. In this article, we present a block-based image registration algorithm for multi-modal images, using the example of TV and thermal (IR) camera images acquired by a monitoring head. For this type of head, the proposed algorithm searches only for translation parameters to align both images; scale and rotation parameters are assumed to be constant, and distortion is neglected. The rough translation parameters are calculated by the classic phase correlation method for image registration. The same method is then used to vertically align corresponding rectangular blocks of both images. Inaccurate alignment parameters are detected by analysing these parameters in preceding time samples and are corrected accordingly. Data integration by filling the gaps between image blocks constitutes the last phase of the presented algorithm. The algorithm delivers good registration results for scenes with several near and distant planes and preserves low computational complexity, enabling real-time hardware implementation.
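The core operation, estimating a translation by phase correlation, can be sketched directly with the FFT; the block partitioning, temporal filtering and gap filling of the algorithm above are not reproduced.

```python
# Hypothetical sketch of estimating the translation between two blocks by classic
# phase correlation (the core operation of the algorithm described above);
# requires numpy.
import numpy as np

def phase_correlation_shift(a: np.ndarray, b: np.ndarray):
    """Return the (row, col) displacement of block b relative to block a."""
    cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    cross /= np.abs(cross) + 1e-12                 # keep phase only
    response = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(response), response.shape)
    # Map peaks in the upper half of each axis to negative displacements.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, response.shape))

a = np.random.rand(64, 64)
b = np.roll(a, shift=(5, -3), axis=(0, 1))         # b is a displaced copy of a
print(phase_correlation_shift(a, b))               # prints (5, -3)
```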
EN
In the paper, the part of the study concerned with the search for an optimal image registration method suitable for subsequent image fusion is presented. The search was carried out for images acquired in the infrared and visible bands. Thermograms were taken by cameras working in the mid-wave (outdoor scene) and long-wave infrared (welding arc). Degradation between the images was connected mainly with the translation between the camera optical axes. Three registration methods were considered, based on cross-correlation, maximization of mutual information, and intensity and edge orientation information. Each method was used to register images from two sets, and the aligned images were then aggregated with the multiscale discrete wavelet method. The registration quality was measured with objective quality metrics: the root mean square error (RMSE), the peak signal-to-noise ratio (PSNR) and the universal image quality index (Q). These metrics allow comparison between benchmark images registered manually and the considered images. The analysis of the obtained results shows that, among the tested methods, the one that simultaneously uses area and feature information generates the best registration parameters. On the other hand, the practical usage of image fusion is strongly connected with the time consumed by registration; thus, pre-registration and the assumption that only translational differences between images are present influence the applicability of each method.
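The mutual-information criterion used by one of the compared methods can be sketched from a joint histogram as follows; the image pair below is a synthetic placeholder.

```python
# Hypothetical sketch of the mutual-information criterion used by one of the
# compared registration methods, computed from a joint histogram; requires numpy.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """Mutual information between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

visible = np.random.rand(128, 128)                          # placeholder visible-band image
thermal = 1.0 - visible + 0.05 * np.random.rand(128, 128)   # placeholder thermogram
print(mutual_information(visible, thermal))                 # higher means better alignment
```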