Digital rock physics is based on imaging, segmentation and numerical computation of rock samples. Because handling a large 3-dimensional (3D) sample is challenging, 2D algorithms have always been attractive. However, with 2D algorithms, the fidelity of the pore structures in the third direction of the generated 3D sample is always questionable. We used four individually captured µCT-images of a given Berea sandstone with different resolutions (12.922, 9.499, 5.775, and 3.436 µm) to evaluate the super-resolution 3D images generated by multistep Super Resolution Double-U-Net (SRDUN), a 2D algorithm. Results show that unrealistic features form in the third direction due to section-wise reconstruction of 2D images. To overcome this issue, we suggest generating three 3D samples using SRDUN in different directions and then using one of two strategies: computing the average sample (reconstruction by averaging) or segmenting the one-directional samples and combining them (binary combination). We numerically compute rock physical properties (porosity, connected porosity, P- and S-wave velocity, permeability and formation factor) to evaluate these models. Results reveal that, compared to one-directional samples, harmonic averaging leads to a sample whose properties are more similar to those of the original sample. On the other hand, rock physics trends can be calculated with the binary combination strategy by generating low-, medium- and high-porosity samples. These trends are compatible with the properties obtained from one-directional and averaged samples as long as the scale difference between the input and output images of SRDUN is small enough (less than about 3 in our case). As the scale difference increases, the results become more dispersed.
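The reconstruction-by-averaging strategy above combines three co-registered one-directional SRDUN volumes with a voxel-wise harmonic mean. A minimal NumPy sketch of that step (the function name and the assumption of strictly positive grey values are ours, not the paper's):

```python
import numpy as np

def harmonic_average(volumes):
    """Voxel-wise harmonic mean of co-registered 3D volumes.

    Hypothetical helper: `volumes` is a list of equally shaped numpy
    arrays, e.g. the three one-directional super-resolved samples.
    Assumes strictly positive grey values (the harmonic mean is
    undefined at zero).
    """
    stack = np.stack(volumes, axis=0).astype(float)
    return len(volumes) / np.sum(1.0 / stack, axis=0)
```

For two constant volumes with values 2 and 6, the harmonic mean is 2 / (1/2 + 1/6) = 3, which is lower than the arithmetic mean (4); the harmonic mean is dominated by low values, which matters for pore-space grey levels.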
The diagnosis of urinary tract infections and kidney diseases from urine microscopy images has gained significant attention in the medical community in recent years. These images are usually analyzed manually, following each physician's own rule of thumb. However, manual urine sediment analysis is labor-intensive and time-consuming, and even when physicians examine an image carefully, cells may be recognized erroneously due to optical illusions. In order to recognize cells in low-resolution urine microscopy images with a higher level of accuracy, a new super-resolution Faster Region-based Convolutional Neural Network (Faster R-CNN) method is proposed. It increases the resolution of low-resolution urine microscopy images using self-similarity-based single-image super-resolution as a pre-processing step. A denoising Wiener filter and the Discrete Wavelet Transform (DWT) are then used to de-noise the high-resolution images and further increase recognition accuracy. Finally, for the feature extraction and classification stages, AlexNet-, VGG16- and VGG19-based Faster R-CNN models are used for the recognition and detection of multi-class cells, yielding accuracy rates of 98.6%, 96.4% and 96.2%, respectively.
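The DWT denoising step mentioned above typically works by soft-thresholding the detail subbands of a wavelet decomposition. A self-contained NumPy sketch using a one-level 2D Haar transform (the abstract does not specify the wavelet or threshold; both are assumptions here):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform; image dims must both be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dwt_denoise(img, t=0.1):
    """Suppress small detail coefficients, keep the approximation."""
    ll, lh, hl, hh = haar_dwt2(img)
    return haar_idwt2(ll, soft_threshold(lh, t),
                      soft_threshold(hl, t), soft_threshold(hh, t))
```

With the threshold `t` set to 0 the pipeline reconstructs the input exactly; increasing `t` removes progressively more high-frequency content, which is the trade-off being tuned in the pre-processing stage.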
To better extract feature maps from low-resolution (LR) images and recover high-frequency information in the high-resolution (HR) images in image super-resolution (SR), we propose in this paper a new SR algorithm based on a deep convolutional neural network (CNN). The network consists of a feature extraction part and a reconstruction part. The extraction network extracts the feature maps of LR images and uses the sub-pixel convolutional neural network as the up-sampling operator. Skip connections, densely connected neural networks and feature map fusion are used to extract information from the hierarchical feature maps at the end of the network, which effectively reduces the dimension of the feature maps. In the reconstruction network, we add a 3×3 convolution layer after the original sub-pixel convolution layer, which gives the reconstruction network better nonlinear mapping ability. Experiments show that the algorithm yields a significant improvement in PSNR, SSIM, and human visual quality compared with several state-of-the-art deep-learning-based algorithms.
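The sub-pixel convolution up-sampling operator mentioned above rearranges r² low-resolution feature channels into an r×-larger spatial grid (the depth-to-space step of Shi et al.'s ESPCN). A NumPy sketch of that rearrangement, not the paper's full network:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) feature maps into (C, H*r, W*r).

    This is the depth-to-space step behind sub-pixel convolution:
    each group of r*r channels contributes one r x r block of
    output pixels. A sketch; the preceding convolution layers that
    produce `x` are omitted.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Because the up-sampling is just a reshape, all learnable work happens in ordinary convolutions at LR resolution, which is what makes the operator cheap; the extra 3×3 convolution the paper adds after this step then refines the shuffled HR map.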