This paper presents the results of a study assessing the potential of radar imaging to detect the classes of built-up areas defined in the Urban Atlas (UA) spatial database. These classes are distinguished by function and building density. In addition to the backscatter value itself, characteristics such as building density or spatial layout can improve the identification of these classes. To extend the classification possibilities and better exploit the potential of radar imagery, a grey-level co-occurrence matrix (GLCM) was generated to analyse the texture of the built-up classes. Two types of synthetic-aperture radar (SAR) images from different sensors, Sentinel-1 and ICEYE, were used as test data; they were selected for their different configurations and parameters. Classification was carried out using the Random Forest (RF) and Minimum Distance (MD) methods. The MD classifier achieved an overall accuracy of 64% for Sentinel-1 and 51% for ICEYE. In the ICEYE imagery, individual objects (e.g. buildings) are recognised better than classes defined by function or density, as in the UA. Sentinel-1 performed better than ICEYE, with its texture images better complementing the features of the urban area classes. Defining and characterising urban area classes remains a significant challenge owing to the complexity of urban areas. Automatic acquisition of training fields directly from the UA is problematic, so it is advisable to obtain independent reference data for built-up area categories.
Medicinal plants have huge significance today, as they are the root resource for treating several ailments and medical disorders that find no satisfactory cure in allopathy. Manual, physical identification of such plants requires experience and expertise, can be a slow and cumbersome task, and may lead to inaccurate decisions. In an attempt to automate this decision making, a data set of leaves of 10 medicinal plant species was prepared and Gray-Level Co-occurrence Matrix (GLCM) features were extracted. From our earlier implementations of several machine learning algorithms in MATLAB 2019a, the k-nearest neighbor (KNN) algorithm was identified as best suited for classification and has been adopted here. Based on the confusion matrices for various k values, the optimum k was selected, and in this work the classifier was implemented in hardware on an FPGA. An accuracy of 88.3% was obtained for the classifier from the confusion chart. A custom intellectual property (IP) core for the design was created and verified on the ZedBoard for three classes of plants.
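The KNN classification stage can be illustrated with a short pure-NumPy sketch. The data here is synthetic and the feature axes are assumptions; the paper's actual implementation uses MATLAB 2019a and an FPGA rather than Python.

```python
# Sketch of k-nearest-neighbour classification on pre-extracted
# GLCM feature vectors, with synthetic two-class data.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(train_X - x, axis=1)   # Euclidean distances to x
    nearest = train_y[np.argsort(d)[:k]]      # labels of the k closest points
    return Counter(nearest).most_common(1)[0][0]

# Two synthetic "species" clusters in a 4-D GLCM feature space
# (e.g. contrast, correlation, energy, homogeneity -- assumed axes).
X = np.vstack([rng.normal(0.2, 0.05, (20, 4)),
               rng.normal(0.8, 0.05, (20, 4))])
y = np.array([0] * 20 + [1] * 20)

print(knn_predict(X, y, np.full(4, 0.75), k=3))  # lies near cluster 1
```

Selecting k by comparing confusion matrices, as in the paper, amounts to repeating this prediction over a validation set for several k values and keeping the one with the best accuracy.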
Diagnosing brain tumors remains one of the most significant open challenges in biomedicine, and the prospect of early diagnosis of brain cancer depends on the development of new technologies and instruments. Classifying different types of brain tumors from patient brain images makes automated processing possible, and the proposed novel approach may also be used to differentiate between other types of brain disorders and tumors. The input image first undergoes pre-processing so that the tumor can be separated from other brain regions. The images are then decomposed into their respective colour channels and grey levels, and the Gray Level Co-Occurrence Matrix (GLCM) and SURF feature-extraction methods are used to determine which aspects of the images carry the most significant information. The extracted features are then reduced in dimensionality through genetic optimization, and the reduced feature set is used with an advanced learning approach to train and evaluate the tumor classifier. The accuracy, error rate, sensitivity and specificity of the proposed methodology are assessed alongside the conventional approach. The approach achieves an accuracy above 90%, with an error rate below 2% for every kind of cancer; the specificity and sensitivity for each kind exceed 90% and 50%, respectively. Supporting the approach with a genetic algorithm is more efficient than the alternative methods, as it yields higher accuracy and higher specificity.
This paper compares the efficacy of three selected methods of texture analysis: Grey Level Co-occurrence Matrix (GLCM) based entropy, Laplace operations and granulometric analysis, in the context of the edge effect. This effect means that the edges of objects in an image are identified as places of high texture, regardless of the real nature of the objects' texture, and it can significantly decrease the efficacy of some methods of texture analysis. The article briefly presents the principal cause of this effect as well as the basics of the tested methods of texture analysis. The experiments were carried out on a VHR Pléiades satellite image (2 m GSD), using selected samples (test areas) of four land use/cover classes with different textures. The separability of these test areas in the different texture images was assessed using the Jeffries-Matusita distance. The results confirm the significant impact of the edge effect on two of the tested methods (GLCM entropy and Laplace operations), but they also show that granulometric analysis is largely insensitive to this effect and thereby provides the best discrimination of land use/cover classes of different texture.
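The edge effect described above can be demonstrated with a small SciPy sketch: a Laplace-based texture measure responds strongly at the boundary between two perfectly uniform regions, even though neither region has any internal texture. This is an illustrative toy, not the authors' Pléiades workflow.

```python
# Edge effect demo: Laplace response on two flat regions with a sharp edge.
import numpy as np
from scipy.ndimage import laplace

# Two uniform regions of different brightness, edge down the middle.
img = np.zeros((20, 20))
img[:, 10:] = 100.0

response = np.abs(laplace(img))

# Region interiors: zero texture response; columns at the edge: strong response.
print(response[:, :8].max(), response[:, 9:11].max())  # prints 0.0 100.0
```

A texture classifier fed this response image would mark the edge as a "textured" zone, which is exactly the misidentification the paper attributes to the edge effect.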
In this study, a new detection algorithm for yarn-dyed fabric defects based on the autocorrelation function and the grey level co-occurrence matrix (GLCM) is put forward. First, the autocorrelation function is used to determine the pattern period of the yarn-dyed fabric, from which the size of the detection window is obtained. Second, GLCMs are calculated with the specified parameters to characterise the original image. Third, the Euclidean distances between the GLCMs of the images under inspection and that of a template image, selected from defect-free fabric, are computed, and a threshold is applied to these distances to detect defects. Experimental results show that the proposed algorithm accurately detects common defects of yarn-dyed fabric, such as wrong weft, weft crackiness, stretched warp, oil stains and holes.
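The GLCM-distance stage of this pipeline can be sketched with a hand-rolled co-occurrence matrix: compare each window's GLCM with a defect-free template GLCM by Euclidean distance and flag windows whose distance exceeds a threshold. The window size, number of grey levels and threshold below are illustrative assumptions, not the paper's values, and the period detection is simulated by tiling rather than computed by autocorrelation.

```python
# Sketch: flag fabric windows whose GLCM deviates from a defect-free template.
import numpy as np

def glcm(win, levels=4):
    """Normalised horizontal-offset grey level co-occurrence matrix."""
    m = np.zeros((levels, levels))
    for a, b in zip(win[:, :-1].ravel(), win[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

rng = np.random.default_rng(2)
template = rng.integers(0, 4, (16, 16))   # defect-free reference window
tiled = np.tile(template, (1, 3))         # periodic "fabric": 3 repeats
tiled[:, 20:28] = 0                       # inject a defect (e.g. oil stain)

ref = glcm(template)
for i in range(3):                        # slide the window at the period
    win = tiled[:, i * 16:(i + 1) * 16]
    dist = np.linalg.norm(glcm(win) - ref)
    print(i, dist > 0.05)                 # True marks a defective window
```

Defect-free windows reproduce the template GLCM almost exactly (distance near zero), while the corrupted window's co-occurrence statistics shift sharply, so a simple distance threshold separates the two.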