

Article title

A Machine Learning Model for Improving Building Detection in Informal Areas: A Case Study of Greater Cairo

Publication languages
EN
Abstracts
EN
Building detection in Ashwa’iyyat (informal areas) is a fundamental yet challenging problem, mainly because it requires the correct recovery of building footprints from images with high object density and scene complexity. A classification model integrating spectral, height and textural features was proposed. It was developed for the automatic detection of rectangular and irregularly shaped buildings, very small buildings, and buildings that are close to each other but not adjoined. It is intended to improve the precision with which buildings are classified, using the scikit-learn Python library and QGIS. WorldView-2 and SPOT-5 imagery were combined using three image fusion techniques. The Grey-Level Co-occurrence Matrix was applied to determine which attributes are important in detecting and extracting buildings. A Normalized Digital Surface Model was also generated at 0.5 m resolution. The results demonstrated that when textural features of colour images were introduced as classifier input, the overall accuracy improved in most cases. The results show that the proposed model was more accurate and efficient than state-of-the-art methods and can be used effectively to extract the boundaries of small buildings. The use of a classifier ensemble is recommended for the extraction of buildings.
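The abstract describes per-pixel classification in scikit-learn over a stack of spectral, GLCM-texture and nDSM features. The sketch below only illustrates that general workflow; the file names, window size, chosen GLCM statistics and the use of a Random Forest are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (not the authors' model): classify pixels as building / non-building
# from a stack of fused spectral bands, GLCM texture measures and an nDSM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def glcm_features(gray, window=11, levels=32):
    """Per-pixel GLCM homogeneity and contrast computed in a sliding window."""
    q = (gray / gray.max() * (levels - 1)).astype(np.uint8)   # quantise grey levels
    h, w = q.shape
    half = window // 2
    feats = np.zeros((h, w, 2), dtype=np.float32)
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = q[i - half:i + half + 1, j - half:j + half + 1]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            feats[i, j, 0] = graycoprops(glcm, "homogeneity")[0, 0]
            feats[i, j, 1] = graycoprops(glcm, "contrast")[0, 0]
    return feats

# Hypothetical, co-registered inputs (array names are assumptions):
bands = np.load("fused_bands.npy")      # (H, W, n_bands) pan-sharpened multispectral
pan = np.load("pan.npy")                # (H, W) panchromatic band used for texture
ndsm = np.load("ndsm.npy")              # (H, W) normalized DSM, 0.5 m resolution
labels = np.load("training_mask.npy")   # (H, W): 0 = unlabelled, 1 = building, 2 = other

texture = glcm_features(pan)
stack = np.dstack([bands, ndsm[..., None], texture])       # per-pixel feature vector

train = labels > 0
X, y = stack[train], labels[train]
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X, y)
print("training accuracy:", accuracy_score(y, clf.predict(X)))

# Classify every pixel and save the building map for inspection in QGIS.
pred = clf.predict(stack.reshape(-1, stack.shape[-1])).reshape(labels.shape)
np.save("building_map.npy", pred)
```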
Year
Pages
39–59
Physical description
Bibliography: 53 items; photographs, figures, tables.
Authors
  • National Authority for Remote Sensing and Space Sciences, Cairo, Egypt
  • National Authority for Remote Sensing and Space Sciences, Cairo, Egypt
Bibliography
  • 1. Khalifa M.A.: Redefining slums in Egypt. Unplanned versus unsafe areas. Habitat International, vol. 35, 2011, pp. 40–49. https://doi.org/10.1016/j.habitatint.2010.03.004.
  • 2. United Nations: Sustainable Development Goals. https://www.un.org/sustainabledevelopment/cities/ [access: 7.06.2021].
  • 3. San D.K., Turker M.: Building extraction from high resolution satellite images using Hough transform. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVIII, part 8, 2010, pp. 10–63.
  • 4. Huang X., Zhang L.: Morphological Building/Shadow Index for Building Extraction from High-Resolution Imagery over Urban Areas. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 1, 2012, pp. 161–172. https://doi.org/10.1109/JSTARS.2011.2168195.
  • 5. Chaudhuri D., Kushwaha N.K., Samal A., Agarwal R.C.: Automatic Building Detection from High-Resolution Satellite Images Based on Morphology and Internal Gray Variance. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, 2016, pp. 1767–1779. https://doi.org/10.1109/JSTARS.2015.2425655.
  • 6. Uzar M.: Automatic Building Extraction with Multi-sensor Data Using Rule-based Classification. European Journal of Remote Sensing, vol. 47, 2014, pp. 1–18. https://doi.org/10.5721/EuJRS20144701.
  • 7. Mishra A., Pandey A., Baghel A.S.: Building Detection and Extraction Techniques: A Review. [in:] 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), 2016, pp. 3816–3821.
  • 8. Wang J., Qin Q., Chen L., Ye X., Qin X., Wang J., Chen C.: Automatic Building Extraction from Very High-Resolution Satellite Imagery Using Line Segment Detector. [in:] 2013 IEEE International Geoscience and Remote Sensing Symposium – IGARSS, 2013, pp. 212–215. https://doi.org/10.1109/IGARSS.2013.6721129.
  • 9. Wang M., Yuan S., Pan J.: Building Detection in High-Resolution Satellite Urban Image Using Segmentation, Corner Detection Combined with Adaptive Windowed Hough Transform. [in:] 2013 IEEE International Geoscience and Remote Sensing Symposium – IGARSS, 2013, pp. 508–511. https://doi.org/10.1109/IGARSS.2013.6721204.
  • 10. Liasis G., Stavrou S.: Building Extraction in Satellite Images Using Active Contours and Colour Features. International Journal of Remote Sensing, vol. 37, 2016, pp. 1127–1153. https://doi.org/10.1080/01431161.2016.1148283.
  • 11. Sun G., Huang H., Weng Q., Zhang A., Jia X., Ren J., Sun L., Chen X.: Combinational shadow index for building shadow extraction in urban areas from Sentinel-2A MSI imagery. International Journal of Applied Earth Observation and Geoinformation, vol. 78, 2019, pp. 53–65. https://doi.org/10.1016/j.jag.2019.01.012.
  • 12. Huang J., Zhang X., Xin Q., Sun Y., Zhang P.: Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 151, 2019, pp. 91–105. https://doi.org/10.1016/j.isprsjprs.2019.02.019.
  • 13. Wang X., Li P.: Extraction of urban building damage using spectral, height and corner information from VHR satellite images and airborne LiDAR data. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 159, 2020, pp. 322–336. https://doi.org/10.1016/j.isprsjprs.2019.11.028.
  • 14. Lu C., Yang X., Wang Z., Li Z.: Using multi-level fusion of local features for land-use scene classification with high spatial resolution images in urban coastal zones. International Journal of Applied Earth Observation and Geoinformation, vol. 70, 2018, pp. 1–12. https://doi.org/10.1016/j.jag.2018.03.010.
  • 15. Liu J., Li T., Xie P., Du S., Teng F., Yang X.: Urban big data fusion based on deep learning: An overview. Information Fusion, vol. 53, 2020, pp. 23–133. https://doi.org/10.1016/j.inffus.2019.06.016.
  • 16. Sohn G., Dowman I.: Extraction of buildings from high-resolution satellite data. [in:] Baltsavias E., Gruen A., Van Gool L. (eds.), Automated Extraction of Man-Made Objects from Aerial and Space Images (III), A.A. Balkema Publishers, Lisse 2001, pp. 345–355.
  • 17. Sheykhmousa M., Mahdianpari M., Ghamisi M., Homayoun S.: Support Vector Machine vs. Random Forest for Remote Sensing Image Classification: A Meta-Analysis and Systematic Review. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, 2020, pp. 6308–6325. https://doi.org/10.1109/JSTARS.2020.3026724.
  • 18. Belgiu M., Drǎguţ L.: Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 96, 2014, pp. 67–75. https://doi.org/10.1016/j.isprsjprs.2014.07.002.
  • 19. Deng C., Wu C.: The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 86, 2013, pp. 100–110. https://doi.org/10.1016/j.isprsjprs.2013.09.010.
  • 20. Chehata N., Guo L., Mallet C.: Airborne lidar feature selection for urban classification using random forests. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 39, 2009, pp. 207–212.
  • 21. Niemeyer J., Rottensteiner F., Soergel U.: Contextual classification of lidar data and building object detection in urban areas. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 87, 2014, pp. 152–165. https://doi.org/10.1016/j.isprsjprs.2013.11.001.
  • 22. Corcoran J., Knight J., Gallant A.: Influence of multi-source and multi-temporal remotely sensed and ancillary data on the accuracy of random forest classification of wetlands in northern Minnesota. Remote Sensing, vol. 5, 2013, pp. 3212–3238. https://doi.org/10.3390/rs5073212.
  • 23. Gislason P.O., Benediktsson J.A., Sveinsson J.R.: Random forests for land cover classification. Pattern Recognition Letters, vol. 27(4), 2006, pp. 294–300. https://doi.org/10.1016/j.patrec.2005.08.011.
  • 24. Cherkassky V., Ma Y.: Practical selection of SVM parameters and noise estimation for SVM regression. Neural Networks, vol. 17, no. 1, 2004, pp. 113–126. https://doi.org/10.1016/S0893-6080(03)00169-2.
  • 25. Mountrakis G., Im J., Ogole C.: Support vector machines in remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 66, no. 3, 2011, pp. 247–259. https://doi.org/10.1016/j.isprsjprs.2010.11.001.
  • 26. Vågen T.-G.: Remote sensing of complex land use change trajectories – a case study from the highlands of Madagascar. Agriculture, Ecosystems and Environment, vol. 115, 2006, pp. 219–228. https://doi.org/10.1016/j.agee.2006.01.007.
  • 27. Saini A., Pratibha: A Review on Various Techniques of Image Fusion for Quality Improvement of Images. International Journal of Advanced Research in Computer Science and Software Engineering, vol. 8(1), 2018.
  • 28. Wang X., Bai S., Li Z., Sui Y., Tao J.: The PAN and MS image fusion algorithm based on adaptive guided filtering and gradient information regulation. Information Sciences, vol. 545, 2021, pp. 381–402. https://doi.org/10.1016/j.ins.2020.09.006.
  • 29. Wang X., Wang Y., Zhou C., Yin L., Feng X.: Urban forest monitoring based on multiple features at the single tree scale by UAV. Urban Forestry & Urban Greening, vol. 58, 2021, 126958. https://doi.org/10.1016/j.ufug.2020.126958.
  • 30. Zhang Y., Sidibé D., Morel O., Mériaudeau F.: Deep multimodal fusion for semantic image segmentation: A survey. Image and Vision Computing, vol. 105, 2021, 104042. https://doi.org/10.1016/j.imavis.2020.104042.
  • 31. Liu X., Jiao L., Li L., Tang X., Guo Y.: Deep multi-level fusion network for multisource image pixel-wise classification. Knowledge-Based Systems, vol. 221, 2021, 106921. https://doi.org/10.1016/j.knosys.2021.106921.
  • 32. Rasti B., Ghamisi P.: Remote sensing image classification using subspace sensor fusion. Information Fusion, vol. 64, 2021, pp. 121–130. https://doi.org/10.1016/j.inffus.2020.07.002.
  • 33. ERDAS Imagine: Geospatial Modeling & Visualization. https://gmv.cast.uark.edu/photogrammetry/ [access: 2.01.2021].
  • 34. Cao H., Tao P., Li H., Shi J.: Bundle adjustment of satellite images based on an equivalent geometric sensor model with digital elevation model. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 156, 2019, pp. 169–183. https://doi.org/10.1016/j.isprsjprs.2019.08.011.
  • 35. Fonseca L., Namikawa L., Castejon E., Carvalho L., Pinho C., Pagamisse A.: Image Fusion for Remote Sensing Applications. [in:] Zheng Y. (ed.), Image Fusion and Its Applications, IntechOpen Limited, London 2011. https://doi.org/10.5772/22899.
  • 36. Ehlers M., Klonus S., Åstrand P.J., Rosso P.: Multi-sensor image fusion for pansharpening in remote sensing. International Journal of Image and Data Fusion, vol. 1(1), 2010, pp. 25–45. https://doi.org/10.1080/19479830903561985.
  • 37. Ma J., Yu J., Zhang J., Zhang Y., Bi Q., Wang G., Yang J., Long Y.: Processing Practice of Remote Sensing Image Based on Spatial Modeler. [in:] 2012 2nd International Conference on Remote Sensing, Environment and Transportation Engineering, pp. 1–5. https://doi.org/10.1109/RSETE.2012.6260666.
  • 38. Riyahi R., Kleinn C., Fuchs H.: Comparison of different image fusion techniques for individual tree crown identification using quickbird images. ISPRS Archives, vol. XXXVIII-1-4-7/W5, 2009, pp. 1–4.
  • 39. Møller-Jensen L.: Classification of urban land cover based on expert systems, object models and texture. Computers, Environment and Urban Systems, vol. 21, no. 3/4, 1997, pp. 291–302. https://doi.org/10.1016/S0198-9715(97)01004-1.
  • 40. Haralick R.M., Shanmuga K., Dinstein I.: Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3(6), 1973, pp. 610–621. https://doi.org/10.1109/TSMC.1973.4309314.
  • 41. Liu Y., Chen X., Wang Z., Jane Z.W., Ward R.K., Wang X.: Deep learning for pixel-level image fusion: Recent advances and future prospects. Information Fusion, vol. 42, 2018, pp. 158–173. https://doi.org/10.1016/j.inffus.2017.10.007.
  • 42. Pedregosa F., Varoquaux G., Gramfort A., Michel V., Thirion B., Grisel O., Blondel M., Prettenhofer P., Weiss R., Dubourg V., Vanderplas J., Passos A., Cournapeau D., Brucher M., Perrot M., Duchesnay E.: Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, vol. 12, 2011, pp. 2825–2830.
  • 43. Salah M., Trinder J.C., Shaker A., Hamed M., Elsagheer A.: Integrating multiple classifiers with fuzzy majority voting for improved land cover classification. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVIII, part 3A, 2010, pp. 7–12.
  • 44. Tricht K.V., Gobin A., Gilliams S., Piccard I.: Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium. Remote Sensing, vol. 10(10), 2018, 1642. https://doi.org/10.3390/rs10101642.
  • 45. Elshehaby A.R., Taha L.G.: A new expert system module for building detection in urban areas using spectral information and LIDAR data. Applied Geomatics, vol. 1(4), 2009, pp. 97–110. https://doi.org/10.1007/s12518-009-0013-1.
  • 46. Maxwell A.E., Warner T.A., Guillén L.A.: Accuracy Assessment in Convolutional Neural Network-Based Deep Learning Remote Sensing Studies – Part 1: Literature Review. Remote Sensing, vol. 13(13), 2021, 2450. https://doi.org/10.3390/rs13132450.
  • 47. Zhang X., Han L., Han L., Zhu L.: How Well Do Deep Learning-Based Methods for Land Cover Classification and Object Detection Perform on High Resolution Remote Sensing Imagery? Remote Sensing, vol. 12(3), 2020, 417. https://doi.org/10.3390/rs12030417.
  • 48. Mohamed A.E.: Comparative Study of Four Supervised Machine Learning Techniques for Classification. International Journal of Applied Science and Technology, vol. 7, no. 2, 2017, pp. 5–18.
  • 49. Ming D., Zhou T., Wang M., Tan T.: Land cover classification using random forest with genetic algorithm-based parameter optimization. Journal of Applied Remote Sensing, vol. 10, no. 3, 2016, 035021. https://doi.org/10.1117/1.JRS.10.035021.
  • 50. Jhonnerie R., Siregar V.P., Nababan B., Prasetyo L.B., Wouthuyzen S.: Random forest classification for mangrove land cover mapping using Landsat 5 TM and ALOS PALSAR imageries. Procedia Environmental Sciences, vol. 24, 2015, pp. 215–221. https://doi.org/10.1016/j.proenv.2015.03.028.
  • 51. Rao K.V.R., Kumar P.R.: Land Cover Classification Using Sentinel-1 SAR Data. International Journal for Research in Applied Science and Engineering Technology, vol. 5, no. 12, 2017, pp. 1054–1060.
  • 52. Belgiu M., Drǎguţ L.: Random forest in remote sensing: A review of applications and future directions. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 114, 2016, pp. 24–31. https://doi.org/10.1016/j.isprsjprs.2016.01.011.
  • 53. Delgado F., Cernadas E., Barro S., Amorim D.: Do we need hundreds of classifiers to solve real world classification problems? Journal of Machine Learning Research, vol. 15, no. 1, 2014, pp. 3133–3181.
Notes
Record developed with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the programme „Społeczna odpowiedzialność nauki” (Social Responsibility of Science), module: Popularisation of Science and Promotion of Sport (2022–2023).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-fe2625aa-a121-4f3d-bf73-bbdfd03dc704