Article title

Encapsulation of image metadata for ease of retrieval and mobility

Publication languages
EN
Abstracts
EN
The increasing proliferation of images produced by the multimedia capabilities of hand-held devices has led to the loss of source information, a consequence of the images' inherent mobility. Once stored away from their original source, such images become cumbersome to search because they shed their descriptive data. This work developed a model that encapsulates descriptive metadata in the Exif section of the image header for effective retrieval and mobility. The resulting metadata used for retrieval was mobile, searchable and non-obstructive.
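The abstract's core idea — embedding descriptive, searchable metadata directly in the image file's Exif (APP1) header so that it travels with the image — can be illustrated with a minimal sketch. The paper does not publish its implementation; the functions below are hypothetical and, for brevity, store plain UTF-8 text in an APP1 segment rather than a full TIFF/IFD Exif structure:

```python
import struct

SOI = b"\xff\xd8"          # JPEG start-of-image marker
APP1 = b"\xff\xe1"         # marker of the segment that carries Exif data
EXIF_HEADER = b"Exif\x00\x00"

def embed_description(jpeg_bytes, text):
    """Insert an APP1 segment carrying descriptive text right after SOI.
    Sketch only: real Exif wraps fields in a TIFF/IFD structure."""
    assert jpeg_bytes.startswith(SOI), "not a JPEG stream"
    payload = EXIF_HEADER + text.encode("utf-8")
    # The 2-byte length field counts itself plus the payload.
    segment = APP1 + struct.pack(">H", len(payload) + 2) + payload
    return SOI + segment + jpeg_bytes[2:]

def extract_description(jpeg_bytes):
    """Walk the marker segments and return the text of the first
    Exif APP1 segment, or None if the image carries no such segment."""
    i = 2  # skip SOI
    while i + 4 <= len(jpeg_bytes):
        marker = jpeg_bytes[i:i + 2]
        if marker[0] != 0xFF or marker == b"\xff\xd9":  # corrupt or EOI
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        body = jpeg_bytes[i + 4:i + 2 + length]
        if marker == APP1 and body.startswith(EXIF_HEADER):
            return body[len(EXIF_HEADER):].decode("utf-8")
        i += 2 + length  # jump to the next marker
    return None

# Usage: tag the smallest possible JPEG skeleton, then read the tag back.
minimal = SOI + b"\xff\xd9"
tagged = embed_description(minimal, "beach; Lagos; 2017-10-03")
print(extract_description(tagged))  # -> beach; Lagos; 2017-10-03
```

Because the segment rides inside the file itself, the description survives copying between devices — the mobility property the abstract claims — while remaining invisible to ordinary image viewers (non-obstructive).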
Pages
62–73
Physical description
Bibliography: 34 items, figures
Creators
author
  • University of Ibadan, Faculty of Science, Department of Computer Science, Oyo State, Ibadan, Nigeria
  • University of Ibadan, Faculty of Science, Department of Computer Science, Oyo State, Ibadan, Nigeria
Bibliography
  • [1] 16 mobile market statistics you should know in 2016. (2016, August 22). In Afilias Technologies Ltd. Retrieved from Device Atlas: https://deviceatlas.com/blog/16-mobile-market-statistics-you-should-know-2016
  • [2] Ames, M., & Naaman, M. (2007). Why We Tag: Motivations for Annotation in Mobile and Online Media. CHI 2007, Tags, Tagging & Notetaking (pp. 971-980). California: ACM. doi:10.1145/1240624.1240772
  • [3] Chaffey, D. (2016). Global social media research summary 2016. Retrieved August 22, 2016, from Smart Insights: http://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/
  • [4] Duygulu, P., Barnard, K., Freitas, N., & Forsyth, D. (2002). Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary. The 7th European Conference on Computer Vision (pp. 97–112). Copenhagen.
  • [5] Extensible Metadata Platform (XMP). (2014, January 8). In Adobe Systems Incorporated. Retrieved from Adobe Systems Incorporated Web site: http://www.adobe.com/products/xmp.html
  • [6] Feng, Y., & Lapata, M. (2008). Automatic Image Annotation Using Auxiliary Text Information. Association for Computational Linguistics -08 (pp. 272–280). Columbus: Association for Computational Linguistics.
  • [7] Gozali, J. P., Kan, M.-Y., & Sundaram, H. (2012). How do people organize their photos in each event and how does it affect storytelling, searching and interpretation tasks? Proceedings of the 12th ACM/IEEE-CS joint conference on Digital Libraries (pp. 315–324). Washington, DC: ACM New York. doi:10.1145/2232817.2232875
  • [8] Hanbury, A. (2008). A survey of methods for image annotation. Journal of Visual Languages & Computing, 19(5), 617–627. doi:10.1016/j.jvlc.2008.01.002
  • [9] Internet World Stats. Usage and population Statistics. (2016, August 22). In Internet World Stats. Retrieved from Internet World Stats: http://www.internetworldstats.com/stats1.htm
  • [10] IPTC Photo Metadata Standard. (2016, January 22). In International Press Telecommunications Council. Retrieved from International Press Telecommunications Council Website: https://iptc.org/standards/photo-metadata/iptc-standard
  • [11] Ivasic-Kos, M., Pobar, M., & Ribaric, S. (2016). Two-tier image annotation model based on a multi-label classifier and fuzzy-knowledge representation scheme. Pattern Recognition, 52, 287–305. doi:10.1016/j.patcog.2015.10.017
  • [12] Jaimes, A. (2006). Human Factors in Automatic Image Retrieval System Design and Evaluation. Proceedings of SPIE Vol. #6061, Internet Imaging VII. San Jose, CA, USA. doi:10.1117/12.660255
  • [13] Japan Electronics and Information Technology Industries Association. (2002). Exchangeable image file format for digital still cameras: Exif Version 2.3. Japan: Japan Electronics and Information Technology Industries Association.
  • [14] Jeon, J., Lavrenko, V., & Manmatha, R. (2003). Automatic Image Annotation and Retrieval using Cross-Media Relevance Models. SIGIR’03. Toronto: ACM. doi:10.1145/860435.860459
  • [15] Kuric, E., & Bieliková, M. (2015). ANNOR: Efficient image annotation based on combining local and global features. Computers & Graphics, 47, 1–15. doi:10.1016/j.cag.2014.09.035
  • [16] Kustanowitz, J., & Shneiderman, B. (2005). Motivating Annotation for Personal Digital Photo Libraries: Lowering Barriers While Raising Incentives. Tech. Report HCIL–2004–18. University of Maryland.
  • [17] Lavrenko, V., Manmatha, R., & Jeon, J. (2003). A model for learning the semantics of pictures. The 16th Conference on Advances in Neural Information Processing Systems. Vancouver.
  • [18] Makadia, A., Pavlovic, V., & Kumar, S. (2008). A New Baseline for Image Annotation. In T. P. Forsyth D. (Ed.), ECCV '08 Proceedings of the 10th European Conference on Computer Vision: Part III (pp. 316–329). Berlin, Heidelberg: Springer. doi:0.1.1.145.9205
  • [19] Monthly Subscriber Data. (2017, August 22). In The Nigerian Communications Commission. Retrieved from NCC Subscriber Statistics: http://ncc.gov.ng/index.php?option=com_content&view=article&id=125:subscriber-statistics&catid=65:industry-information&Itemid=73
  • [20] Mori, Y., Takahashi, H., & Oka, R. (1999). Image-to-word transformation based on dividing and vector quantizing images with words. Proceedings of the 1st International Workshop on Multimedia Intelligent Storage and Retrieval Management. Orlando. doi:10.1.1.31.1704
  • [21] National Information Standards Organization. (2004). Understanding Metadata. Bethesda, USA: NISO Press.
  • [22] National Information Standards Organization. (2015). RLG Technical Metadata for Images Workshop Report. Retrieved from National Information Standards Organization: http://www.niso.org/imagerpt.html
  • [23] Numbers, Facts and Trends Shaping Your World. (2015, July 20). In Pew Research Centre. Retrieved from http://www.pewresearch.org
  • [24] Pew Global. (2016, August 22). In Pew Research Center. Retrieved from Pew Research Center, Global Attitudes & Trends: http://www.pewglobal.org/2015/04/15/cell-phones-in-africa-communication-lifeline
  • [25] Rodden, K., & Wood, K. R. (2003). How Do People Manage Their Digital Photographs? SIGCHI Conference on Human Factors in Computing Systems (pp. 409–416). Florida: ACM New York. doi:10.1145/642611.642682
  • [26] Smartphone Users Worldwide Will Total 1.75 Billion in 2014. (2015, July 20). In eMarketer. Retrieved from http://www.emarketer.com/Article/Smartphone-Users-Worldwide-Will-Total-175-Billion-2014/1010536
  • [27] Smith, A. (2015). US smartphone use in 2015. Retrieved July 20, 2015, from Pew Research Centre: http://www.pewinternet.org/2015/04/01/us-smartphone-use-in-2015/
  • [28] Strydom, T. (2015). Facebook rakes in users in Nigeria and Kenya, eyes rest of Africa. Retrieved August 22, 2016, from Reuters: http://www.reuters.com/article/us-facebook-africa-idUSKCN0RA17L20150910
  • [29] Wang, X.-J., Zhang, L., Jing, F., & Ma, W.-Y. (2006). AnnoSearch: Image Auto-Annotation by Search. The 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE. doi:10.1109/CVPR.2006.58
  • [30] Wenyin, L., Dumais, S., Sun, Y., Zhang, H., Czerwinski, M., & Field, B. (2001). Semi-Automatic Image Annotation. INTERACT '01: IFIP TC13 International Conference on Human-Computer Interaction (pp. 326–333). IOS Press.
  • [31] Weston, J., Bengio, S., & Usunier, N. (2010). Large scale image annotation: learning to rank with joint word-image embeddings. Machine Learning, 81, 21–35. doi:10.1007/s10994-010-5198-3
  • [32] Woods, N. C. (2017). Low-level Multimedia Recognition and Classification for Intelligence and Forensic Analysis. Unpublished Thesis.
  • [33] World Population Review. (2017, October 3). In World Population Review. Retrieved from http://worldpopulationreview.com/countries/nigeria-population
  • [34] Zhang, D., Islam, M. M., & Lu, G. (2012). A review on automatic image annotation techniques. Pattern Recognition, 45(1), 346–362. doi:10.1016/j.patcog.2011.05.013
Notes
Record compiled under agreement 509/P-DUN/2018, with funds of the Polish Ministry of Science and Higher Education (MNiSW) allocated for activities popularising science (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-ae0e9a6a-8419-42cf-959c-576fd8524a84