Article title

Google Books Ngrams Recompressed and Searchable

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
One of the research fields significantly affected by the emergence of “big data” is computational linguistics. A prominent example of a large dataset targeting this domain is the collection of Google Books Ngrams, made freely available, for several languages, in July 2009. There are two problems with Google Books Ngrams: the textual format (compressed with Deflate) in which they are distributed is highly inefficient, and we are not aware of any tool facilitating search over those data, apart from the Google viewer, which, as a Web tool, has seriously limited use. In this paper we present a simple preprocessing scheme for Google Books Ngrams which also enables retrieval of an arbitrary n-gram (i.e., its associated statistics) in average time below 0.2 ms. The obtained compression ratio, with Deflate (zip) left as the backend coder, is over 3 times higher than in the original distribution.
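The abstract refers to the plain tab-separated format of the July 2009 Google Books Ngrams files and to Deflate as the backend coder. The following is a minimal Python sketch, assuming the v1 column layout (ngram, year, match_count, page_count, volume_count); the function names and sample records are hypothetical, and it only illustrates the record format and a zlib (Deflate) compression step, not the authors' preprocessing scheme or their lookup structure.

import zlib

def parse_ngram_record(line):
    # Split one tab-separated v1 record into its five fields.
    ngram, year, matches, pages, volumes = line.rstrip("\n").split("\t")
    return ngram, int(year), int(matches), int(pages), int(volumes)

def deflate_block(records, level=9):
    # Concatenate a block of raw records and compress it with Deflate (zlib).
    return zlib.compress("".join(records).encode("utf-8"), level)

if __name__ == "__main__":
    block = [
        "circumvallate\t1978\t335\t91\t91\n",   # made-up records in the assumed v1 column order
        "circumvallate\t1979\t261\t91\t91\n",
    ]
    print(parse_ngram_record(block[0]))
    print(len(deflate_block(block)), "bytes after Deflate")

Achieving the sub-millisecond per-n-gram lookup reported in the abstract would additionally require some index over the compressed blocks, which this sketch does not attempt.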
Year
Pages
273-283
Physical description
Bibliography: 20 items
Authors
  • Lodz University of Technology, Institute of Applied Computer Science, al. Politechniki 11, 90-924 Łódź, Poland
author
  • University of Szczecin, Institute of Information Technology in Management, Mickiewicza 64, 71-101 Szczecin, Poland
Bibliografia
  • [1] Brants T., Popat A. C., Xu P., Och F. J., Dean J., Large language models in machine translation, in: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, ACL 2007, 858-867.
  • [2] Gao J., Nguyen P., Li X., Thrasher C., Li M., Wang K., A Comparative Study of Bing Web N-gram Language Models for Web Search and Natural Language Processing, in: Workshop of the 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Geneva 2010.
  • [3] Grabowski Sz., Swacha J., Compact Representation of URL Collections with Fast Access, Automatyka, 15, 3, 2011, 349-355.
  • [4] Guthrie D., Hepple M., Liu W., Efficient Minimal Perfect Hash Language Models, in: N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, M. Rosner, D. Tapias (eds.), Proceedings of the Seventh International Conference on Language Resources and Evaluation, Valletta, ELRA 2010.
  • [5] Michel J.-B., Kui Y., Presser A., Veres A., Gray M. K., Google Books Team, Pickett J. P., Hoiberg D., Clancy D., Norvig P., Orwant J., Pinker S., Nowak M. A., Lieberman Aiden E., Quantitative Analysis of Culture Using Millions of Digitized Books, Science, 331, 6014, 2011, 176-182.
  • [6] Microsoft Research, Spelling Alteration for Web Search Workshop, City Center - Bellevue, WA, July 19, 2011. Materials available at http://webngram.research.microsoft.com/Spellerchallenge/Docs/Spelling_Alteration_Workshop.pdf (last checked: June 2012).
  • [7] Pauls A., Klein D., Faster and Smaller N-Gram Language Models, in: Y. Matsumoto, R. Mihalcea (eds.), Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, Stroudsburg, ACL 2011, 258-267.
  • [8] Procházka V., Pollák P., Analysis of Czech Web 1T 5-Gram Corpus and Its Comparison with Czech National Corpus Data, in: P. Sojka, A. Horák, I. Kopecek, K. Pala (eds.), Proceedings of the 13th International Conference Text, Speech and Dialogue, Brno, Springer 2010, 181-188.
  • [9] Skibiński P., Grabowski Sz., Swacha J., Effective asymmetric XML compression, Software-Practice and Experience, 38, 10, 2008, 1027-1047.
  • [10] Talbot D., Brants T., Randomized Language Models via Perfect Hash Functions, in: Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Columbus, ACL 2008, 505-513.
  • [11] Witten I. H., Moffat A., Bell T. C., Managing Gigabytes: Compressing and Indexing Documents and Images, Morgan Kaufmann Publishers, Los Altos, 1999.
  • [12] Ziv J., Lempel A., A Universal Algorithm for Sequential Data Compression, IEEE Transactions on Information Theory, 23, 3, 1977, 337-343.
  • [13] http://books.google.com/ngrams (last checked: June 2012).
  • [14] http://books.google.com/ngrams/datasets (last checked: June 2012).
  • [15] http://books.google.com/ngrams/info (last checked: June 2012).
  • [16] http://iiwz.wneiz.pl/jakubs/progs/ngram_compressor.zip (last checked: June 2012).
  • [17] http://research.microsoft.com/en-us/collaboration/focus/cs/web-ngram.aspx (last checked: June 2012).
  • [18] http://www.base2ti.com (last checked: June 2012).
  • [19] http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2006T13 (last checked: June 2012).
  • [20] http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2011T07 (last checked: June 2012).
Document type
YADDA identifier
bwmeta1.element.baztech-80a53b54-1000-46c8-9313-50be6700f7fb