

Article title

A cognitively plausible model for grammar induction

Authors
Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
This paper aims to bring theoretical linguistics and cognition-general theories of learning into closer contact. I argue that linguists’ notions of rich Universal Grammars (UGs) are well-founded, but that cognition-general learning approaches are viable as well and that the two can and should co-exist and support each other. Specifically, I use the observation that any theory of UG provides a learning criterion – the total memory space used to store a grammar and its encoding of the input – that supports learning according to the principle of Minimum Description Length. This mapping from UGs to learners maintains a minimal ontological commitment: the learner for a particular UG uses only what is already required to account for linguistic competence in adults. I suggest that such learners should be our null hypothesis regarding the child’s learning mechanism, and that furthermore, the mapping from theories of UG to learners provides a framework for comparing theories of UG.
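The learning criterion described in the abstract – the total memory space used to store a grammar plus its encoding of the input – can be illustrated with a toy sketch. The following is a hypothetical example, not from the paper itself: the "grammars" are just flat lexicons over a toy corpus, the greedy longest-match segmentation and the uniform codes are simplifying assumptions, and the 27-symbol alphabet is arbitrary. The point is only to show the MDL trade-off: a memorized corpus is cheap to encode but expensive to store, single letters are cheap to store but expensive to encode, and an intermediate hypothesis minimizes the sum.

```python
import math

def grammar_cost(lexicon, alphabet_size=27):
    # Bits to write the grammar down: each lexical entry spelled out
    # symbol by symbol, plus one end-of-entry marker per entry.
    # (Toy assumption: uniform code over alphabet symbols + marker.)
    bits_per_symbol = math.log2(alphabet_size + 1)
    return sum((len(w) + 1) * bits_per_symbol for w in lexicon)

def parse(corpus, lexicon):
    # Greedy left-to-right segmentation, longest entry first;
    # returns None if the lexicon cannot parse the corpus.
    tokens, i = [], 0
    while i < len(corpus):
        for w in sorted(lexicon, key=len, reverse=True):
            if corpus.startswith(w, i):
                tokens.append(w)
                i += len(w)
                break
        else:
            return None
    return tokens

def data_cost(corpus, lexicon):
    # Bits to encode the corpus given the grammar: a uniform code over
    # the lexical entries plus one extra "stop" symbol.
    tokens = parse(corpus, lexicon)
    if tokens is None:
        return None
    return (len(tokens) + 1) * math.log2(len(lexicon) + 1)

corpus = "abababababababab"
for lexicon in [{corpus}, {"ab"}, {"a", "b"}]:
    g, d = grammar_cost(lexicon), data_cost(corpus, lexicon)
    # The intermediate hypothesis {"ab"} gets the lowest total.
    print(sorted(lexicon), round(g + d, 1))
```

Under these (arbitrary) coding assumptions the lexicon {"ab"} minimizes grammar cost plus data cost, which is the MDL intuition the abstract appeals to; any actual theory of UG would supply its own, far richer, encoding scheme.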
Year
Pages
213–248
Physical description
Bibliography: 140 items, figures, tables
Contributors
author
  • Tel Aviv University, Israel
Bibliography
  • [1] Klaus Abels and Ad Neeleman (2010), Nihilism masquerading as progress, Lingua, 120 (12): 2657-2660, ISSN 0024-3841.
  • [2] Dana Angluin (1980), Inductive inference of formal languages from positive data, Information and Control, 45: 117-135.
  • [3] Dana Angluin (1982), Inference of Reversible Languages, Journal of the Association for Computing Machinery, 29 (3): 741-765.
  • [4] Dana Angluin (1988), Identifying Languages from Stochastic Examples, Technical Report 614, Yale University.
  • [5] Richard N. Aslin, Jenny R. Saffran, and Elissa L. Newport (1998), Computation of conditional probability statistics by 8-month old infants, Psychological Science, 9: 321-324.
  • [6] Carl L. Baker (1979), Syntactic theory and the projection problem, Linguistic Inquiry, 10 (4): 533-581.
  • [7] Eleanor Batchelder (2002), Bootstrapping the Lexicon: A Computational Model of Infant Speech Segmentation, Cognition, 83: 167-206.
  • [8] Michael Becker, Nihan Ketrez, and Andrew Nevins (2011), The Surfeit of the Stimulus: Analytic Biases Filter Lexical Statistics in Turkish Laryngeal Alternations, Language, 87 (1): 84-125.
  • [9] Robert C. Berwick (1982), Locality Principles and the Acquisition of Syntactic Knowledge, Ph.D. thesis, MIT, Cambridge, MA.
  • [10] Robert C. Berwick, Paul Pietroski, Beracah Yankama, and Noam Chomsky (2011), Poverty of the Stimulus Revisited, Cognitive Science, 35 (7): 1207-1242, ISSN 1551-6709.
  • [11] Paul Boersma and Bruce Hayes (2001), Empirical Tests of the Gradual Learning Algorithm, Linguistic Inquiry, 32: 45-86.
  • [12] Luca Bonatti, Marcela Peña, Marina Nespor, and Jacques Mehler (2005), Linguistic Constraints on Statistical Computations, Psychological Science, 16 (6): 451-459.
  • [13] Martin D. S. Braine (1971), On Two Types of Models of the Internalization of Grammars, in D. J. Slobin, editor, The Ontogenesis of Grammar, pp. 153-186, Academic Press.
  • [14] Michael Brent (1999), An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery, Machine Learning, 34 (1-3): 71-105.
  • [15] Michael Brent and T. Cartwright (1996), Distributional Regularity and Phonotactic Constraints are useful for Segmentation, Cognition, 61: 93-125.
  • [16] Joan Bresnan and Ronald M. Kaplan (1982), Grammars as Mental Representations of Language, in The Mental Representation of Grammatical Relations, MIT Press.
  • [17] Roger Brown and Camille Hanlon (1970), Derivational Complexity and the Order of Acquisition of Child Speech, in J. R. Hayes, editor, Cognition and the Development of Language, pp. 11-53, Wiley, New York.
  • [18] Gregory J. Chaitin (1966), On the Length of Programs for Computing Finite Binary Sequences, Journal of the ACM, 13: 547-569.
  • [19] Nancy Chih-Lin Chang (2008), Constructing grammar: A computational model of the emergence of early constructions, Ph.D. thesis, University of California, Berkeley, CA.
  • [20] Moses Charikar, Eric Lehman, Ding Liu, Rina Panigrahy, Manoj Prabhakaran, Amit Sahai, and Abhi Shelat (2005), The smallest grammar problem, IEEE Transactions on Information Theory, 51 (7): 2554-2576.
  • [21] Nick Chater and Paul Vitányi (2007), ‘Ideal Learning’ of Natural Language: Positive Results about Learning from Positive Evidence, Journal of Mathematical Psychology, 51: 135-163.
  • [22] Stanley Chen (1996), Building Probabilistic Models for Natural Language, Ph.D. thesis, Harvard University, Cambridge, MA.
  • [23] Noam Chomsky (1965), Aspects of the Theory of Syntax, MIT Press, Cambridge, MA.
  • [24] Noam Chomsky (1981), Lectures on Government and Binding, Foris, Dordrecht.
  • [25] Noam Chomsky and Morris Halle (1968), The Sound Pattern of English, Harper and Row Publishers, New York.
  • [26] Morten Christiansen, Joseph Allen, and Mark Seidenberg (1998), Learning to Segment Speech using Multiple Cues: A Connectionist Model, Language and Cognitive Processes, 13 (2/3): 221-268.
  • [27] Alexander Clark (2001), Unsupervised Language Acquisition: Theory and Practice, Ph.D. thesis, University of Sussex.
  • [28] Alexander Clark and Rémi Eyraud (2007), Polynomial identification in the limit of context-free substitutable languages, Journal of Machine Learning Research, 8: 1725-1745.
  • [29] Alexander Clark and Shalom Lappin (2011), Linguistic Nativism and the Poverty of the Stimulus, Wiley-Blackwell.
  • [30] Robin Clark and Ian Roberts (1993), A computational model of language learnability and language change, Linguistic Inquiry, 24 (2): 299-346.
  • [31] Stephen Crain, Drew Khlentzos, and Rosalind Thornton (2010), Universal Grammar versus language diversity, Lingua, 120 (12): 2668-2672, ISSN 0024-3841.
  • [32] Stephen Crain and Paul Pietroski (2002), Why Language Acquisition is a Snap, The Linguistic Review, 19: 163-183.
  • [33] Carl de Marcken (1996), Unsupervised Language Acquisition, Ph.D. thesis, MIT, Cambridge, MA.
  • [34] François Dell (1981), On the learnability of optional phonological rules, Linguistic Inquiry, 12 (1): 31-37.
  • [35] Łukasz Dębowski (2011), On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts, IEEE Transactions on Information Theory, 57 (7): 4589-4599, ISSN 0018-9448, doi: 10.1109/TIT.2011.2145170.
  • [36] Mike Dowman (2007), Minimum Description Length as a Solution to the Problem of Generalization in Syntactic Theory, ms., University of Tokyo, Under review.
  • [37] Fred C. Dyer and Jeffrey A. Dickinson (1994), Development of sun compensation by honeybees: How partially experienced bees estimate the sun’s course, Proceedings of the National Academy of Sciences, 91 (10): 4471-4474.
  • [38] Ansgar Endress, Ghislaine Dehaene-Lambertz, and Jacques Mehler (2007), Perceptual Constraints and the Learnability of Simple Grammars, Cognition, 105 (3): 577-614.
  • [39] Ansgar Endress, Marina Nespor, and Jacques Mehler (2009), Perceptual and Memory Constraints on Language Acquisition, Trends in Cognitive Sciences, 13 (8): 348-353.
  • [40] Ansgar D. Endress and Luca L. Bonatti (2013), Single vs. Multiple mechanism models of artificial grammar learning, under review.
  • [41] Ansgar D. Endress and Jacques Mehler (2009), Primitive computations in speech processing, The Quarterly Journal of Experimental Psychology, 62 (11): 2187-2209.
  • [42] Ansgar D. Endress and Jacques Mehler (2010), Perceptual constraints in phonotactic learning, Journal of Experimental Psychology: Human Perception and Performance, 36 (1): 235-250.
  • [43] Nicholas Evans and Stephen Levinson (2009), The Myth of Language Universals: Language Diversity and its Importance for Cognitive Science, Behavioral and Brain Sciences, 32: 429-492.
  • [44] Olga Feher, Haibin Wang, Sigal Saar, Partha P. Mitra, and Ofer Tchernichovski (2009), De novo establishment of wild-type song culture in the zebra finch, Nature, 459 (7246): 564-568.
  • [45] Jacob Feldman (2000), Minimization of Boolean complexity in human concept learning, Nature, 407 (6804): 630-633.
  • [46] Jacob Feldman (2006), An algebra of human concept learning, Journal of Mathematical Psychology, 50 (4): 339-368, ISSN 0022-2496.
  • [47] W. Tecumseh Fitch and Marc D. Hauser (2004), Computational constraints on syntactic processing in a nonhuman primate, Science, 303 (5656): 377-380.
  • [48] Stephani Foraker, Terry Regier, Naveen Khetarpal, Amy Perfors, and Joshua Tenenbaum (2009), Indirect Evidence and the Poverty of the Stimulus: The Case of Anaphoric One, Cognitive Science, 33 (2): 287-300, ISSN 1551-6709.
  • [49] John Garcia, Walter Hankins, and Kenneth Rusiniak (1974), Behavioral Regulation of the Milieu Interne in Man and Rat, Science, 185 (4154): 824-831.
  • [50] Edward Gibson and Kenneth Wexler (1994), Triggers, Linguistic Inquiry, 25 (3): 407-454.
  • [51] E. Mark Gold (1967), Language Identification in the Limit, Information and Control, 10: 447-474.
  • [52] John Goldsmith (2001), Unsupervised Learning of the Morphology of a Natural Language, Computational Linguistics, 27 (2): 153-198.
  • [53] Noah D. Goodman, Joshua B. Tenenbaum, Jacob Feldman, and Thomas L. Griffiths (2008), A Rational Analysis of Rule-Based Concept Learning, Cognitive Science, 32 (1): 108-154.
  • [54] Thomas Griffiths and Joshua Tenenbaum (2006), Optimal Predictions in Everyday Cognition, Psychological Science, 17 (9): 767-773.
  • [55] Peter Grünwald (1996), A Minimum Description Length Approach to Grammar Inference, in G. S. S. Wermter and E. Riloff, editors, Connectionist, and Symbolic Approaches to Learning for Natural Language Processing, Springer Lecture Notes in Artificial Intelligence, pp. 203-216, Springer.
  • [56] Martin Hackl, Jorie Koster-Hale, and Jason Varvoutis (2012), Quantification and ACD: Evidence from Real-Time Sentence Processing, Journal of Semantics, 29 (2): 145-206.
  • [57] Daniel Harbour (2011), Mythomania? Methods and morals from ‘The Myth of Language Universals’, Lingua, 121 (12): 1820-1830, ISSN 0024-3841.
  • [58] Zellig S. Harris (1955), From Phoneme to Morpheme, Language, 31 (2): 190-222.
  • [59] Jeffrey Heinz (2007), The Inductive Learning of Phonotactic Patterns, Ph.D. thesis, University of California, Los Angeles.
  • [60] Jeffrey Heinz (2010), String Extension Learning, in Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 897-906.
  • [61] Jeffrey Heinz and William Idsardi (2013), What Complexity Differences Reveal About Domains in Language, Topics in cognitive science, 5 (1): 111-131.
  • [62] Laurence Horn (1972), On the Semantic Properties of the Logical Operators in English, Ph.D. thesis, UCLA.
  • [63] Laurence Horn (2011), Histoire d’*O: Lexical Pragmatics and the Geometry of Opposition, in Jean-Yves Béziau and Gilbert Payette, editors, The Square of Opposition: A General Framework for Cognition, pp. 383-416, Peter Lang.
  • [64] James Horning (1969), A Study of Grammatical Inference, Ph.D. thesis, Stanford.
  • [65] Anne S. Hsu and Nick Chater (2010), The Logical Problem of Language Acquisition: A Probabilistic Perspective, Cognitive Science, 34 (6): 972-1016, ISSN 1551-6709.
  • [66] Anne S. Hsu, Nick Chater, and Paul M. B. Vitányi (2011), The probabilistic analysis of language acquisition: Theoretical, computational, and experimental analysis, Cognition, 120 (3): 380-390, ISSN 0010-0277.
  • [67] Tim Hunter and Jeffrey Lidz (2013), Conservativity and Learnability of Determiners, Journal of Semantics, 30 (3): 315-334.
  • [68] Elizabeth K. Johnson and Peter W. Jusczyk (2001), Word Segmentation by 8-Month Olds: When Speech Cues count more than Statistics, Journal of Memory and Language, 44: 548-567.
  • [69] Shyam Kapur (1991), Computational learning of languages, Ph.D. thesis, Cornell University, Ithaca, NY.
  • [70] Roni Katzir and Raj Singh (2013), Constraints on the Lexicalization of Logical Operators, Linguistics and Philosophy, 36 (1): 1-29.
  • [71] John C. Kieffer and En-hui Yang (2000), Grammar-based codes: a new class of universal lossless source codes, IEEE Transactions on Information Theory, 46 (3): 737-754, ISSN 0018-9448, doi: 10.1109/18.841160.
  • [72] Simon Kirby (2000), Syntax without natural selection: How compositionality emerges from vocabulary in a population of learners, in The evolutionary emergence of language: Social function and the origins of linguistic form, pp. 303-323.
  • [73] Simon Kirby (2002), Learning, bottlenecks and the evolution of recursive syntax, Linguistic evolution through language acquisition: Formal and computational models, pp. 173-203.
  • [74] Simon Kirby, Kenny Smith, and Henry Brighton (2004), From UG to Universals., Studies in Language, 28 (3): 587-607.
  • [75] Scott Kirkpatrick, C. Daniel Gelatt, and Mario P. Vecchi (1983), Optimization by Simulated Annealing, Science, 220 (4598): 671-680.
  • [76] Andrei Nikolaevic Kolmogorov (1965), Three Approaches to the Quantitative Definition of Information, Problems of Information Transmission (Problemy Peredachi Informatsii), 1: 1-7, republished as Kolmogorov (1968).
  • [77] Andrei Nikolaevic Kolmogorov (1968), Three Approaches to the Quantitative Definition of Information, International Journal of Computer Mathematics, 2: 157-168.
  • [78] Takeshi Koshiba, Erkki Mäkinen, and Yuji Takada (1997), Learning deterministic even linear languages from positive examples, Theoretical Computer Science, 185 (1): 63-79, ISSN 0304-3975.
  • [79] Kenneth J. Kurtz, Kimery R. Levering, Roger D. Stanton, Joshua Romero, and Steven N. Morris (2013), Human learning of elemental category structures: Revising the classic result of Shepard, Hovland, and Jenkins (1961), Journal of Experimental Psychology: Learning, Memory, and Cognition, 39 (2): 552-572.
  • [80] Julie Anne Legate and Charles Yang (2002), Empirical Re-assessment of Stimulus Poverty Arguments, The Linguistic Review, 19: 151-162.
  • [81] Abraham Lempel and Jacob Ziv (1976), On the Complexity of Finite Sequences, IEEE Transactions on Information Theory, 22 (1): 75-81, ISSN 0018-9448.
  • [82] Stephen C. Levinson and Nicholas Evans (2010), Time for a sea-change in linguistics: Response to comments on ‘The Myth of Language Universals’, Lingua, 120 (12): 2733-2758, ISSN 0024-3841.
  • [83] Ming Li and Paul Vitányi (1997), An Introduction to Kolmogorov Complexity and its Applications, Springer Verlag, Berlin, 2nd edition.
  • [84] Jeffrey Lidz, Sandra Waxman, and Jennifer Freedman (2003), What infants know about syntax but couldn’t have learned: Experimental evidence for syntactic structure at 18 months, Cognition, 89: B65-B73.
  • [85] Giorgio Magri (2013), The Complexity of Learning in Optimality Theory and Its Implications for the Acquisition of Phonotactics, Linguistic Inquiry, 44 (3): 433-468.
  • [86] Gary F. Marcus (1993), Negative Evidence in Language Acquisition, Cognition, 46: 53-85.
  • [87] Gary F. Marcus (2000), Pabiku and Ga Ti Ga: Two Mechanisms Infants Use to Learn about the World, Current Directions in Psychological Science, 9: 145-147.
  • [88] Lisa Matthewson (2012), On How (Not) to Uncover Cross-Linguistic Variation, in Proceedings of NELS 42.
  • [89] Sven Mattys, Peter W. Jusczyk, Paul Luce, and James L. Morgan (1999), Phonotactic and Prosodic Effects on Word Segmentation in Infants, Cognitive Psychology, 38: 465-494.
  • [90] George Miller and Noam Chomsky (1963), Finitary Models of Language Users, in R. Duncan Luce, Robert R. Bush, and Eugene Galanter, editors, Handbook of Mathematical Psychology, volume 2, pp. 419-491, Wiley, New York, NY.
  • [91] Elliott Moreton (2008), Analytic Bias and Phonological Typology, Phonology, 25: 83-127.
  • [92] Elliott Moreton and Joe Pater (2012a), Structure and Substance in Artificial-phonology Learning, Part I: Structure, Language and Linguistics Compass, 6 (11): 686-701, ISSN 1749-818X.
  • [93] Elliott Moreton and Joe Pater (2012b), Structure and Substance in Artificial-Phonology Learning, Part II: Substance, Language and Linguistics Compass, 6 (11): 702-718, ISSN 1749-818X.
  • [94] Elliott Moreton, Joe Pater, and Katya Pertsova (2014), Phonological concept learning, ms., Under review.
  • [95] Craig Nevill-Manning and Ian Witten (1997), Compression and Explanation using Hierarchical Grammars, The Computer Journal, 40 (2 and 3): 103-116.
  • [96] Partha Niyogi (2006), The Computational Nature of Language and Learning, MIT Press.
  • [97] Partha Niyogi and Robert C. Berwick (1996), A Language Learning Model for Finite Parameter Spaces, Cognition, 61: 161-193.
  • [98] Partha Niyogi and Robert C. Berwick (1997), Evolutionary consequences of language learning, Linguistics and Philosophy, 20 (6): 697-719.
  • [99] Partha Niyogi and Robert C. Berwick (2009), The proper treatment of language acquisition and change in a population setting, Proceedings of the National Academy of Sciences, 106: 10124-10129.
  • [100] Luca Onnis, Matthew Roberts, and Nick Chater (2002), Simplicity: A Cure for Overgeneralization in Language Acquisition?, in W. D. Gray and C. D. Shunn, editors, Proceedings of the 24th Annual Conference of the Cognitive Society, London.
  • [101] Miles Osborne and Ted Briscoe (1997), Learning Stochastic Categorial Grammars, in Proceedings of CoNLL, pp. 80-87.
  • [102] Daniel N. Osherson, Michael Stob, and Scott Weinstein (1984), Learning Theory and Natural Language, Cognition, 17: 1-28.
  • [103] Daniel N. Osherson, Michael Stob, and Scott Weinstein (1986), Systems that learn, MIT Press, Cambridge, Massachusetts.
  • [104] Marcela Peña, Luca Bonatti, Marina Nespor, and Jacques Mehler (2002), Signal-Driven Computations in Speech Processing, Science, 298: 604-607.
  • [105] Amy Perfors, Joshua Tenenbaum, and Terry Regier (2011), The Learnability of Abstract Syntactic Principles, Cognition, 118: 306-338.
  • [106] Pierre Perruchet and Arnaud Rey (2005), Does the Mastery of Center-Embedded Linguistic Structures Distinguish Humans from Nonhuman Primates?, Psychonomic Bulletin and Review, 12 (2): 307-313.
  • [107] Steven T. Piantadosi and Edward Gibson (2013), Quantitative Standards for Absolute Linguistic Universals, Cognitive Science, ISSN 1551-6709, doi: 10.1111/cogs.12088.
  • [108] Leonard Pitt (1989), Probabilistic Inductive Inference, Journal of the ACM, 36 (2): 383-433.
  • [109] Alan Prince and Paul Smolensky (1993), Optimality Theory: Constraint Interaction in Generative Grammar, Technical report, Rutgers University, Center for Cognitive Science.
  • [110] Ezer Rasin and Roni Katzir (2013), On evaluation metrics in Optimality Theory, ms., MIT and TAU (submitted).
  • [111] Eric Reuland and Martin Everaert (2010), Reaction to “The Myth of Language Universals and cognitive science” – Evans and Levinson’s cabinet of curiosities: Should we pay the fee?, Lingua, 120 (12): 2713-2716, ISSN 0024-3841.
  • [112] Jorma Rissanen (1978), Modeling by Shortest Data Description, Automatica, 14: 465-471.
  • [113] Jorma Rissanen and Eric Sven Ristad (1994), Language Acquisition in the MDL Framework, in Language computations: DIMACS Workshop on Human Language, March 20-22, 1992, p. 149, Amer Mathematical Society.
  • [114] John R. Ross (1967), Constraints on Variables in Syntax, Ph.D. thesis, MIT, Cambridge, MA.
  • [115] Jenny R. Saffran (2003), Statistical Language Learning: Mechanisms and Constraints, Current Directions in Psychological Science, 12 (4): 110-114.
  • [116] Jenny R. Saffran, Elissa L. Newport, and Richard N. Aslin (1996), Statistical learning by 8-month old infants, Science, 274: 1926-1928.
  • [117] Wendy Sandler, Irit Meir, Carol Padden, and Mark Aronoff (2005), The emergence of grammar: Systematic structure in a new language, Proceedings of the National Academy of Sciences of the United States of America, 102 (7): 2661-2665.
  • [118] Uli Sauerland and Jonathan Bobaljik (2013), Syncretism Distribution Modeling: Accidental Homophony as a Random Event, in Proceedings of GLOW in Asia IX 2012.
  • [119] Ann Senghas, Sotaro Kita, and Asli Özyürek (2004), Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua, Science, 305 (5691): 1779-1782.
  • [120] Roger N. Shepard, Carl I. Hovland, and Herbert M. Jenkins (1961), Learning and memorization of classifications, Psychological Monographs: General and Applied, 75 (13): 1-42.
  • [121] Kenny Smith, Simon Kirby, and Henry Brighton (2003), Iterated learning: A framework for the emergence of language, Artificial Life, 9 (4): 371-386.
  • [122] Kirk H. Smith (1966), Grammatical Intrusions in the Recall of Structured Letter Pairs: Mediated Transfer or Position Learning?, Journal of Experimental Psychology, 72 (4): 580-588.
  • [123] David M. Sobel, Joshua B. Tenenbaum, and Alison Gopnik (2004), Children’s causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers, Cognitive science, 28 (3): 303-333.
  • [124] Ray J. Solomonoff (1964), A formal theory of inductive inference, parts I and II, Information and Control, 7 (1 & 2): 1-22, 224-254.
  • [125] Ray J. Solomonoff (1978), Complexity-Based Induction Systems: Comparisons and Convergence Theorems, IEEE Transactions on Information Theory, 24 (4): 422-432.
  • [126] Ray J. Solomonoff (2008), Algorithmic Probability: Theory and Applications, in Frank Emmert-Streib and Matthias Dehmer, editors, Information Theory and Statistical Learning, pp. 1-23, Springer.
  • [127] Mark Steedman (1989), Grammar, Interpretation, and Processing from the Lexicon, in William Marslen-Wilson, editor, Lexical Representation and Process, pp. 463-504, MIT Press.
  • [128] Mark Steedman and Jason Baldridge (2011), Combinatory Categorial Grammar, in Robert Borsley and Kersti Börjars, editors, Non-Transformational Syntax, chapter 5, pp. 181-224, Blackwell.
  • [129] Andreas Stolcke (1994), Bayesian Learning of Probabilistic Language Models, Ph.D. thesis, University of California at Berkeley, Berkeley, California.
  • [130] Bruce Tesar and Paul Smolensky (1998), Learnability in Optimality Theory, Linguistic Inquiry, 29 (2): 229-268.
  • [131] Harry Tily and T. Florian Jaeger (2011), Complementing quantitative typology with behavioral approaches: Evidence for typological universals, Linguistic Typology, 15 (2): 497-508.
  • [132] Anand Venkataraman (2001), A Statistical Model for Word Discovery in Transcribed Speech, Computational Linguistics, 27 (3): 351-372.
  • [133] Christopher S. Wallace and David M. Boulton (1968), An Information Measure for Classification, Computer Journal, 11 (2): 185-194.
  • [134] Kenneth Wexler and Peter W. Culicover (1980), Formal Principles of Language Acquisition, MIT Press, Cambridge, MA.
  • [135] Colin Wilson (2006), Learning Phonology with Substantive Bias: An Experimental and Computational Study of Velar Palatalization, Cognitive Science, 30 (5): 945-982.
  • [136] Charles D. Yang (2002), Knowledge and learning in natural language, Oxford University Press.
  • [137] Charles D. Yang (2004), Universal Grammar, statistics or both?, Trends in Cognitive Sciences, 8 (10): 451-456.
  • [138] Charles D. Yang (2010), Three Factors in Language Variation, Lingua, 120: 1160-1177.
  • [139] Ryo Yoshinaka (2011), Efficient learning of multiple context-free languages with multidimensional substitutability from positive data, Theoretical Computer Science, 412 (19): 1821-1831, ISSN 0304-3975, doi: http://dx.doi.org/10.1016/j.tcs.2010.12.058.
  • [140] Willem Zuidema (2003), How the Poverty of the Stimulus Solves the Poverty of the Stimulus, in Suzanna Becker, Sebastian Thrun, and Klaus Obermayer, editors, Advances in Neural Information Processing Systems 15 (Proceedings of NIPS’02), pp. 51-58.
Notes
Record compiled with funds from MNiSW (the Polish Ministry of Science and Higher Education), agreement No. 461252, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme – module: popularisation of science and promotion of sport (2020).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-614c9dc7-8892-46c2-9897-c2e30a7975f5