Nonparametric learning of phonological constraints in optimality theory

Gabriel Doyle, Klinton Bicknell, Roger Levy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Scopus citations

Abstract

We present a method to jointly learn features and weights directly from distributional data in a log-linear framework. Specifically, we propose a non-parametric Bayesian model for learning phonological markedness constraints directly from the distribution of input-output mappings in an Optimality Theory (OT) setting. The model uses an Indian Buffet Process prior to learn the feature values used in the log-linear method, and is the first algorithm for learning phonological constraints without presupposing constraint structure. The model learns a system of constraints that explains the observed data as well as the phonologically-grounded constraints of a standard analysis do, with a violation structure corresponding to the standard constraints. These results suggest an alternative, data-driven source for constraints instead of a fully innate constraint set.
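The two pieces the abstract describes can be sketched generatively: an Indian Buffet Process prior produces a binary candidate-by-constraint violation matrix of unbounded width, and a log-linear (MaxEnt-OT-style) model turns weighted violations into candidate probabilities. The sketch below is a minimal illustration of those pieces only, with hypothetical names; the paper itself performs joint posterior inference over the violation matrix and the weights, which is not shown here.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method; adequate for the small rates used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_ibp(n_candidates, alpha, rng):
    """Draw a binary violation matrix from the Indian Buffet Process.
    Rows = candidates, columns = latent constraints (count is inferred)."""
    Z = []           # rows of 0/1 violation indicators
    col_counts = []  # how many candidates violate each constraint so far
    for i in range(n_candidates):
        row = [0] * len(col_counts)
        # existing constraints: violated with probability m_k / (i + 1)
        for k, m_k in enumerate(col_counts):
            if rng.random() < m_k / (i + 1):
                row[k] = 1
                col_counts[k] += 1
        # brand-new constraints: Poisson(alpha / (i + 1)) of them
        n_new = poisson(alpha / (i + 1), rng)
        row.extend([1] * n_new)
        col_counts.extend([1] * n_new)
        Z.append(row)
    # pad earlier rows out to the final number of constraints
    K = len(col_counts)
    return [r + [0] * (K - len(r)) for r in Z]

def maxent_ot_probs(Z, weights):
    """Log-linear (MaxEnt-OT) distribution over candidates:
    harmony_i = -sum_k w_k * Z[i][k]; P(i) ∝ exp(harmony_i)."""
    harmonies = [-sum(w * z for w, z in zip(weights, row)) for row in Z]
    m = max(harmonies)
    exps = [math.exp(h - m) for h in harmonies]
    total = sum(exps)
    return [e / total for e in exps]

rng = random.Random(0)
Z = sample_ibp(n_candidates=4, alpha=2.0, rng=rng)
weights = [1.0] * len(Z[0])  # placeholder; the model learns these jointly
probs = maxent_ot_probs(Z, weights)
```

Here the uniform `weights` stand in for learned constraint weights, and the single IBP draw stands in for posterior samples; the point is only that the IBP lets the number of constraints (columns) grow with the data rather than being fixed in advance.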

Original language: English (US)
Title of host publication: Long Papers
Publisher: Association for Computational Linguistics (ACL)
Pages: 1094-1103
Number of pages: 10
ISBN (Print): 9781937284725
DOIs
State: Published - 2014
Event: 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Baltimore, MD, United States
Duration: Jun 22 2014 - Jun 27 2014

Publication series

Name: 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference
Volume: 1

Other

Other: 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014
Country: United States
City: Baltimore, MD
Period: 6/22/14 - 6/27/14

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language

