Recurrent neural networks for classifying relations in clinical notes

Research output: Contribution to journal › Article › peer-review

59 Scopus citations


We proposed the first models based on recurrent neural networks (more specifically, Long Short-Term Memory, LSTM) for classifying relations in clinical notes. We tested our models on the i2b2/VA relation classification challenge dataset. We showed that our segment LSTM model, using only word embedding features and no manual feature engineering, achieved a micro-averaged F-measure of 0.661 for classifying medical problem-treatment relations, 0.800 for medical problem-test relations, and 0.683 for medical problem-medical problem relations. These results are comparable to those of the state-of-the-art systems on the i2b2/VA relation classification challenge. We compared the segment LSTM model with the sentence LSTM model, and demonstrated the benefits of distinguishing concept text from context text, and of distinguishing the different contextual parts of the sentence. We also evaluated the impact of word embeddings on the performance of the LSTM models and showed that medical-domain word embeddings help improve relation classification. These results support the use of LSTM models for classifying relations between medical concepts, as they show comparable performance to previously published systems while requiring no manual feature engineering.
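The segment model described above treats the concept mentions and the surrounding contextual parts of the sentence as separate inputs rather than feeding the whole sentence to a single LSTM. A minimal sketch of that preprocessing step is shown below; the five-way split and the segment names are illustrative assumptions, not the paper's exact formulation, and the example sentence and span indices are hypothetical.

```python
# Hypothetical sketch: splitting a tokenized sentence into the segments
# a segment-level LSTM relation classifier might consume. Each segment
# would then be encoded by its own LSTM before classification.

def split_segments(tokens, span1, span2):
    """Split tokens into five segments around two concept mentions.

    span1 and span2 are (start, end) token-index pairs (end exclusive);
    the earlier span is treated as the first concept.
    """
    (s1, e1), (s2, e2) = sorted([span1, span2])
    return {
        "preceding": tokens[:s1],    # context before the first concept
        "concept1": tokens[s1:e1],   # first concept mention (e.g. a treatment)
        "middle": tokens[e1:s2],     # context between the two concepts
        "concept2": tokens[s2:e2],   # second concept mention (e.g. a problem)
        "following": tokens[e2:],    # context after the second concept
    }

tokens = "the patient received aspirin to treat the headache".split()
segments = split_segments(tokens, (3, 4), (7, 8))
```

Keeping the concept tokens separate from the contextual tokens is what lets the model learn, for instance, that the words between a treatment mention and a problem mention often signal the relation type.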

Original language: English (US)
Pages (from-to): 85-95
Number of pages: 11
Journal: Journal of Biomedical Informatics
State: Published - Aug 2017


Keywords

  • Long Short-Term Memory
  • Machine learning
  • Medical relation classification
  • Natural language processing
  • Recurrent neural network

ASJC Scopus subject areas

  • Computer Science Applications
  • Health Informatics
