Action recognition from skeleton data via analogical generalization over qualitative representations

Kezhen Chen, Kenneth D. Forbus

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Human action recognition remains a difficult problem for AI. Traditional machine learning techniques can have high recognition accuracy, but they are typically black boxes whose internal models are not inspectable and whose results are not explainable. This paper describes a new pipeline for recognizing human actions from skeleton data via analogical generalization. Specifically, starting with Kinect data, we segment each human action by temporal regions where the motion is qualitatively uniform, creating a sketch graph that provides a form of qualitative representation of the behavior that is easy to visualize. Models are learned from sketch graphs via analogical generalization, which are then used for classification via analogical retrieval. The retrieval process also produces links between the new example and components of the model that provide explanations. To improve recognition accuracy, we implement dynamic feature selection to pick reasonable relational features. We show the explanation advantage of our approach by example, and results on three public datasets illustrate its utility.
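The segmentation step described in the abstract — splitting a motion trace into temporal regions where the motion is qualitatively uniform — can be illustrated with a minimal sketch. This is not the authors' implementation: the sign-based state mapping, the `eps` threshold, and the function names are illustrative assumptions, showing only the general idea of grouping frames by qualitative direction of motion.

```python
# Minimal sketch (illustrative, not the paper's algorithm): segment a
# single joint coordinate trajectory into temporal regions where the
# qualitative direction of motion (+ increasing, 0 steady, - decreasing)
# is uniform.

def qualitative_sign(delta, eps=0.05):
    """Map a frame-to-frame change onto a qualitative symbol."""
    if delta > eps:
        return "+"
    if delta < -eps:
        return "-"
    return "0"

def segment_trajectory(values, eps=0.05):
    """Group consecutive frames whose motion shares one qualitative sign.

    Returns a list of (start_frame, end_frame, sign) triples covering
    the whole trajectory.
    """
    if len(values) < 2:
        return []
    segments = []
    start = 0
    current = qualitative_sign(values[1] - values[0], eps)
    for i in range(1, len(values) - 1):
        sign = qualitative_sign(values[i + 1] - values[i], eps)
        if sign != current:
            segments.append((start, i, current))
            start, current = i, sign
    segments.append((start, len(values) - 1, current))
    return segments

# Example: a hand's y-coordinate rising, holding, then falling —
# three qualitatively uniform regions.
traj = [0.0, 0.2, 0.4, 0.41, 0.42, 0.2, 0.0]
print(segment_trajectory(traj))  # → [(0, 2, '+'), (2, 4, '0'), (4, 6, '-')]
```

In the paper's pipeline, regions like these would be assembled per joint into a sketch graph, over which analogical generalization and retrieval operate.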

Original language: English (US)
Title of host publication: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Publisher: AAAI Press
Pages: 638-645
Number of pages: 8
ISBN (Electronic): 9781577358008
State: Published - Jan 1 2018
Event: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 - New Orleans, United States
Duration: Feb 2 2018 - Feb 7 2018

Publication series

Name: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018

Other

Other: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Country: United States
City: New Orleans
Period: 2/2/18 - 2/7/18

Fingerprint

  • Learning systems
  • Feature extraction
  • Pipelines

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Chen, K., & Forbus, K. D. (2018). Action recognition from skeleton data via analogical generalization over qualitative representations. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 638-645). (32nd AAAI Conference on Artificial Intelligence, AAAI 2018). AAAI Press.
