Abstract
For brain–computer interfaces (BCIs), obtaining sufficient training data for algorithms that map neural signals onto actions can be difficult, expensive or even impossible. Here we report the development and use of a generative model (a model that synthesizes a virtually unlimited amount of new data from a learned data distribution) that learns mappings between hand kinematics and the associated neural spike trains. The generative spike-train synthesizer is trained on data from one recording session with a monkey performing a reaching task, and it can be rapidly adapted to new sessions or monkeys by using limited additional neural data. We show that the model can be adapted to synthesize new spike trains, accelerating the training and improving the generalization of BCI decoders. The approach is fully data-driven, and hence applicable to BCI applications beyond motor control.
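To make the kinematics-to-spikes direction of the mapping concrete, here is a minimal toy sketch (not the authors' learned model) of synthesizing spike trains from hand kinematics with a linear–nonlinear Poisson generator. All names, dimensions and tuning weights below are illustrative assumptions; in the paper the generator is learned from recorded data rather than hand-specified.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_spikes(kinematics, weights, baseline, bin_s=0.01):
    """Map kinematics (T x K) to Poisson spike counts (T x N neurons).

    Toy linear-nonlinear-Poisson generator, NOT the paper's model:
    linear tuning -> exponential nonlinearity -> Poisson sampling.
    """
    drive = kinematics @ weights + baseline   # linear tuning of each neuron
    rates = np.exp(drive)                     # nonlinearity gives firing rates (Hz)
    return rng.poisson(rates * bin_s)         # Poisson spike counts per time bin

T, K, N = 500, 4, 20                          # time bins, kinematic dims, neurons (assumed)
kinematics = rng.standard_normal((T, K))      # e.g. hand position and velocity traces
weights = 0.5 * rng.standard_normal((K, N))   # hypothetical tuning weights
baseline = np.log(20.0) * np.ones(N)          # ~20 Hz baseline firing

spikes = synthesize_spikes(kinematics, weights, baseline)
print(spikes.shape)                           # (T, N) array of synthetic spike counts
```

Synthetic spike counts produced this way could, in principle, be appended to recorded data to augment the training set of a BCI decoder, which is the role the paper's (far richer, data-driven) generative model plays.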
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 546-558 |
| Number of pages | 13 |
| Journal | Nature Biomedical Engineering |
| Volume | 7 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2023 |
Funding
This work was supported by the National Science Foundation (grant no. CCF-1317433), C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA), the Intel Corporation and the National Institutes of Health (grant nos. NIH NINDS T32 HD07418, F31 NS092356, NS053603 and NS074044). We affirm that the views expressed herein are solely our own and do not represent the views of the US government or any agency thereof.
ASJC Scopus subject areas
- Biotechnology
- Bioengineering
- Medicine (miscellaneous)
- Biomedical Engineering
- Computer Science Applications