The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way, fostering a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects moved a cursor located at the endpoint of the simulated arm. Between each block of targets, a machine learning algorithm was applied to adaptively change the transformation between finger motion and cursor motion. This algorithm was either Least Mean Squares (LMS) gradient descent or a Moore-Penrose pseudoinverse (RC) transformation. In both cases, the algorithm modified the finger-to-joint-angle map so as to reduce the endpoint errors measured in past performance. Subjects were divided into three groups: a control group and two test groups, each practicing cursor control under one of the algorithms. LMS subjects learned to reduce error significantly faster than the control group (no machine learning), while RC subjects failed to demonstrate learning, possibly due to the large mapping changes between successive RC updates. Results also indicate that subjects training with machine learning did not exhibit faster or better generalization to untrained movements.
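The contrast between the two update rules can be sketched in code. The abstract does not specify the exact update equations, so the following is a minimal illustration under common assumptions: a linear map `W` from finger motions to joint angles, a per-trial LMS gradient step with a small learning rate, and a one-shot least-squares refit via the Moore-Penrose pseudoinverse. All dimensions, names, and the learning rate `eta` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10                       # hypothetical finger-motion dimensionality
W = rng.normal(size=(2, d))  # current map: finger motions -> 2 joint angles

# One recorded block of trials: finger inputs F and target joint angles Q,
# generated here from an arbitrary underlying map for illustration only.
F = rng.normal(size=(50, d))
W_true = rng.normal(size=(2, d))
Q = F @ W_true.T

def lms_update(W, F, Q, eta=0.01):
    """LMS gradient descent: a small step per trial down the squared-error gradient."""
    for f, q in zip(F, Q):
        e = q - W @ f                 # endpoint (joint-angle) error on this trial
        W = W + eta * np.outer(e, f)  # incremental correction of the map
    return W

def pseudoinverse_update(F, Q):
    """One-shot least-squares map via the Moore-Penrose pseudoinverse."""
    return (np.linalg.pinv(F) @ Q).T

W_lms = lms_update(W, F, Q)
W_pinv = pseudoinverse_update(F, Q)

# The pseudoinverse jumps directly to the least-squares solution, whereas
# LMS moves the map only a bounded distance per block.
print(np.linalg.norm(W_lms - W), np.linalg.norm(W_pinv - W))
```

Under this sketch, the pseudoinverse can change the map drastically between blocks, which is consistent with the abstract's suggestion that large mapping changes between RC updates may have hindered subject learning, while LMS makes graded changes that subjects can track.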