Recognizing disordered speech is a challenge for Automatic Speech Recognition (ASR) systems. This research focuses on classifying disordered vs. non-disordered speech by combining signal processing with machine learning techniques. We have found little evidence of ASR systems that classify disordered vs. non-disordered speech as accurately as expert-based classification. This research supports the Automated Phonetic Transcription - Grading Tool (APTgt), an online E-Learning system that supports Communication Disorders (CMDS) faculty in linguistics courses and provides reinforcement activities for phonetic transcription, with the potential to improve students' learning efficacy and teachers' pedagogical experience. In addition, APTgt generates interactive practice sessions and exams, grades them automatically, and analyzes exam results. This paper focuses on the classification module within APTgt that distinguishes disordered from non-disordered speech. We use Mel-frequency cepstral coefficients (MFCCs) and dynamic time warping (DTW) to preprocess the audio files and compute their similarity, and a Support Vector Machine (SVM) for classification.
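To make the similarity step concrete, the following is a minimal sketch of the DTW computation the abstract refers to: aligning two variable-length sequences of per-frame feature vectors (e.g. MFCC frames) and returning a cumulative alignment cost. It is an illustrative pure-Python implementation, not the paper's actual code; the Euclidean local distance and the function names are assumptions.

```python
import math

def euclidean(a, b):
    # Local distance between two feature frames (e.g. MFCC vectors).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dtw_distance(seq_a, seq_b):
    # Classic dynamic-programming DTW: cost[i][j] is the minimal
    # cumulative distance aligning the first i frames of seq_a
    # with the first j frames of seq_b.
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = euclidean(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because DTW warps the time axis, a sequence aligned against a slower rendition of itself (e.g. with a repeated frame) can still score a distance of zero, which makes the measure robust to speaking-rate differences before the SVM sees the features.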