The autonomous mobility of mobile robots contributes greatly to human exploration of hazardous terrains. The motivation of this thesis is to detect the type of terrain a robot traverses from acoustic data generated by the robot-terrain interaction, thereby helping to make mobile robots more autonomous. The acoustic data were collected using a microphone mounted on our robot, and the recorded datasets were then used to train classifiers to distinguish terrain types from one another. Several acoustic features and classifiers were investigated: Mel-frequency cepstral coefficients and Gammatone-frequency cepstral coefficients for feature extraction, and a Gaussian mixture model and a feed-forward neural network for classification. We analyze the system's performance by comparing our proposed techniques with other features surveyed from related work, and we demonstrate the effectiveness of our approach on five terrain classes trained on real datasets gathered from different ground surfaces. The experimental results indicate an average accuracy of approximately 93.6%, which improves to 95.2% as the audio duration increases. Since real applications favor short detection times, it is notable that the system retains satisfactory performance, comparable to human-like terrain labeling, even for shorter audio durations. These promising results show that acoustics is a domain worth exploring extensively to improve the autonomy of tracked robots.
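As a rough illustration of the feature-extraction stage mentioned above, the following is a minimal NumPy-only sketch of computing Mel-frequency cepstral coefficients from a mono signal. The frame length, hop size, FFT size, filter count, and sample rate here are illustrative assumptions, not the parameters actually used in the thesis.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, ctr, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, ctr):                      # rising slope
            fbank[i - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):                      # falling slope
            fbank[i - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fbank

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    """Return an (n_frames, n_ceps) matrix of MFCCs."""
    # 1. Slice into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # 2. Power spectrum of each frame.
    spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. Mel filterbank energies, then log compression.
    log_e = np.log(spec @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # 4. DCT-II to decorrelate, keeping the first n_ceps coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_filters)))
    return log_e @ dct.T

# Example: one second of a 440 Hz tone at a 16 kHz sample rate.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)
feats = mfcc(sig)  # shape: (98, 13) with the defaults above
```

The resulting feature matrix (one 13-dimensional vector per 25 ms frame) is the kind of input a GMM or feed-forward network classifier would be trained on; in practice a library such as librosa would typically be used instead of hand-rolled code.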