Tuesday 31 March 2015

The scatter plots for three samples (human speaking (h1), bird chirping (b1), dog barking (d1)) have been obtained.
Fig b1b1 gives the scatter plot with values of b1 on both axes.
Fig d1d1 gives the scatter plot with values of d1 on both axes.
Fig h1h1 gives the scatter plot with values of h1 on both axes.
Fig b1h1 gives the scatter plot with values of h1 on one axis and b1 on the other.
Fig d1h1 gives the scatter plot with values of h1 on one axis and d1 on the other.
Fig b1d1 gives the scatter plot with values of d1 on one axis and b1 on the other.

https://drive.google.com/folderview?id=0B3U5ydL0qUhyfkpLdFZSQzVsM0VFVXlRcmRXWUNHTkNEa2FfaE8wZWxPRlNBRmd2R0Q1bXc&usp=sharing
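For reference, below is a minimal sketch of how one of these cross-sample scatter plots could be produced in Python with matplotlib. The file names, the choice of coefficient, and the assumption that each sample's MFCCs are saved as a (frames x coefficients) NumPy array are illustrative, not the exact script we used.

```python
# Minimal sketch: scatter plot of one MFCC coefficient of h1 against b1.
# File names and the saved-array format are assumptions for illustration.
import numpy as np
import matplotlib.pyplot as plt

b1 = np.load("b1_mfcc.npy")   # MFCCs of the bird-chirping sample
h1 = np.load("h1_mfcc.npy")   # MFCCs of the human-speech sample

n = min(len(b1), len(h1))     # use the same number of frames from both
coeff = 1                     # which coefficient to compare (illustrative)

plt.scatter(h1[:n, coeff], b1[:n, coeff], s=5)
plt.xlabel("h1 (human speaking)")
plt.ylabel("b1 (bird chirping)")
plt.title("b1h1 scatter plot")
plt.savefig("b1h1.png")
```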

Saturday 28 March 2015

One of our tasks was to compute the MFCCs for different sound samples.
The link below has the outputs for three samples: a bird chirping (B1), a dog barking (D1), and a human speaking (H1).

https://drive.google.com/folderview?id=0B3U5ydL0qUhyfjkyeUwwV1lXWDdxMU50YVJvWW1xMnF3Z1BHSUJZS1VOSHZ4Rmk0MVdtOW8&usp=sharing
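For reference, a short sketch of how the MFCCs can be computed for one WAV file in Python is given below. It uses the python_speech_features package as one possible implementation; the package choice, parameter values, and the file name B1.wav are assumptions, not necessarily the script we ran.

```python
# Sketch: MFCC extraction for a single WAV file using python_speech_features.
# The file name and parameter choices are illustrative.
from scipy.io import wavfile
from python_speech_features import mfcc

rate, signal = wavfile.read("B1.wav")        # sample rate and raw samples
feats = mfcc(signal, samplerate=rate,
             winlen=0.025, winstep=0.01,     # 25 ms window, 10 ms step
             numcep=13)                      # 13 cepstral coefficients

print(feats.shape)                           # (number of frames, 13)
```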

Friday 27 March 2015

So far, we have used a Python script to compute the MFCCs of the samples we have collected. We are also continuing to collect more sounds in the various categories. (Our current sound sample categories include dogs barking, birds chirping, human speech, bike engines, car engines, jet sounds, crowds cheering, and musical instruments: violin, piano, cello, guitar and drums.)

Our next task is to carefully study the classification of the data sets available on the UCI Machine Learning Repository and use that knowledge to classify our samples using Weka. We have also been going through Weka tutorials to familiarize ourselves with how it works.

We will shortly be uploading the samples we have collected to Google Drive and will share the link on the blog.
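Since Weka reads data in ARFF format, the MFCC features will need to be written out as an ARFF file before classification. Below is a minimal sketch of such an export; the feature vectors, class labels and file name are placeholders, not our actual data.

```python
# Minimal sketch: write per-sample feature vectors to an ARFF file for Weka.
# The feature vectors (random here) and class labels are placeholders.
import numpy as np

samples = {                              # sample name -> (features, label)
    "b1": (np.random.rand(13), "bird"),
    "d1": (np.random.rand(13), "dog"),
    "h1": (np.random.rand(13), "human"),
}

with open("sounds.arff", "w") as f:
    f.write("@RELATION sounds\n\n")
    for i in range(13):
        f.write("@ATTRIBUTE mfcc%d NUMERIC\n" % i)
    f.write("@ATTRIBUTE class {bird,dog,human}\n\n@DATA\n")
    for name, (vec, label) in samples.items():
        f.write(",".join("%.6f" % v for v in vec) + "," + label + "\n")
```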

Monday 23 March 2015

Tasks over the next one week:
MFCCs for different recordings
Scatter Plots
Use UCI data sets as examples for testing different classifiers

Thursday 19 March 2015

Over the past few days we have been collecting more samples, and we now have around 200 samples of pure (unmixed) events, ranging from human speech, air-conditioning sounds and engine sounds to birds chirping and musical instruments.
Our next task is to identify a suitable classifier that will enable us to classify new samples using these 200 sounds as the training set.
We are also working on computing the MFCCs for the samples and on classifying the samples based on their MFCCs using Weka.
We have also been carefully reviewing the slides on asr.cs.cmu.edu for a more thorough understanding of the underlying concepts.
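Since each recording produces a different number of MFCC frames, one simple way to obtain a fixed-length feature vector per sample for classification (a sketch of one option, not necessarily the approach we will settle on) is to take the mean and standard deviation of each coefficient across frames:

```python
# Sketch: collapse a (frames x coefficients) MFCC matrix into one
# fixed-length feature vector via per-coefficient mean and std.
import numpy as np

def summarize_mfcc(mfcc_frames):
    """mfcc_frames: array of shape (num_frames, num_coefficients)."""
    return np.concatenate([mfcc_frames.mean(axis=0),
                           mfcc_frames.std(axis=0)])

# Example with dummy data: 120 frames of 13 coefficients -> 26 features.
dummy = np.random.randn(120, 13)
print(summarize_mfcc(dummy).shape)       # prints (26,)
```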

Friday 13 March 2015

In the last ten days we have collected more samples (we now have around 35 samples of environmental noise). We are also familiarizing ourselves with Weka and how to use it to classify these sounds. Our current task is to try to use Weka to classify the sounds we have collected. We have also been thoroughly reviewing the course material available on asr.cs.cmu.edu.