Centre for Neuroscience in Education


The first paper from the BabyRhythm project was published this week in the journal Brain and Language. Using EEG data from 8-week-old infants, we showed that a machine learning approach can accurately classify whether an infant was hearing rhythmic speech or a rhythmic non-speech sound. First author Samuel Gibbon achieved this by building a deep learning model known as a Convolutional Neural Network (CNN). Significant classification was also achieved using a more traditional machine learning approach, a Support Vector Machine (SVM). By showing that either a CNN or an SVM can accurately classify which rhythmic sound an infant was hearing, this work opens the door to developing biomarker-based classifiers to predict language development.
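To give a flavour of the SVM approach described above, here is a minimal sketch of binary classification of EEG-style feature vectors with cross-validation. All data here are simulated and all names (feature counts, condition labels) are illustrative assumptions, not the study's actual pipeline or recordings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical illustration only: classify whether a simulated "EEG epoch"
# came from a rhythmic-speech or rhythmic non-speech condition.
rng = np.random.default_rng(0)
n_per_class, n_features = 100, 32  # e.g. flattened channel x band features

# Two simulated conditions with a small mean difference per feature
speech = rng.normal(loc=0.3, scale=1.0, size=(n_per_class, n_features))
nonspeech = rng.normal(loc=-0.3, scale=1.0, size=(n_per_class, n_features))
X = np.vstack([speech, nonspeech])
y = np.array([1] * n_per_class + [0] * n_per_class)

# Linear SVM evaluated with 5-fold cross-validation; accuracy above
# chance (0.5) indicates the two conditions are separable
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```

In practice the study's significance testing would compare such cross-validated accuracy against chance-level performance; the simulated effect size here is chosen only so the example produces a clearly above-chance score.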

Find the paper here:

Find out more about our publications here: