Submitted by V. Fascianella on Mon, 14/06/2021 - 09:27
The first paper from the BabyRhythm project was published this week in the journal Brain and Language. Using EEG data from 8-week-old infants, we showed that a machine learning approach can accurately classify whether an infant was hearing rhythmic speech or a rhythmic non-speech sound. First author Samuel Gibbon achieved this by building a deep learning model known as a Convolutional Neural Network (CNN). Significant classification accuracy was also achieved with a more traditional machine learning approach, the Support Vector Machine (SVM). By showing that either a CNN or an SVM can accurately classify which rhythmic sound an infant is hearing, this work opens the door to developing biomarker-based classifiers that predict language development.
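For readers curious about what this kind of classifier looks like in practice, the sketch below shows a minimal SVM pipeline of the sort described above, built with scikit-learn on synthetic data. The data shapes, feature extraction and choice of library are illustrative assumptions only, not the analysis actually used in the paper.

```python
# Illustrative sketch only: a minimal SVM classifier on synthetic "EEG-like"
# data. Shapes, features and library choices are assumptions, not the
# authors' actual pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for epoched EEG: (trials, channels, time samples), with one label
# per trial (0 = rhythmic non-speech sound, 1 = rhythmic speech).
X_epochs = rng.standard_normal((200, 32, 500))
y = rng.integers(0, 2, size=200)

# Simple hand-crafted features: mean and variance of each channel per trial.
X_features = np.concatenate(
    [X_epochs.mean(axis=2), X_epochs.var(axis=2)], axis=1
)

# Linear SVM with feature scaling, evaluated by 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X_features, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

On real recordings, accuracy significantly above chance under cross-validation is what indicates that the stimulus category can be decoded from the infant EEG.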
Find the paper here: https://doi.org/10.1016/j.bandl.2021.104968
Find more about our publications here: https://www.cne.psychol.cam.ac.uk/publications