Hey, I’m Sam, and I’ve spent the last month researching and analysing the efficiency of blind source separation (BSS) algorithms and their applications in EEGs (electroencephalograms). EEGs use electrodes placed on the scalp to pick up the electrical signals produced by the brain. The plan was to attempt to create the first online BCI (brain-computer interface) pipeline, in which the data would be analysed and the artefacts removed in real time, before an event occurs.

The challenge works a little like this. You’re at a party, and you want to hear what the speaker S(t) is saying at the other end of the room. The music W(t) is loud, so what actually reaches your ears is a weighted mixture of both, something like a·S(t) + b·W(t). Your recording of this mixture we’ll call X(t). Now let’s say lots of people were trying to hear what S(t) was saying. Because the speech and the music are mutually independent, you can work out from the various X(t)’s what the music was and what S(t) was saying. This is the idea behind ICA (independent component analysis): ICA breaks a multivariate signal X(t) down into its statistically independent components.

Now picture the brain as the party. Neurons are firing off everywhere, and there are eye blinks going on all the time. Eye blinks are one of the most disruptive artefacts that can occur within an EEG signal: an eye blink is orders of magnitude larger in voltage than the brain signals, and because the eyes sit so close to the frontal electrodes, blinks can throw off your data considerably. Using ICA you can break the EEG data you recorded down into components, selectively remove the eye-blink component, and then put the remaining components back together to produce a reconstructed, clean signal. The risk of doing this, however, is that there’s a chance you’ll introduce more artefacts than you remove. Regardless, it’s a risk that must be taken in order to obtain comprehensible data.

I hope this gives you a little insight into the nature of EEG data and the kind of pre-processing that must be done before any analysis. Safe to say it can be a long process.
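To make the party analogy concrete, here’s a minimal toy sketch of the whole pipeline in pure Python: two made-up sources (a “speaker” and some “music”), two made-up mixtures, then separation by whitening plus a rotation chosen to maximise non-Gaussianity, which is the same principle FastICA optimises. Everything here (signals, mixing weights, names) is invented for illustration; for real EEG data you’d reach for a proper library such as MNE-Python or scikit-learn’s FastICA rather than roll your own.

```python
import math

N = 2000
dt = 1.0 / 200.0  # pretend 200 Hz sampling, 10 s of data

# Two independent, non-Gaussian sources: the "speaker" S(t) (a sawtooth)
# and the "music" W(t) (a sine). Real EEG sources are far messier.
speech = [2.0 * ((1.3 * i * dt) % 1.0) - 1.0 for i in range(N)]
music  = [math.sin(2.0 * math.pi * 3.0 * i * dt) for i in range(N)]

# Each listener/electrode records a different weighted mixture X(t).
A = [[0.60, 0.40], [0.45, 0.55]]  # the (unknown, in practice) mixing matrix
x1 = [A[0][0] * s + A[0][1] * m for s, m in zip(speech, music)]
x2 = [A[1][0] * s + A[1][1] * m for s, m in zip(speech, music)]

# Centre both channels.
mu1, mu2 = sum(x1) / N, sum(x2) / N
x1 = [a - mu1 for a in x1]
x2 = [a - mu2 for a in x2]

# Whiten: decorrelate the channels via a 2x2 eigendecomposition.
c11 = sum(a * a for a in x1) / N
c22 = sum(a * a for a in x2) / N
c12 = sum(a * b for a, b in zip(x1, x2)) / N
half = (c11 + c22) / 2.0
disc = math.sqrt(((c11 - c22) / 2.0) ** 2 + c12 * c12)
l1, l2 = half + disc, half - disc
e1 = (c12, l1 - c11)
n1 = math.hypot(*e1)
e1 = (e1[0] / n1, e1[1] / n1)
e2 = (-e1[1], e1[0])  # orthogonal eigenvector
z1 = [(e1[0] * a + e1[1] * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
z2 = [(e2[0] * a + e2[1] * b) / math.sqrt(l2) for a, b in zip(x1, x2)]

# After whitening only a rotation is unknown; sweep the angle and keep
# the one whose output is most non-Gaussian (largest |excess kurtosis|).
def kurt(v):
    m2 = sum(a * a for a in v) / len(v)
    m4 = sum(a ** 4 for a in v) / len(v)
    return m4 / (m2 * m2) - 3.0

best_th, best_k = 0.0, -1.0
for k in range(360):
    th = k * math.pi / 360.0
    cand = [math.cos(th) * a + math.sin(th) * b for a, b in zip(z1, z2)]
    if abs(kurt(cand)) > best_k:
        best_th, best_k = th, abs(kurt(cand))

c, s = math.cos(best_th), math.sin(best_th)
comp1 = [ c * a + s * b for a, b in zip(z1, z2)]  # recovered "music"
comp2 = [-s * a + c * b for a, b in zip(z1, z2)]  # recovered "speech"

# Artefact removal, eye-blink style: zero out comp1, then map the
# remaining component back through the inverse of the unmixing matrix.
u11 =  c * e1[0] / math.sqrt(l1) + s * e2[0] / math.sqrt(l2)
u12 =  c * e1[1] / math.sqrt(l1) + s * e2[1] / math.sqrt(l2)
u21 = -s * e1[0] / math.sqrt(l1) + c * e2[0] / math.sqrt(l2)
u22 = -s * e1[1] / math.sqrt(l1) + c * e2[1] / math.sqrt(l2)
det = u11 * u22 - u12 * u21
a12 = -u12 / det  # relevant entry of the estimated mixing matrix
clean1 = [a12 * v for v in comp2]  # channel 1 with the "artefact" removed
```

Up to the usual ICA ambiguities of sign, scale, and ordering, `comp1` and `comp2` line up with the original music and speech, and `clean1` is the first channel rebuilt without the unwanted component, which is exactly the eye-blink trick described above.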