The human brain is not only capable of detecting sounds, but also of locating them in space. Anyone can tell where a car that has just honked its horn is, or where someone laughing on the other side of the street is passing by.

Neuroscientists at MIT set out to reproduce this human ability in a computer model. That model is made up of convolutional neural networks that can localize sound, and that struggle just as humans do when the source of a noise is hard to pin down.

What advances does this localization model offer?

Our brain is capable of perceiving sound waves. However, it also has trouble telling them apart in an environment full of echoes or simultaneous sounds.

Many scientists have tried over the years to replicate the human brain's ability to localize sounds. Some models have succeeded in controlled environments, but never in real-world scenarios. That is why the MIT scientists set out to develop a more sophisticated model that achieves this goal.

Thus, the researchers decided to use convolutional neural networks. Because these networks can be built with many different architectures, arriving at an ideal model meant training more than 1,500 candidates, of which only 10 were kept for the rest of the study.
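
The article does not share the team's search code, but the selection process can be pictured as sampling many candidate network architectures, scoring each on the localization task, and keeping the best. Below is a minimal Python sketch under that assumption; `sample_architecture` and `evaluate_on_localization_task` are hypothetical placeholders, not the study's actual search space or training procedure.

```python
import random

# Hypothetical search space: each candidate is a list of convolutional
# layer settings. Illustrative only -- not the space the MIT team used.
def sample_architecture(rng):
    depth = rng.randint(3, 8)
    return [
        {"filters": rng.choice([32, 64, 128, 256]),
         "kernel": rng.choice([3, 5, 7]),
         "pool": rng.choice([1, 2])}
        for _ in range(depth)
    ]

def evaluate_on_localization_task(arch):
    # Placeholder score: in the real study each candidate CNN was trained
    # on binaural audio and scored on localization accuracy.
    return random.random()

rng = random.Random(0)
candidates = [sample_architecture(rng) for _ in range(1500)]
ranked = sorted(candidates, key=evaluate_on_localization_task, reverse=True)
top_10 = ranked[:10]  # keep only the strongest performers
print(f"kept {len(top_10)} of {len(candidates)} candidate architectures")
```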

The tests were carried out in a virtual world that the researchers could modify at will. Notably, they made sure the model perceived sound the way humans do: every sound was first run through a specialized mathematical function that simulates how the human ear processes incoming audio.
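
The article does not spell out that function, but ear-like front ends are commonly built as a bank of bandpass filters followed by envelope extraction and amplitude compression, roughly imitating the cochlea. Here is a minimal sketch of that idea with NumPy and SciPy; it is a crude stand-in for illustration, not the cochlear model the researchers actually used.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def cochleagram(audio, fs, n_channels=30, fmin=50.0, fmax=8000.0):
    """Crude cochlea-like front end: bandpass filterbank on a log
    frequency axis, envelope extraction, and power-law compression."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, audio)
        envelope = np.abs(hilbert(band))  # extract the band's envelope
        channels.append(envelope ** 0.3)  # compress, roughly as the ear does
    return np.stack(channels)             # shape: (n_channels, n_samples)

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)        # 1 s test tone at 440 Hz
print(cochleagram(tone, fs).shape)        # (30, 16000)
```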

After training the models, the scientists decided to test them in the real world, with a clear result: when asked to locate sounds, the models performed just like humans.

They found patterns similar to those of the human brain

One of the main similarities found by the MIT scientists was this: like people, the models relied on differences in arrival time and loudness between the left and right ears, and their reliance on each cue depended on the sound's frequency.
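
These two cues are known as the interaural time difference (ITD) and interaural level difference (ILD); the brain relies more on ITDs at low frequencies and more on ILDs at high ones, which is the frequency dependence mentioned above. The model learns to exploit them implicitly, but for reference they can be estimated directly from a pair of ear signals, as in this NumPy sketch:

```python
import numpy as np

def interaural_cues(left, right, fs):
    """Estimate the interaural time difference (ITD, seconds) and the
    interaural level difference (ILD, dB) from two ear signals."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # samples the right ear lags by
    itd = lag / fs                           # positive: left ear heard it first
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild = 20 * np.log10(rms(left) / rms(right))  # positive: louder on the left
    return itd, ild

# Toy example: a 500 Hz tone arriving at the right ear 0.4 ms later and
# slightly attenuated, as if the source sat to the listener's left.
fs = 44100
t = np.arange(int(0.05 * fs)) / fs
left = np.sin(2 * np.pi * 500 * t)
right = 0.8 * np.sin(2 * np.pi * 500 * (t - 0.0004))
print(interaural_cues(left, right, fs))  # ITD ~ +0.0004 s, ILD ~ +1.9 dB
```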

The second parallel with human behavior was this: when multiple sounds were present at once, the models' performance decreased because of the added difficulty, just as the task becomes harder for people.

They also discovered something important when they exposed their models to unnatural conditions: each model's behavior varied with the world it was trained in. Notably, the models were placed in a world without echoes and in one where only a single sound was ever heard at a time. This experiment allowed the scientists to support the theory that the brain's localization ability adapts to the environments humans typically inhabit.

The study's lead author, Andrew Francl, hopes to use these findings to further explore other fields of hearing.