AI helps scientists create ultimate noise-cancelling headphones


By Stephen Beech via SWNS

Artificial intelligence has helped create the ultimate noise-cancelling headphones.

Scientists say the state-of-the-art technology filters out only unwanted noise.

The AI-powered headphones categorize ambient sounds – giving users the power to choose exactly what they want to hear.

Standard noise-cancelling headphones automatically identify background sounds and cancel them out for peace and quiet.

But they often fail to distinguish between unwanted background sounds and crucial information, leaving headphone users unaware of their surroundings.

Professor Shyam Gollakota is an expert in using AI tools for real-time audio processing.

His team at the University of Washington in Seattle created a system for targeted speech hearing in noisy environments.

They then developed “next-gen” AI-based headphones that selectively filter out specific sounds – while preserving others.

Gollakota said: “Imagine you are in a park, admiring the sounds of chirping birds, but then you have the loud chatter of a nearby group of people who just can’t stop talking.

“Now imagine if your headphones could grant you the ability to focus on the sounds of the birds while the rest of the noise just goes away.

“That is exactly what we set out to achieve with our system.”

Gollakota and his team combined noise-cancelling technology with a smartphone-based neural network trained to identify 20 different environmental sound categories – including alarm clocks, crying babies, sirens, car horns and birdsong.

When the user selects one or more of the categories, the software identifies and plays those sounds through the headphones in real time while filtering out everything else.
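
The classify-then-filter loop described above can be sketched in a few lines of Python. This is purely illustrative – the category names, the toy frequency-based "classifier" and the function names are assumptions, standing in for the team's actual neural network:

```python
import numpy as np

# Illustrative categories only; the real system distinguishes 20 of them.
CATEGORIES = ["speech", "birdsong", "siren"]

def classify_frame(frame: np.ndarray) -> str:
    """Stand-in for the neural network: label a frame by where its
    dominant frequency falls (a toy heuristic, not a real classifier)."""
    spectrum = np.abs(np.fft.rfft(frame))
    peak = int(np.argmax(spectrum))
    if peak < len(spectrum) // 4:
        return "speech"
    if peak > 3 * len(spectrum) // 4:
        return "birdsong"
    return "siren"

def selective_pass(frames, wanted):
    """Pass through frames whose predicted category the user selected;
    silence (zero out) everything else."""
    out = []
    for frame in frames:
        label = classify_frame(frame)
        out.append(frame if label in wanted else np.zeros_like(frame))
    return out
```

A user who ticks only "birdsong" would then hear the frames the classifier labels as birdsong, while frames labelled as chatter or sirens are muted.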

But Gollakota says making the system work seamlessly was far from straightforward.

He said: “To achieve what we want, we first needed a high-level intelligence to identify all the different sounds in an environment.

“Then, we needed to separate the target sounds from all the interfering noises.

“If this is not hard enough, whatever sounds we extracted needed to sync with the user’s visual senses since they cannot be hearing someone two seconds too late.

“This means the neural network algorithms must process sounds in real time in under a hundredth of a second, which is what we achieved.”
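
That hundredth-of-a-second figure translates into a hard budget on how much audio the system can buffer at a time, since buffering a chunk already delays playback by the chunk's own duration. A minimal back-of-the-envelope sketch (the 44.1 kHz sample rate is an assumption, not a figure from the researchers):

```python
SAMPLE_RATE = 44_100   # assumed CD-quality rate, for illustration
BUDGET_S = 0.01        # the sub-10 ms target mentioned in the article

def max_chunk_samples(rate: int, budget_s: float) -> int:
    """Largest audio chunk that fits the latency budget: the chunk must
    be shorter than the budget, and the network must also finish
    processing it within that same window."""
    return int(rate * budget_s)
```

At 44.1 kHz, a 10 ms budget caps each chunk at 441 samples – and the neural network must classify, separate and re-synthesize that chunk before the next one arrives.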

The researchers used the AI-powered approach to focus on human speech.

Relying on similar content-aware techniques, their algorithm can identify a speaker and isolate their voice from ambient noise in real time for clearer conversations.
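
For intuition, a classical baseline for pulling a voice out of ambient noise is spectral subtraction: estimate the noise spectrum, then attenuate the frequency bins it dominates. The sketch below shows that baseline only – it is not the team's content-aware neural method, and the function name is made up for illustration:

```python
import numpy as np

def spectral_subtract(mix: np.ndarray, noise_profile: np.ndarray) -> np.ndarray:
    """Toy spectral-subtraction pass: subtract an estimated noise
    magnitude spectrum from the mixture, keeping the mixture's phase."""
    spec = np.fft.rfft(mix)
    mag = np.abs(spec)
    # Attenuate each bin by the noise estimate, clipping at zero.
    cleaned_mag = np.maximum(mag - noise_profile, 0.0)
    # Rebuild the waveform with the original phase and reduced magnitudes.
    cleaned = cleaned_mag * np.exp(1j * np.angle(spec))
    return np.fft.irfft(cleaned, n=len(mix))
```

Given a short recording of the background alone to build `noise_profile`, the same subtraction applied to the live mixture leaves mostly the voice. Neural approaches like the researchers' go further by learning what a target speaker sounds like rather than relying on a fixed noise estimate.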

Gollakota is excited to be at the forefront of the “next generation” of audio devices.

He added: “We have a very unique opportunity to create the future of intelligent hearables that can enhance human hearing capability and augment intelligence to make lives better.”

He is due to present his findings at a joint meeting of the Acoustical Society of America and the Canadian Acoustical Association in Ottawa, Canada.
