- Sonos is adding an AI speech enhancement option to the Arc Ultra
- It's Sonos' first AI sound feature, with four levels of speech boost
- It was developed with a hearing charity to help people with hearing loss
Sonos has launched a new version of its speech enhancement tools for the Sonos Arc Ultra, which we rate as one of the best soundbars available.
You'll still find these tools on the now-playing screen in the Sonos app, but instead of just a couple of options, there are now four new modes (Low, Medium, High, and Max), all driven by Sonos' first use of an AI sound processing tool. They should be available today (May 12) for all users.
These modes were developed in a year-long partnership with the Royal National Institute for Deaf People (RNID), the UK's leading charity for people with hearing loss. I spoke to Sonos and the RNID to get the inside story of its development, which you can read here for more details.
The update is rolling out today to Sonos Arc Ultra soundbars, but it won't be available on any other Sonos soundbar, because it requires a level of processing power that the chip inside the Arc Ultra can provide, but which older Sonos soundbars can't.
The AI element is used to analyze the sound passing through the soundbar in real time, and to separate out the 'speech' elements so they can be made more prominent in the mix without affecting the rest of the sound too much. I heard it in action during a demo at Sonos' UK product development center, and it's very impressive.
If you've used speech enhancement tools before, you're probably familiar with hearing the dynamic range of the sound, and especially the bass, suddenly get massively reduced in exchange for the speech elements being pushed forward.
That's not the case with the new Sonos modes: the low bass, the overall soundscape, and the more immersive Dolby Atmos elements are preserved much better. That's for two reasons: one is that the speech is being enhanced separately from the other parts, and the other is that it's a dynamic system that only kicks in when it detects that speech is likely to be drowned out by background noise.
It won't activate if the dialogue is happening over a quiet background, or if there's no dialogue in the scene. And it's a system that works by degrees: it applies more processing in the busiest scenes, and less when the audio isn't as chaotic.
How does it sound?
In the two lower modes, the dialogue is picked out more clearly without much damage to the rest of the soundtrack, based on my demo.
In the High mode, the background still held up very well, but the speech started to sound a little more processed. In Max mode, I could hear the background getting its wings clipped a little, and there was a touch more artificiality to the speech, but the dialogue was extremely well picked out; this mode is really only intended for the hard of hearing.
I mentioned that the mode was developed with the RNID, which involved Sonos consulting with hearing research experts at the RNID, but also having people with different types and levels of hearing loss test the modes at different stages of development and provide feedback.
I spoke at length with the Sonos audio and AI engineers who developed the new modes, as well as with the RNID, but the key takeaway is that the collaboration led Sonos to put more emphasis on retaining the immersive sound effects, and to add four levels of enhancement instead of the originally planned three.
Despite the RNID's involvement, the new mode isn't designed to be only for those with hearing loss. It's still just called speech enhancement, as it is now, and isn't tucked away as an accessibility tool: it enhances the sound for everyone, and 'everyone' now includes people with mild to moderate hearing loss. The Low and Medium modes can also work for those of us who just want extra clarity in busy scenes.
This isn't the first use of AI speech separation I've seen: I've experienced it on Samsung TVs, and in a fun Philips TVs showcase, where it was used to turn off the commentary during sports while preserving the sound of the crowd.
But it's interesting that this is Sonos' first use of AI sound processing, and the four-year development process, including a year of refinement with the RNID, shows that Sonos has taken a thoughtful approach to how it's best used, one that isn't always apparent in other AI sound processing applications. Here's my article interviewing the Sonos audio and AI developers along with RNID researchers.
It's a shame that it's exclusive to the Sonos Arc Ultra for now, although I'm sure new versions of the Sonos Ray and Sonos Beam Gen 2 can't be far away, with the same upgraded chip to support the feature.