Google’s new AI model could one day let us understand and talk to dolphins



  • Google and the Wild Dolphin Project have trained a model to understand dolphin vocalizations
  • DolphinGemma can run directly on Pixel smartphones
  • It will be open-sourced this summer

For most of human history, our relationship with dolphins has been a one-sided conversation: we talked, they squeaked, and we guessed at how well we understood each other before tossing them a fish. Now Google has a plan to use AI to bridge that divide. Working with Georgia Tech and the Wild Dolphin Project (WDP), Google has created DolphinGemma, a new AI model trained to understand and even generate dolphin chatter.

The WDP has been collecting data on a specific community of wild Atlantic spotted dolphins since 1985. Its Bahamas-based fieldwork has produced vast amounts of audio, video, and behavioral notes as researchers observed the animals, documenting each whistle and buzz and trying to piece together what it all means. That trove of audio is now feeding DolphinGemma, which is based on Google’s open Gemma family of models. DolphinGemma takes dolphin sounds as input, processes them with audio tokenizers such as SoundStream, and predicts what vocalization could come next. Think of it as autocomplete, but for dolphins.
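The core idea described above, turning audio into discrete tokens and then predicting the next token autoregressively, can be illustrated with a toy sketch. Everything here is hypothetical: the `tokenize_audio` function and its fake codebook are stand-ins for a real tokenizer like SoundStream, and a simple bigram counter stands in for the actual Gemma-based model, which the article does not detail.

```python
from collections import Counter, defaultdict

def tokenize_audio(clip_label):
    # Stand-in for a real audio tokenizer (e.g. SoundStream): maps a
    # clip (here just a string label) to a sequence of discrete token IDs.
    fake_codebook = {"whistle": [3, 7, 7, 2], "buzz": [5, 1, 5, 1]}
    return fake_codebook[clip_label]

class BigramPredictor:
    """Toy autoregressive model: given a token, predict the likeliest next one.
    A real model would condition on the whole preceding sequence."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, token_seq):
        # Count which token tends to follow which.
        for prev, nxt in zip(token_seq, token_seq[1:]):
            self.counts[prev][nxt] += 1

    def predict_next(self, token):
        # Return the most frequently observed successor, or None if unseen.
        if not self.counts[token]:
            return None
        return self.counts[token].most_common(1)[0][0]

model = BigramPredictor()
for clip in ["whistle", "buzz"]:
    model.train(tokenize_audio(clip))

print(model.predict_next(5))  # → 1 (in "buzz", token 5 is always followed by 1)
```

The "autocomplete for dolphins" analogy maps directly onto this shape: tokenize the vocalization, then repeatedly predict and append the next token to continue the sequence.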
