This experiment is part of the AI for Interspecies Communication Challenge Grant.

What types of sculptural visualisations of the latent representations of beluga whale communications are the most engaging?

$3,000
Raised of $3,000 Goal
100%
Funded on 12/16/23
Successfully Funded

About This Project

AI allows humans to study the conversations of other species. It forages in bioacoustic datasets for patterns in non-human communications previously hidden from our ears. As the inner structures of how other species communicate are being mapped, how can researchers and the public navigate, discuss and experience these high-dimensional latent acoustic spaces? Our research is a public outreach project looking at how AI models of beluga vocalisations can be visualised in physical form.


What is the context of this research?

Building on the work developed by ESP with bioacousticians Valeria Vergara and Jaclyn Aubin on beluga whale communications, we propose an immersive visualisation exploring the architecture of whale communication as modelled with AI.

We will be working with AVES, the foundation model created by ESP and already trained on beluga data, using the model output as raw material for creating a series of digital sound sculptures that enable users to explore latent descriptions of beluga calls through hearing, sight, touch and intuition.

Our proposal is a public outreach experience about the future of audio visualisation for interspecies communication. Its success will be measured through public engagement and feedback on social media.


What is the significance of this project?

Traditional sound visualisations represent audio properties that matter to the ear (level and frequency). Although machine listening incorporates these parameters, the AVES model operates on internal representations of up to 1024 dimensions, which are not accounted for in waveforms or spectrograms.

Where human words fail to depict and our brains fail to grasp, art can help us engage with the unreachable. To navigate latent spaces, creative strategies such as experience design, immersive practices and data storytelling can help us design sensorial, collective and intuitive interfaces.

Our objective is for our visualisation to become a platform for scientists to explore the notion of "artificial listening" in discussion with a diversity of audiences.

What are the goals of the project?

Our prototype will be developed by slicing individual beluga vocalisations in time, applying dimensionality reduction (e.g. PCA, t-SNE, UMAP) to the slices, and feeding the data into a creative coding environment (e.g. Blender, VVVV, TouchDesigner) to translate beluga vocalisations into shape, texture, light and more. Different dimensionality reductions and data visualisation approaches will be explored, as sketched below.
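A minimal Python sketch of this pipeline: the `embed_slice` placeholder below is hypothetical and stands in for the AVES encoder, and PCA stands in for whichever reduction we eventually settle on.

```python
import numpy as np
from sklearn.decomposition import PCA

def slice_audio(waveform: np.ndarray, sr: int, win_s: float = 0.5, hop_s: float = 0.25) -> list:
    """Cut a mono waveform into overlapping time slices."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    return [waveform[i:i + win] for i in range(0, len(waveform) - win + 1, hop)]

def embed_slice(audio_slice: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for the AVES encoder, which yields one
    latent vector (up to 1024-D) per slice."""
    raise NotImplementedError("plug in the AVES model here")

def reduce_to_3d(embeddings: np.ndarray) -> np.ndarray:
    """Project latent vectors down to 3-D coordinates for sculpting.
    PCA is shown; t-SNE or UMAP could be swapped in."""
    return PCA(n_components=3).fit_transform(embeddings)

# embeddings = np.stack([embed_slice(s) for s in slice_audio(wave, sr)])
# points_3d = reduce_to_3d(embeddings)  # x/y/z input for Blender, VVVV or TouchDesigner
```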

Once our visualisation process and aesthetics are established, the same translation key will be applied to other whale vocalisations, maintaining consistency across an entire dataset (see the sketch below).
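To illustrate the "same translation key" idea, assuming PCA as the reducer: the projection is fitted once on beluga embeddings and then reused, without refitting, on other recordings so their coordinates stay comparable. The arrays here are random stand-ins for real AVES output.

```python
import numpy as np
from sklearn.decomposition import PCA

# Random stand-ins; in practice these come from the AVES encoder.
beluga_embeddings = np.random.rand(200, 1024)
other_embeddings = np.random.rand(120, 1024)

reducer = PCA(n_components=3)
beluga_points = reducer.fit_transform(beluga_embeddings)  # fitting defines the "translation key"
other_points = reducer.transform(other_embeddings)        # the same key, reused without refitting
```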

Our work will be evaluated through audience engagement measurements and feedback on social media platforms, using a short film presenting our latent beluga whale visualisations.

Budget


As outlined above, the raw material for this project is the output of ESP's AVES foundation model, which we will shape into a series of digital sculptures that let users explore latent descriptions of beluga whale calls through hearing, sight, touch and intuition.

The travel costs will allow all three project participants to meet in person at our media workspace to prototype initial ideas at the intersection of coding and visual design.

Based on this initial group meeting, the model output will be prepared to interface with a visual digital design environment (Blender and TouchDesigner) to produce artistically curated visual expressions of the latent acoustic space.
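As one possible interchange format (an assumption, not a settled choice), the reduced 3-D points could be written to CSV, which Blender can read through its Python API and TouchDesigner through a Table DAT:

```python
import csv
import numpy as np

def export_points(points_3d: np.ndarray, path: str = "beluga_points.csv") -> None:
    """Write one x,y,z row per vocalisation slice."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "z"])
        writer.writerows(points_3d.tolist())
```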

This phase will lead to the production of a short film showing our selected results, which we will share with social media and the press using the project's communication budget.

Endorsed by

This is a very exciting project and a wonderful illustration of a bold and impactful art-science collaboration. I fully endorse this work and I can't wait to see where these ideas go next!

Project Timeline

Our team will work according to a practice-based research approach, in which fast prototyping allows for user testing at every step in order to adjust the artistic experience. Our beluga whale vocalisation sculpture will be delivered as a short film and a software prototype. The exploration phase consists of experiments to define artistic directions; the development phase delivers a functional technical prototype; and the production phase yields a refined design proposal.

Nov 16, 2023

Project Launched

Jan 31, 2024

Exploration

Mar 31, 2024

Development

May 31, 2024

Production

Meet the Team

Antoine Bertin
Artist
Cristina Tarquini
Creative Director & Technologist
Marianne de Heer Kloots
Scientist

Team Bio

Our team is composed of artist and composer Antoine Bertin, computer scientist Marianne de Heer Kloots, and interaction designer Cristina Tarquini.

Antoine Bertin

Antoine Bertin is a European artist working at the intersection of science and sensory immersion, field recording and sound storytelling, data and music composition. His creations take the form of listening experiences, immersive moments and audio meditations exploring our relationships with the living world. His work has been presented at Tate Britain, Palais de Tokyo, Serpentine Gallery, KIKK festival, STRP festival, Sonar+D, CCCB Barcelona, Dutch Design Week, Nuit Blanche Paris, le 104, Centre Wallonie Bruxelles and Gaité Lyrique. He produces a quarterly show called “Edge of the forest” on NTS Radio, weaving together field recordings, data sonifications and science-inspired meditations. He is an alumnus of the Diverse Intelligences Summer Institute residency.

Cristina Tarquini

Cristina Tarquini is a Creative Director & Technologist. She transforms communication challenges into meaningful and memorable experiences. A tech and art lover, she blends the digital with the physical. Her work has been shown internationally at Somerset House and Ars Electronica, and featured in Best of Google Design 2020.

Marianne de Heer Kloots

Marianne de Heer Kloots is a computational linguist and cognitive scientist who is interested in studying and using artificial intelligence technologies to better understand human and non-human minds and languages. She completed a BA in Linguistics at Leiden University followed by an MSc in Brain & Cognitive Sciences at University of Amsterdam, and is currently a PhD candidate at the Institute for Logic, Language and Computation in Amsterdam. Marianne’s PhD research computationally explores the inner workings of deep neural networks for audio and text processing, as well as their use in modelling human cognitive signals. Previously, Marianne has run experiments to study human artificial language learning, as well as data-driven bioacoustic analyses to study the structure and timing of grey seal pup vocalizations when housed in groups.


Additional Information

In conclusion, our project explores the intersection of AI and interspecies communication through creative practices, to develop, inform and communicate the new ways of listening to the living world that AI makes possible.

Following the realisation of this virtual prototype, our team will seek additional opportunities to turn these visualisations into physical sculptures (e.g. through 3D printing) and to bring them together into a large-scale art installation representing the architectures of the beluga whale dialects these vocalisations are part of.


Project Backers

  • 3 Backers
  • 100% Funded
  • $3,000 Total Donations
  • $1,000.00 Average Donation