The QUT Ecoacoustics Symposium 2022 is offering the following workshops:
Ecoacoustics Basics Workshop
This workshop is designed for students and those new to ecoacoustics methods. This 2-hour workshop will cover the nature of sound, sound recording and recorders, sound representation (files, formats and manipulation), sound playback and basic effects and analyses (filtering, plotting, and more), freely available tools, calculating and interpreting acoustic indices, and false-colour spectrograms.
One of our facilitators will be Dr Anthony Truskinger. Anthony is a Research Software Engineer dedicated to improving ecoacoustics software. He develops and maintains Ecosounds, the Australian Acoustic Observatory, QUT’s Analysis Programs, and EMU (the Ecoacoustic Metadata Utility). Anthony is interested in sensor metadata, large-scale analysis, and helping other researchers. He has been a part of the QUT Ecoacoustics Research Group since 2009.
Note: The Practitioner and Basics workshops are offered as alternatives for the afternoon of the first day of the program.
This workshop will be broken into four 30 min sessions, covering:
- Sound basics: understanding sound
- Sound recording and labelling: fauna calls, sound playing, and annotation using Audacity and Raven
- Acoustic indices: introduction to and generation of false-colour spectrograms and indices
- Sound file wrangling: slicing, dicing, chopping, resampling, compressing, and more, with an introduction to command line and graphical tools
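The file-wrangling session covers tasks like slicing and resampling recordings. As a flavour of what this involves, here is a minimal, hypothetical sketch of slicing a window out of a WAV file using only Python's standard-library `wave` module; the file names, durations, and synthetic test tone are illustrative assumptions, not workshop material.

```python
# Hypothetical sketch: cutting a time window out of a WAV recording with
# Python's standard-library `wave` module. All names/values are illustrative.
import math
import struct
import wave


def write_test_tone(path, seconds=2, rate=16000, freq=440.0):
    """Generate a mono 16-bit sine-wave WAV to stand in for a field recording."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(rate)
        frames = bytearray()
        for n in range(seconds * rate):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * n / rate))
            frames += struct.pack("<h", sample)
        w.writeframes(bytes(frames))


def slice_wav(src, dst, start_s, end_s):
    """Copy the [start_s, end_s) window of `src` into `dst`."""
    with wave.open(src, "rb") as r:
        rate = r.getframerate()
        r.setpos(int(start_s * rate))                      # seek to start
        frames = r.readframes(int((end_s - start_s) * rate))
        params = r.getparams()
    with wave.open(dst, "wb") as w:
        w.setparams(params)        # header frame count is fixed up on close
        w.writeframes(frames)


write_test_tone("recording.wav")
slice_wav("recording.wav", "clip.wav", 0.5, 1.5)   # keep one second
```

Real workflows would more likely reach for command-line tools or audio libraries, but the idea is the same: read frames, select a window, write them back out.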
Ecoacoustics Practitioner Workshop
This workshop is aimed at the experienced ecoacoustics practitioner. The objective of this 2-hour workshop is to create a best practice guideline for terrestrial vertebrate ecoacoustic surveys in Australia. A diverse range of practitioners from non-profit, government and academic sectors will be invited to contribute their experiences. This will ensure that outputs are relevant to real-world monitoring. The intention is to publish the workshop’s findings as a peer-reviewed paper, but also to work with relevant stakeholders (government and non-profit organisations) to help encourage the adoption of best practice methods.
Our facilitators include Assoc. Prof. Susan Fuller and Dr Dani Teixeira.
Assoc. Prof. Susan Fuller is passionate about protecting our ecosystems and biodiversity through the use of interdisciplinary and innovative technological approaches in ecological research. She has a particular interest in using ecoacoustic innovations to monitor changes in ecosystem health and impacts on biodiversity.
Dr Dani Teixeira is a Research Fellow in Applied Ecology at the Queensland University of Technology and Bush Heritage Australia, interested in animal vocal behaviour, threatened species monitoring and soundscape ecology. She has worked with various organisations, including QUT’s Ecosounds Lab, the Australian Acoustic Observatory, Griffith University, CSIRO, University of the Sunshine Coast, the Queensland and NSW Governments, Birdlife Australia, the National Malleefowl Recovery Team and the Glossy Black Conservancy.
We will break into groups based on taxon expertise/experience (frogs, birds, mammals, insects, soundscape) and discuss key lessons learned for monitoring different taxa using acoustics (design, deployment and analysis). Particular focus will be on identifying key challenges and mistakes, and commonalities in solutions or approaches.
We will then break into groups based on ecoacoustic monitoring program/application (species occupancy, species distribution, detection of cryptic, rare or low-abundance species, indicator monitoring, species assemblages/communities, citizen science/community engagement, ecosystem health/condition, and noise (technophony)). In groups we will discuss key challenges and lessons learned for using acoustic monitoring for different purposes.
Make Your Own Recogniser Workshop
As part of the QUT Ecoacoustics Symposium we will be running a one-day workshop on building automated call recognisers. Places will be limited, and the workshop is suited to ecologists and landholders who are already collecting acoustic data and wish to analyse it. The goal of the workshop is for each participant to leave with a working call recogniser for a single species call of their choice. The workshop will cover some basic theory and practice of building a call recogniser using a convolutional neural network.
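A recogniser of this kind typically works on spectrograms rather than raw audio: the recording is converted into a frequency-by-time "image" that a convolutional network can classify. As a rough, hypothetical sketch of that first step (the sample rate, window settings, and synthetic 3 kHz "call" below are illustrative assumptions, not the workshop's actual pipeline):

```python
# Illustrative sketch: turning a short audio clip into the spectrogram
# "image" a CNN-based call recogniser consumes. All parameters are assumptions.
import numpy as np
from scipy.signal import spectrogram

rate = 22050
t = np.arange(rate) / rate                 # one second of audio
clip = np.sin(2 * np.pi * 3000 * t)        # synthetic stand-in for a 3 kHz call

# Short-time Fourier analysis: rows are frequency bins, columns are time frames.
freqs, times, sxx = spectrogram(clip, fs=rate, nperseg=512, noverlap=256)

# Log-scale the power so quieter call structure remains visible -- a common
# preprocessing step before feeding spectrograms to a neural network.
log_sxx = 10 * np.log10(sxx + 1e-10)

print(log_sxx.shape)                       # (frequency bins, time frames)
```

Each labelled example call becomes one such array; the network is then trained to separate arrays containing the target call from the negative examples.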
Our facilitators include Dr Philip Eichinski, and Dr Lance De Vine.
Dr Eichinski is a postdoctoral researcher and research software engineer specialising in machine learning for species call recognition. See his most recent publication: “A Convolutional Neural Network Bird Species Recognizer Built From Little Data by Iteratively Training, Detecting, and Labeling”.
Dr De Vine is a researcher in data science and machine learning with applications to multi-disciplinary research. Lance has been building species recognisers for five threatened species in the Gondwana Rainforests of Australia World Heritage Area, as well as for the Glossy Black Cockatoo.
This workshop will cost an additional $75.
- This is a face-to-face workshop and all attendees will be required to register for the main symposium as well as attend the workshop on Friday.
- Participants will need to attend an online pre-workshop session and bring labelled example call data to the workshop (20-200 example calls, and 20-100 negative examples from real recordings), plus a few hours of unlabelled audio known to contain the call of interest; exact details will be provided to participants.
- To undertake the workshop, participants need to be comfortable executing simple command-line programs; some programming experience (e.g. R or Python) is advantageous but not required. All the tools used to build the recogniser will be freely available, as will the workshop notes.
- This is part of an open science initiative. Participants need to be willing to make their recogniser available under an open-source licence (Apache 2.0 https://opensource.org/licenses/Apache-2.0) so that others may use it, and for the recogniser to be published in a registry.
- Submission and acceptance of an Expression of Interest; details below.
Expression of Interest
Please email email@example.com the following:
- Your name, role and organisation
- A short description of your project including goals of the project, acoustic data you are collecting, species and call of interest. Please include a link to an example of the call you are interested in detecting or a short sound snippet containing the call.
- Confirmation that you will make your recogniser available under an open-source licence (Apache 2.0 https://opensource.org/licenses/Apache-2.0)
- Include ‘Recogniser workshop EOI’ in the subject line.
The deadline for submitting EOIs is Friday 16th September. We will notify successful applicants by Wednesday 21st September.