Blog
In the first of our new blog series, our CTO Steve Robbins shares an overview of our technology’s evolution and how it has underpinned our royalty revolution.
7th March 2025
I’d like to introduce you to the technology we’ve built entirely in-house at Audoo over the last few years and give you a flavour of some of the technology choices our team has made along the way. We’ll be following up in the future with more detail on various aspects of how we do things - design, security and privacy, automated testing and some specialised projects and partnerships.
Platform
The cloud platform is at the centre of everything we do here at Audoo: securely gathering data from our Audio Meters to identify tracks being played, capturing analytics and metrics to ensure operational health, and aggregating world-first data for reports and delivery to partners through dashboards and file transfer.
The platform itself is hosted with Amazon Web Services (AWS) and has been built with Clojure. We love Clojure as it’s quick and enjoyable to get work done with, produces efficient code and is great for working with and transforming data. The platform has been designed using the “microservices” architectural approach, meaning we divide the overall platform into smaller parts (“microservices”), each of which is responsible for a specific area.
For example, we have individual microservices for user account management, analytics event capture, identifying fingerprints, Audio Meter health checking and so on. This architectural technique works really well for our needs, allowing us to scale the areas that need more processing power and to iterate rapidly on features independently.
The platform has over 25 individual microservices so far, all sharing a common design pattern for monitoring, logging, deployment and the APIs they expose – allowing us to concentrate on the functionality rather than the infrastructure. The microservices are tested in isolation as part of our Continuous Delivery process to ensure quality before deployment, and again as part of regular platform tests that drive alerts in Slack and our publicly hosted status page at https://status.audoo.com/
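To give a flavour of what a shared design pattern across services buys you, here is a minimal sketch of a common health-check shape. This is purely illustrative – the names and fields are hypothetical, and our actual platform is written in Clojure rather than Python – but it shows the idea of every service answering monitoring questions in the same format, so platform tests and alerting can treat all 25+ services uniformly.

```python
# Hypothetical sketch of a shared health-check convention across
# microservices; names and fields are illustrative, not Audoo's actual API.
import json
import time

def health_check(service_name, checks):
    """Run each named check and summarise the results in a common shape,
    so every microservice exposes the same monitoring contract."""
    results = {name: check() for name, check in checks.items()}
    return {
        "service": service_name,
        "healthy": all(results.values()),
        "checks": results,
        "timestamp": int(time.time()),
    }

# Example usage: a (hypothetical) fingerprint-matching service
# reporting the state of two of its dependencies.
status = health_check("fingerprint-matcher", {
    "database": lambda: True,
    "queue": lambda: True,
})
print(json.dumps(status))
```

Because every service returns the same shape, one generic poller can feed Slack alerts and the status page without per-service glue code.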
Audio Meters
The public face of our technology is of course our Audio Meter™, which has been many years in the making. We’ve had dozens of iterations on the external housing and the internal circuit board to get to the reliable, manufacturable, sleek and easy-to-install version that we now manufacture here in the UK.
An early prototype circuit board that was used to prove the concept can be seen below, measuring over 30cm across, with microphones sticking out on the edges. The team worked long and hard with external specialists to get to the multi-layer board, of which we have now built thousands, measuring approx. 10cm x 4.5cm with the microphones directly on the board.
We’ll come back to our manufacturing process in a future post – covering the testing we put every board through as it’s built and assembled into the finished Audio Meter. We love data and visibility, so we have real-time test results from our factory on an internal manufacturing dashboard, where we can see boards at each test stage and the detailed test results as the factory runs through the process. Below you can see a board that has failed at the second testing stage, as well as an example of the detail captured in the audio test stage.
The embedded software that controls the Audio Meter is developed in a combination of Python, C and C++. The design goals for the software are to ensure reliability, privacy, security, simplicity and robustness, all while using as little power as possible – in fact, our Audio Meters consume less power than a standard USB phone charger. Our aim is for the Audio Meter to be plugged in and for it to “just work”… we have cellular connectivity inside, so as soon as it has power we can connect to our cloud platform and get recognising.
Operating in the opposite way to smart speakers (which need to isolate and recognise spoken voice commands), we want to remove speech and noise from the audio to isolate the music being played. As part of this audio processing, we run a machine learning model on the Audio Meter to score the probability that music is playing, and we capture sound levels and other metrics. The audio from the microphones is never stored on the Audio Meter, nor is it sent to our cloud platform. We instead generate and send an “acoustic fingerprint” to our cloud to match and identify the music that is playing – this ensures privacy for the venues we are installed in. As part of our reliability efforts – for example, working around occasionally poor reception – Audio Meters are capable of offline fingerprinting, where they store the fingerprints securely and retry until we know they have been received by our cloud platform. In a future post, we’ll go into detail on the automated testing and checks we carry out when releasing software updates for the Audio Meters around the world.
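The store-and-retry behaviour can be sketched in a few lines. This is a simplified, hypothetical illustration (the class and function names are invented, and the real firmware’s queue is persisted securely on the device), but it captures the core rule: a fingerprint is only removed from the local buffer once the cloud platform has acknowledged receipt.

```python
# Illustrative sketch of an offline fingerprint queue with retry.
# Names are hypothetical; the real Audio Meter firmware (Python/C/C++)
# persists its queue securely rather than holding it in memory.
from collections import deque

class FingerprintQueue:
    """Buffer fingerprints locally and drop each one only after the
    cloud platform acknowledges receipt."""
    def __init__(self, send):
        self.send = send          # callable: fingerprint -> bool (acknowledged?)
        self.pending = deque()

    def submit(self, fingerprint):
        self.pending.append(fingerprint)
        self.flush()

    def flush(self):
        # Retry in order; stop at the first failure (e.g. poor reception)
        # and keep the remainder queued for the next attempt.
        while self.pending:
            if not self.send(self.pending[0]):
                break
            self.pending.popleft()

# Simulate a connection that fails once, then succeeds.
attempts = []
def flaky_send(fingerprint):
    attempts.append(fingerprint)
    return len(attempts) > 1      # first attempt is not acknowledged

q = FingerprintQueue(flaky_send)
q.submit("fp-001")   # first attempt fails; fingerprint stays queued
q.flush()            # reception restored: delivered and removed
```

The key design choice is that acknowledgement, not transmission, drives removal – so a dropped connection can never silently lose a play.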
Dashboards and Apps
You will have seen a few screenshots of our internal web dashboard in this blog – we use this tool to manage venues and installations, provide support, diagnose and fix problems, manage catalogue metadata, get ad-hoc reports and insights, and so much more. We use Ionic, which allows the same codebase to also provide a mobile app used by our installation partners. We’re immensely proud of our internal tooling and always get great feedback when demoing it to partners and investors. Last year we also launched an externally accessible dashboard for partners, which allows users to dive into reports and real-world data insights from all over the globe.
Our design process is lightweight but effective – moving from understanding the product requirements to wireframes, then to high-fidelity click-through demos in Figma. Once the team has agreed the flow, our cloud and data team works on APIs – marked up as comments on the design – in parallel with the UI build-out.
Reporting and Data
As our central product is reporting the music played to our partners, naturally a huge part of our platform centres around processing the millions of analytics events we receive every day to build out the reports. We make use of several AWS services to drive this – central to it is a Redshift data warehouse, with a Kinesis stream providing real-time data, and Step Functions, Elastic Map Reduce and loading/reporting code processing batch runs overnight. As catalogue data is constantly updated and improved, during report delivery we take a “snapshot” copy of the data being delivered, ensuring analytics dashboards are based on the same data delivered to partners. We load and transform data into a Postgres caching database to give our dashboard insight views the performance they need.
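The snapshot idea is worth a small illustration. This is a deliberately simplified, hypothetical sketch (the names are invented, and in practice this happens in the data warehouse rather than in application code), but it shows why freezing the catalogue rows at delivery time matters: later metadata corrections should not change what a partner was actually sent.

```python
# Hypothetical sketch of snapshotting report data at delivery time,
# so later catalogue corrections don't alter what a partner received.
import copy

# A live, continually improved catalogue (illustrative data).
catalogue = {"track-1": {"title": "Song A", "writer": "Jane Doe"}}

def deliver_report(track_ids):
    """Freeze the catalogue rows used in this report; dashboards read
    from the snapshot, not the live (mutable) catalogue."""
    snapshot = {tid: copy.deepcopy(catalogue[tid]) for tid in track_ids}
    return {"rows": snapshot}

report = deliver_report(["track-1"])
catalogue["track-1"]["writer"] = "J. Doe (corrected)"  # later metadata fix
# The delivered report still reflects the data as it was sent.
print(report["rows"]["track-1"]["writer"])  # -> "Jane Doe"
```

The live catalogue keeps improving, while every delivered report remains a faithful record of that delivery.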
Summary
I’m incredibly proud of the team and what we’ve built at Audoo over the last few years – taking Ryan’s vision from a proof of concept to a super reliable product that we’re rolling out around the world.
I hope you enjoyed this introduction to what we’ve built and look out for more detailed follow up posts coming soon!
Cheers,
Steve.