FreddieMeter analyzes your singing using machine learning models that have been trained on the original studio tapes of Freddie singing. The models judge your pitch (how well you hit the notes), your melody (how well you hit the notes in relation to each other), and timbre (how much your vocal style matches Freddie’s).
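The actual judging is done by neural networks, but the pitch-versus-melody distinction above can be illustrated with a toy sketch. Everything here (the function names, the semitone-based scoring formula, the 0-to-1 scale) is our own illustrative assumption, not FreddieMeter's implementation:

```python
# Toy illustration of "pitch" (absolute accuracy) vs. "melody"
# (relative accuracy) scoring. Frequencies are in Hz.
import math

def semitones(f_a, f_b):
    """Distance between two frequencies in semitones."""
    return 12 * math.log2(f_a / f_b)

def pitch_score(sung, ref):
    """Absolute accuracy: how close each sung note is to the reference.
    Mean error of one semitone or more scores 0; a perfect match scores 1."""
    errors = [abs(semitones(s, r)) for s, r in zip(sung, ref)]
    return max(0.0, 1.0 - sum(errors) / len(errors))

def melody_score(sung, ref):
    """Relative accuracy: how well the intervals between successive notes
    match, so singing the tune in a different key is not penalized."""
    sung_iv = [semitones(b, a) for a, b in zip(sung, sung[1:])]
    ref_iv = [semitones(b, a) for a, b in zip(ref, ref[1:])]
    errors = [abs(s - r) for s, r in zip(sung_iv, ref_iv)]
    return max(0.0, 1.0 - sum(errors) / len(errors))

# Singing the same tune transposed up a perfect fifth: the pitch
# score collapses, but the melody score stays perfect.
ref = [440.0, 494.0, 523.0]      # A4, B4, C5
sung = [f * 1.5 for f in ref]    # identical intervals, wrong key
print(round(pitch_score(sung, ref), 2))
print(round(melody_score(sung, ref), 2))
```

Timbre matching is a different kind of problem entirely (comparing vocal texture rather than note frequencies), which is why it needs a learned model rather than a formula like the ones above.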
These models were developed by Google Research in an effort to create machine learning models that are better at understanding music. Models like these could be used in the future to help you identify any song just by humming it, or to help you learn to sing better.
For best results, try FreddieMeter in a quiet room with wired headphones and a recent device. Even then, you might have trouble perfectly matching a voice that scientists are still trying to figure out.
All analysis of your singing happens on your device, so your vocals stay private to you. If you choose to share a video of your performance, that video is generated on our servers, but once created it isn't stored, used to train machine learning models, or used for anything else.
FreddieMeter was made with The Mercury Phoenix Trust, a charity founded by Brian May, Roger Taylor, and their manager Jim Beach in memory of Freddie Mercury, who died in 1991 from AIDS-related causes.
In the last 27 years, the Trust has funded over 1,000 projects across 56 countries in the global battle against HIV/AIDS, giving away over $17 million.