Twitch has finally released our music recognition extension!
It allows broadcasters to show what music is playing on the stream and posts all the songs, with lis.tn links, to the chat.
Let us know if you want to test it out (or if you know someone who would), or if you have an active music recognition for streams subscription and want to use it.
By the way, we'll be releasing the source code of the extension at some point. Aside from Twitch, it can also be used on radio stations' websites to show the currently playing song using long polling. Let us know if you want the source code, too.
Contact @AudDhelp or hello@audd.io
We donated $5000 to the Machine Intelligence Research Institute.
The long-term future of humanity is extremely valuable. Scientists estimate the chance of an AI destroying the enormous potential of the human race at 1 in 6, which is orders of magnitude greater than the risks posed by bioterrorism or global warming.
MIRI works on preventing the existential risks related to smarter-than-human AIs. They do research in math, machine learning, and game theory, and solve a lot of extremely interesting problems about utility functions and decision making.
If you decide to donate resources or money to charity, it's better to make the highest possible positive impact with what you give.
If your aim is to save lives or make them better right now, there are organizations like GiveWell.org that help with that.
But supporting MIRI and other organizations focused on preserving the long-term potential of humanity (e.g., the Future of Humanity Institute at Oxford) saves far more lives than supporting usual charities, even the most effective ones.
Support MIRI: intelligence.org/donate/
Released version two of our Twitch extension.
No external setup is needed anymore. Users just pay $45 or paste their API tokens, insert a browser source in OBS or other compatible streaming software, and that's it. The recognition starts once they start their Twitch broadcast.
Powered by AudD Music Recognition for streams.
If you want to test the extension for free, reach out to hello@audd.io
You can now turn on the auto-renewal of your subscription on the Dashboard.
Important: We've decided to automatically renew the subscriptions of users who keep sending requests after their subscription ends. This helps you avoid interruptions in the service. We know that many of our customers don't use Telegram much and therefore don't see messages from our Telegram bot; this leads to many accidental losses of API access, because some customers only renew their subscriptions when they start getting errors from the API.
If you don't send any API requests within two days after your subscription period ends and haven't manually enabled auto-renewal, we won't automatically renew your subscription, and you'll be unsubscribed.
If you want to unsubscribe but aren't able to stop sending requests, please let us know.
If you get charged without wanting to, please let us know, and we'll fully refund the payment.
You can contact us with any questions: @AudDhelp or hello@audd.io.
We're experiencing downtime caused by a major OVHCloud outage.
We'll try to restore the services that were dependent on the OVHCloud infrastructure from the latest backups.
The API for audio files (both usual and enterprise) should be fully operational now. The API for audio streams should be mostly operational.
We apologize for the inconvenience and downtime. It was caused by a fire in one of OVHCloud's buildings; the fire destroyed their SBG2 datacenter and a part of SBG1.
For now, we aren't able to restore the tokens of most users who signed up after March 5, or any data belonging to four of our clients. We store a backup at the SBG5 datacenter, which wasn't affected by the fire but is disconnected from the Internet. We hope to restore the data as soon as possible.
Octave Klaba (OVHCloud founder) on Twitter: "Update 5:20pm. Everybody is safe. Fire destroyed SBG2. A part of SBG1 is destroyed. Firefighters are protecting SBG3. No impact on SBG4."
Update: SBG5 is not a datacenter but a region, and the actual data was physically stored in one of SBG1-4. We don't yet know which one, or whether the data is restorable.
We slightly updated @auddbot. Now it also supports links to video streaming services (and timecodes!).
The bot also accepts video and audio files of any length; the music is recognized from the first 12 seconds of the file.
You can also request BPM, music key, and loudness, or share the recognized song with other chats using the Share button.
The internet is full of homophobia and transphobia, full of hate towards people for their views, skin colors, sexual orientations, and gender identities.
For the third year in a row, we're celebrating Pride month (including on our Russian VK page, vk.com/audd). We want to remind our subscribers that all people are equal, intelligent creatures, no matter whether they belong to the LGBTQ+ community, and that hate towards the LGBTQ+ community is absurd and caused by ignorance and biases.
Happy Pride!
On June 18, we're releasing the first album from a new project by one of our composers. It's truly epic!
Check out the album preview: https://www.youtube.com/watch?v=i7YMbao9W3I
We’ve made some changes to our Enterprise endpoint:
- It now accepts URLs of web pages (e.g., YouTube links);
- Chunks are now 12 seconds long;
- You can specify the time at which the server should start music recognition;
- You can ask the server to parse the t parameter of the URL you give it and use it as the start time.
Our Reddit bot already utilizes those features; check out the source code. (An example request is sketched below.)
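As an illustration, here's a rough sketch of a request using the new options. The api_token and url fields match the regular API; the enterprise endpoint URL and the start-time parameter name below are placeholders rather than verified names, so check the Enterprise endpoint page on docs.audd.io for the exact ones:

# Hedged sketch: 'https://enterprise.audd.io/' and 'start_from' are placeholders; see docs.audd.io for the real names
curl https://enterprise.audd.io/ \
-F api_token='your-api-token' \
-F url='https://www.youtube.com/watch?v=ZY4nBPC3-J8' \
-F start_from='90'

The server should then split the audio into 12-second chunks starting from the given time and return metadata for the matches it finds.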
Between Dimensions, the first album from the new project by one of our composers, is out!
lis.tn/BetweenDimensions
We donated $25k to the Machine Intelligence Research Institute.
The long-term potential of humanity is unimaginably enormous. We can’t even dream of the number of stars our distant descendants will settle on and how diverse their forms, abilities, and feelings will be.
Scientists researching existential risks estimate the chance of that long-term potential being destroyed by AI at 1/10 within the next 100 years. For comparison, they estimate the probability of an existential catastrophe caused by a naturally-originated pandemic at 1/10000, and by climate change at 1/1000 per century.
AI researchers don't know how to build an artificial general intelligence in such a way that it won't destroy humanity; they think the default way AI will be created would lead to a catastrophe.
No one has come up with a way to solve the problem of aligning the preferences of a teachable agent with human preferences. It is well understood that it's impossible to achieve just by defining some rules or designing an accurate utility function (just like it's impossible to come up with a wish a genie would have to fulfill in the way you intended). Researchers have shown that any statically defined utility function won't be aligned with human preferences, and no one has come up with an algorithm that would create agents doing what we'd want them to do if we were smarter.
Among the approaches that could potentially work are, for example, algorithms that create agents that want to satisfy our preferences but are uncertain what those preferences are, so whenever they aren't sure what we would want them to do, they ask. Scientists and researchers don't yet know how to create such algorithms.
Experts in this field estimate the chance of an existential catastrophe caused by AI as a lot higher than 1/10.
The creation of the first AGI will be the event that will determine the history of humanity. The long-term potential of our species can be destroyed - but if we succeed, we’ll get an assistant that will help us solve all of the other problems humanity is facing.
The estimates of when AGI will be created vary among researchers. But if humanity knew that aliens would arrive on Earth in a few decades, it would already begin to prepare. The creation of AGI is an event far more important than an encounter with another intelligent species - and we almost certainly know that we have less than a century remaining. We believe that AI Existential Safety is the most important thing people can work on right now.
The Machine Intelligence Research Institute is one of the organizations working on reducing AI-related existential risk. They do research to ensure that smarter-than-human AI will have a positive impact.
You can read more on Artificial General Intelligence in “Human Compatible”, a book by Stuart Russell. He is a professor at UC Berkeley and a co-author of the most popular textbook on artificial intelligence, used by 1500 universities around the world.
You can read more on existential threats to humanity in “The Precipice”, a book by Toby Ord. He is a researcher at the University of Oxford’s Future of Humanity Institute and the founder of Giving What We Can.
The new version of our Discord bot identifies music from voice channels and audio/video files and links posted in text channels.
Video demo: https://youtu.be/HcORAQzwTdM
Try the bot on our Discord server: https://audd.cc/discord
Source code: https://github.com/AudDMusic/DiscordBot
Our extension got featured on the Chrome Web Store ☺️
audd.cc/chrome
The source code is available on GitHub
we've made some things faster! ⚡️
• reduced the latency of our music recognition for audio streams
• the enterprise endpoint now responds faster
• before, when you logged into our API dashboard, you'd see a message: "For >99% of requests, our API response time is lower than the time you spend reading this text." Now, the redirects are so fast you won't even notice them.

Our database keeps expanding; integrating the API keeps getting easier with all the AI tools; the first step remains as easy as running a single curl command:

curl https://api.audd.io/ \
-F api_token='test' \
-F url='https://audd.tech/example.mp3' \
-F return='apple_music,spotify'

Get an API token by signing up on the Dashboard, if you haven't yet!
If you've stopped using the API for any reason in the past, get in touch and we'll restart your trial!
If we lack any feature that you really want, let us know!
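By the way, if your audio is a local file rather than a URL, the public API docs show the same endpoint accepting a multipart file upload instead of the url field; a minimal sketch (the file name is just an example):

# Same request as above, but uploading a local file; replace the example file name with yours
curl https://api.audd.io/ \
-F api_token='test' \
-F file=@your-audio.mp3 \
-F return='apple_music,spotify'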
Our friends at the Machine Intelligence Research Institute — a nonprofit we've donated >$60k to — just released a book. It's called "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All".
The world is in a peculiar state.
Geoffrey Hinton receives a Nobel prize for his foundational work in AI, but says he (partly) regrets his life’s work and thinks there’s >50% chance that AI will literally kill everyone on the planet.
Countless scientists sign the statement that mitigating the risk of extinction from AI should be a global priority.
This book is an excellent explanation why.
It argues that building something that is superhumanly good at achieving goals, in a way that doesn’t cause a catastrophe, is hard. It's especially hard given the way modern AI is built: no one writes the code it’s made of; instead of code, modern AI is hundreds of billions to trillions of numbers that we don’t understand and that are automatically “grown” until the system becomes smart.
If you have a background in machine learning, you can read the additional online resources the book links to, and the papers behind them, to understand the technical details. In short: when a goal-oriented AI is very smart and has long-term goals, it will try to maximize the reward signal during training for instrumental reasons, to prevent gradient descent (the process by which the numbers inside artificial neural networks are grown) from changing its goals. This means that gradient descent looks for capable agentic systems with some long-term goals, without being able to distinguish between the goals we like and the goals we don’t like; AI’s internals are effectively black boxes. (And we already see this empirically starting to happen: see the alignment faking paper.)
It's very unfortunate that the title is not a metaphor and not an exaggeration; it's the best available understanding of the state of the field of AI.
The book has endorsements from leading scientists and public figures: Yoshua Bengio (a "godfather of AI" and the most-cited living scientist); Emmett Shear (former interim CEO of OpenAI); Vitalik Buterin, Stephen Fry, Ben Bernanke (Nobel laureate in economics and former Fed chair), Grimes, and *a lot* of professors.
We also endorse the book and would really like to recommend it to everyone.
We urge you to read it.
You can find it in any bookstore or on Amazon.
(If you've been a client of AudD for a long time, we would love to buy you a copy; get in touch.)