


Soundslice Adds ASCII Tab Support After ChatGPT Hallucinates Feature
After discovering that ChatGPT was falsely telling users that Soundslice could convert ASCII tablature into playable music, founder Adrian Holovaty decided to actually build the feature -- even though the app was never designed to support that format. TechCrunch reports: Soundslice is an app for teaching music, used by students and teachers. It's known for its video player synchronized to the music notations that guide users on how the notes should be played. It also offers a feature called "sheet music scanner" that lets users upload an image of paper sheet music and, using AI, automatically turns it into an interactive sheet, complete with notations. [Adrian Holovaty, founder of music-teaching platform Soundslice] carefully watches this feature's error logs to see what problems occur and where to add improvements, he said. That's where he started seeing the uploaded ChatGPT sessions.
They were creating a bunch of error logs. Instead of images of sheet music, these were images of words and a box of symbols known as ASCII tablature. That's a basic text-based system used for guitar notations that uses a regular keyboard. (There's no treble key, for instance, on your standard QWERTY keyboard.) The volume of these ChatGPT session images was not so onerous that it was costing his company money to store them and crushing his app's bandwidth, Holovaty said. He was baffled, he wrote in a blog post about the situation.
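For readers unfamiliar with the format, ASCII tab encodes each guitar string as a line of text, with fret numbers placed along it to mark when and where to play. The sketch below is purely illustrative -- it is not Soundslice's actual implementation -- and shows how such a line of tab might be turned into structured note events:

```python
import re

# A hypothetical, minimal illustration of the ASCII tab format.
# Each line names a string (e, B, G, D, A, E) followed by dashes;
# a fret number at column N means "play that fret at time N".
TAB = """\
e|-------0-------|
B|-----1---1-----|
G|---2-------2---|
D|-3-----------3-|
A|---------------|
E|---------------|
"""

def parse_tab(tab_text):
    """Return (string_name, column, fret) events in playing order."""
    events = []
    for line in tab_text.splitlines():
        if "|" not in line:
            continue
        string_name, body = line.split("|", 1)
        # Match one- or two-digit fret numbers and record their column.
        for m in re.finditer(r"\d{1,2}", body):
            events.append((string_name.strip(), m.start(), int(m.group())))
    # Sort by column so notes come out left to right, as played.
    return sorted(events, key=lambda e: e[1])

for string, col, fret in parse_tab(TAB):
    print(f"string {string}, column {col}, fret {fret}")
```

Real tab is messier (bends, slides, hammer-ons, uneven spacing), which is part of why supporting it in a scanner built for photographed sheet music was a nontrivial ask.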
"Our scanning system wasn't intended to support this style of notation. Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots? I was mystified for weeks -- until I messed around with ChatGPT myself." That's how he saw ChatGPT telling people they could hear this music by opening a Soundslice account and uploading the image of the chat session. Only, they couldn't. Uploading those images wouldn't translate the ASCII tab into audio notes. He was struck with a new problem. "The main cost was reputational: New Soundslice users were going in with a false expectation. They'd been confidently told we would do something that we don't actually do," he described to TechCrunch.
He and his team discussed their options: Slap disclaimers all over the site about it -- "No, we can't turn a ChatGPT session into hearable music" -- or build that feature into the scanner, even though he had never before considered supporting that offbeat musical notation system. He opted to build the feature. "My feelings on this are conflicted. I'm happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?" he wrote.
CEO gets free marketing and complains about it (Score:2, Informative)
News at 11
Re: (Score:2)
It wasn't free marketing: either the company suffers reputational damage for not supporting something a stupid AI said it could do, which will cost them money in lost customers, or they develop a new feature, which will cost them money. Your definition of definition is very strange.
Re: (Score:2)
Your definition of definition is very strange.
And that'll teach me to proofread; it should read "Your definition of free is very strange."
Just a taste (Score:2)
of how AI is taking over. Not in any kind of superintelligence way, but in permeating our online cultural conversation, to the point where the line is blurred between hallucinations and reality.
Orwellian result (Score:2)
ChatGpt says this works, so we better obey our AI masters and make this work.
Hm, maybe I can convince ChatGpt to tell everyone that short, fat guys with bad knees are attractive.
So the extreme hallucinations are still not fixed? (Score:3)
While hallucinations cannot really be fixed in LLMs, I had thought they would at least have gotten the more extreme ones under control. Apparently that cannot be done either. Such an impressive "knowledge" tool!
Re: (Score:2)
LLMs literally "hallucinate" every answer, it's just that those hallucinations coincide with reality a surprisingly high percentage of the time.
If your photos app is able to "erase" people from a picture, it has no idea what's behind that person. It hallucinates what might be behind that person, and gets it right a surprisingly high percentage of the time. But the pixels it generates do NOT represent what was actually behind that person; they are completely and fully generated from the "imagination" of the model.
I like this guy's creativity (Score:2)
If you have an ice cream shop, and people are talking up a flavor you don't actually have, you'd be smart to add it! It's free publicity.
If you're an app developer, and AI is attributing things to your software that it doesn't have, well why not add it! It just might draw more people to your app.