
2D To 3D AI Startup Was Actually Humans Doing the Work Manually (404media.co)

Slash_Account_Dot writes: An artificial intelligence company, whose founder Forbes recently included in a 30 Under 30 list, promises to use machine learning to convert clients' 2D illustrations into 3D models. In reality, the company, called Kaedim, uses human artists for 'quality control.' According to two sources with knowledge of the process interviewed by 404 Media, at one point Kaedim often used human artists to make the models. One of the sources said that, at one point, workers produced the 3D designs whole cloth themselves, without the help of machine learning at all.

The news pulls back the curtain on a hyped startup and is an example of how AI companies can sometimes overstate the capabilities of their technology. Like other AI startups, Kaedim wants to use AI to do tedious labor that is currently done by humans. In this case, that labor is 3D modeling, a time-consuming job that video game companies already outsource to studios in countries like China.


Comments:
  • by Fons_de_spons ( 1311177 ) on Wednesday September 06, 2023 @12:39PM (#63828102)
    They were making training data?
  • by istartedi ( 132515 ) on Wednesday September 06, 2023 @12:40PM (#63828108) Journal

    What's old [wikipedia.org] is new again.

    • I was thinking Theranos myself. Hype something up that isn't even close to working, then substitute results from somewhere else, all while floundering to get the real thing working. Just claim it's "quality control" or you're "working out the bugs".
      • I'm pretty sure the weird Theranos woman was one of Forbes' "30 under 30" as well. Collectively Forbes' "30 under 30" have been sentenced to way more than 30 years.
  • by narcc ( 412956 ) on Wednesday September 06, 2023 @12:42PM (#63828112) Journal

    An AI company overstating their claims? Say it isn't so!

    Really, every AI company dramatically overstates their claims, but an outright Mechanical Turk really takes the cake.

    I'm a little curious as to what their end-goal was. (I did not RTFA, in accordance with local custom.) Were they hoping to get enough training data to actually do the thing they said they could do? Were they hoping to quickly sell to the least diligent buyer in history?

    • by Anonymous Coward

      They were probably hoping for billions to come pouring out of investor wallets in some ridiculous IPO scam 6 months from now after the Hype and Clickbait marketing team sold the market a convincing story to feed a stock ticker for just long enough to cash out the stock options.

      The lies may be feeding a lot more lemmings these days, but Greed's motivation hasn't changed. Neither has a corrupt market.

    • by sodul ( 833177 )

      When I was working in the self-driving car industry, this is how our cars could be 'driverless': someone would make key decisions for the car several times per mile from a desk located a few miles away ... and this was on a relatively short loop.

      When I see such a car on the road I still stay away from it, if not to avoid a potential crash, then to avoid potential eye damage:
      https://cleantechnica.com/2021... [cleantechnica.com]

  • by Anonymous Coward

    "An artificial intelligence company, whose founder Forbes included in a 30 Under 30 list recently..."

    Elizabeth Holmes was 19 years old when she founded Theranos.

    SBF ran a rather infamous Bahamian operation consisting of mainly twentysomethings.

    The Forbes Under 30 list is starting to sound like America trying to convince America there's real value from a teenage voter.

  • AI-controlled organic wetware is the next big thing

  • At this point I respect the grifters more than their chumps.

    • At this point I respect the grifters more than their chumps.

      Well, except for the fact that many of the most gullible chumps are decision-makers who are laying off people and attempting to replace them with the aforementioned AI. Even if those people eventually get hired back (or hired elsewhere), it's those people whose lives are being disrupted.

      It's not like AI is replacing the managers...

  • It was 2001 (Score:4, Funny)

    by gillbates ( 106458 ) on Wednesday September 06, 2023 @01:21PM (#63828184) Homepage Journal

    It was 2001, and as part of my undergrad education, I listened to a presentation by a University of Illinois professor detailing how he (or rather his grad students) had automated the process of creating 3D models from pairs of 2D images.

    3D scene reconstruction from 2D images is already a solved problem in computer science. If you're hiring actual artists to do this for you, may I suggest hiring a CS grad instead?
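
For readers who haven't seen the classical pipeline being alluded to above, here is a minimal sketch of two-view triangulation in Python with OpenCV. It assumes the hard parts are already solved (calibrated cameras with known poses, plus correctly matched keypoints), and every matrix and pixel coordinate below is a made-up placeholder, not anything from the article or from Kaedim.

```python
# Minimal two-view triangulation sketch: recover 3D points from two images,
# assuming known camera intrinsics, known relative pose, and matched pixels.
import numpy as np
import cv2

# 3x3 intrinsics (focal length and principal point), assumed known.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Camera 1 at the world origin; camera 2 shifted one unit along +X.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Pixel coordinates of the same two physical points in each image (2xN).
pts1 = np.array([[300.0, 350.0],   # x coordinates in image 1
                 [200.0, 260.0]])  # y coordinates in image 1
pts2 = np.array([[280.0, 330.0],   # x coordinates in image 2
                 [200.0, 260.0]])  # y coordinates in image 2

# Triangulate to homogeneous coordinates, then convert to Euclidean 3D.
points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
points_3d = (points_h[:3] / points_h[3]).T
print(points_3d)  # roughly [[-1, -2, 40], [1.5, 1, 40]] for these inputs
```

The fragile part is everything the sketch assumes away: estimating the camera matrices and matching points reliably. A single hand-drawn illustration gives you none of that, which is where the disagreement in the replies below comes from.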

    • Re:It was 2001 (Score:5, Informative)

      by quantaman ( 517394 ) on Wednesday September 06, 2023 @02:15PM (#63828280)

      It was 2001, and as part of my undergrad education, I listened to a presentation by a University of Illinois professor detailing how he (or rather his grad students) had automated the process of creating 3D models from pairs of 2D images.

      3D scene reconstruction from 2D images is already a solved problem in computer science. If you're hiring actual artists to do this for you, may I suggest hiring a CS grad instead?

      From multiple 2D images that are nice, clean, and extremely consistent, or from single 2D images with significant clues (like shading). But especially when dealing with slight inconsistencies, it turns into a nasty problem fairly quickly. Just look at 3D scanner apps [makeuseof.com]: they work decently given a few dozen clear photos of a simple object. But try to scan something slightly more complicated, like someone trying to hold their hand still, and you get gibberish. You definitely need AI to make a 3D model based on a handful of illustrations.

      From the docs, this start-up actually smells legit. For instance, they want multiple images but don't require them [kaedim3d.com], which is what I'd expect from a modern-era 2D -> 3D transformer model.

      And FTA:
      “Surely to quality control you actually need something to judge the quality of,” they said, before adding that some workers only saw the initial 2D image a client had submitted and not an output generated by the AI.

      Note the "some workers" bit. In other words a lot of workers are seeing the first-pass AI generated model.

      So this start up is doing pretty much exactly what I'd expect. Use some initial AI to do the 2D -> 3D when they can, and then have artists fill in the gaps, which at this point are massive.

      The current page [kaedim3d.com] seems transparent about this:
      "Kaedim's machine learning and in-house art team combine to deliver production-quality assets in minutes."

      But a version from July 25th [archive.org] makes no mention of the art team (and it has the "magic" quote mentioned in the article). So I suspect they were hiding the human assistance and updated the web page when they got caught.
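
To make the "AI first pass, artists fill in the gaps" workflow described in this comment concrete, here is a rough, hypothetical sketch of how such routing could look. The function names, confidence score, and queue are invented for illustration; nothing here is Kaedim's actual code or API.

```python
# Hypothetical sketch of an "AI first pass, human cleanup" pipeline.
# generate_model(), the confidence score, and the queue are placeholders
# invented for illustration, not Kaedim's actual internals.
from dataclasses import dataclass
from queue import Queue
from typing import Optional, Tuple

@dataclass
class Asset:
    request_id: str
    source_image: bytes              # the client's 2D illustration
    mesh: Optional[bytes] = None     # candidate 3D model, if the ML pass produced one
    needs_human_work: bool = True

artist_queue = Queue()  # work items routed to the in-house art team

def generate_model(image: bytes) -> Tuple[bytes, float]:
    """Placeholder for the ML first pass: returns (mesh, confidence)."""
    raise NotImplementedError("stand-in for the 2D -> 3D model")

def process_request(request_id: str, image: bytes,
                    confidence_threshold: float = 0.8) -> Asset:
    asset = Asset(request_id=request_id, source_image=image)
    try:
        mesh, confidence = generate_model(image)
        asset.mesh = mesh
        # Route low-confidence output to an artist to be finished (or redone).
        asset.needs_human_work = confidence < confidence_threshold
    except NotImplementedError:
        # No usable ML output at all: an artist builds the model from scratch.
        asset.needs_human_work = True
    if asset.needs_human_work:
        artist_queue.put(asset)
    return asset
```

Whether a setup like this counts as "quality control" or as humans doing the work mostly comes down to how often the low-confidence branch fires.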

      • I think there's a substantial difference between the amateur-level, "works on any phone" type of application which chokes on complicated scenery, and a professional tool with calibrated cameras, known perspective, known poses, etc... Finding the corner cases for software algorithms, especially 3D problems (which tend to be asymptotic), is not difficult, but once someone understands how and why the software behaves as it does, success becomes a matter of experiential knowledge.

        Because this was aimed at p

        • I think there's a substantial difference between the amateur-level, "works on any phone" type of application which chokes on complicated scenery, and a professional tool with calibrated cameras, known perspective, known poses, etc... Finding the corner cases for software algorithms, especially 3D problems (which tend to be asymptotic), is not difficult, but once someone understands how and why the software behaves as it does, success becomes a matter of experiential knowledge.

          The scenery when you're dealing with a phone app is going to be tricky to get rid of (not knowing the linear algebra behind it myself), but otherwise not having calibrated cameras, known perspective, known poses, etc. is a problem made worse, not better, when you're working with illustrations.

          Because this was aimed at professionals, it is reasonable that they could expect their users to undergo some training to develop their proficiency. A professional artist might reasonably expect to paint a few studies of a subject - perhaps a few hours to a dozen or so hours - before incorporating the subject into a greater work. And this is just for a single painting. The principles of getting good results from a 2D-3D transform are not difficult to understand, and it is often much easier to train the person (e.g., the Palm Pilot and handwriting recognition) than to get a computer algorithm to work with sub-optimal positioning. When you understand the principles of how the computer will attempt to reconstruct a 3D model from your images, it becomes easier to pose or position the camera in such a way that the algorithm has the easiest time reconstructing the set of 3D points making up the image.

          An algorithmic 2D-3D transform is going to want the exact same 3D model from different camera angles.

          An illustrator doing a study is going to want the character doing different 3D poses from different camera angles.

          Those are very

  • We can't be having too much Human in our Artificial Intelligence... false advertising.

  • by Chelloveck ( 14643 ) on Wednesday September 06, 2023 @01:52PM (#63828240)

    I propose a new update to Betteridge's Law: Any headline in popular media talking about AI can be answered with "no". No, it doesn't work that way. No, that's a dumb idea. No, AI isn't capable of that. No, you have no idea what AI is.

    I'm sure it could be generalized as "Any headline talking about the latest tech buzzword...". Blockchain is another excellent example.

    Furthermore it can be assumed that the reporter completely misunderstood what was actually being said, or the person saying it is running a scam. Or both.

    • I remember when "database" was the new buzzword. All of a sudden, any collection of files was now a "database", whether there was an RDBMS using them or not.

      • That's the way I still use the word. Any collection of data is a database, regardless of how it's accessed. It could be because I'm (usually) not directly involved with the storage of the data. I access it via abstract methods that could as easily apply to an SQL DB, a key/value store, or even just a plain text doc and grep. I say "get data matching these parameters" and it appears. I don't care about the implementation.
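
As a small illustration of that storage-agnostic view of a "database", here is a hypothetical sketch in Python: the same find() interface backed either by SQLite or by a regex scan over a plain text file. The class and method names are made up, and the sketch glosses over the difference between SQL LIKE matching and regular expressions.

```python
# Hypothetical sketch of storage-agnostic "get data matching these parameters"
# access: the caller doesn't care whether the backend is an RDBMS or a text
# file scanned like grep. Class and method names are invented for illustration.
import re
import sqlite3
from typing import Iterable, Protocol

class Records(Protocol):
    def find(self, pattern: str) -> Iterable[str]: ...

class SqliteRecords:
    """Backend 1: an actual RDBMS (expects a table `records(line TEXT)`)."""
    def __init__(self, path: str):
        self.conn = sqlite3.connect(path)

    def find(self, pattern: str) -> Iterable[str]:
        cur = self.conn.execute(
            "SELECT line FROM records WHERE line LIKE ?", (f"%{pattern}%",))
        return [row[0] for row in cur.fetchall()]

class GrepRecords:
    """Backend 2: a plain text file searched with a regex, grep-style."""
    def __init__(self, path: str):
        self.path = path

    def find(self, pattern: str) -> Iterable[str]:
        with open(self.path) as f:
            return [line.rstrip("\n") for line in f if re.search(pattern, line)]

def report(store: Records, pattern: str) -> None:
    # Calling code sees only the abstract interface, never the storage details.
    for line in store.find(pattern):
        print(line)
```

The caller in report() never learns which backend it got, which is the point the poster is making.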

  • The summary and article intentionally use language to imply scandalous Theranos-style fraud by begging the reader to make inductive assumptions.

    • It's often easy to get today's AI to do, say, 90% or 95% of a task like this, but incredibly difficult to get it to do 100% without noticeable errors.

      It would make perfect sense from a business and technical point of view to have human QA and touch-up/repair of 3D model faults.

      I haven't seen the original marketing material from the company though. Maybe they omitted this important detail of how they do the 2D to 3D.

      If so, a slap on the wrist for misleading marketing is needed. But it does not mean the whole AI application is
  • OK, shady as hell if they were telling people they were getting AI-generated results when they were human-generated... but beyond that... what is the news here?

    Using humans as quality control is only logical. AI company or not. No company will spend money without a reason.

    It's not my fault you're stupid. That you assumed he was the only person working there and computers were the next magic. Maybe try understanding what you're claiming to write about before your clickbait stupidity. Man I gotta get of

  • To build a good AI you need training data. Having lots and lots of manually curated training pairs is a great asset. What's fishy is the claim about their software, but as a path to a good AI it may pay off in the end. Worst case they pivot to Uber for 3D artists, but across borders, so labor is way cheaper.
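
If the manually produced models really are being banked for later training, the mechanics are simple enough. Below is a hypothetical sketch of recording (2D illustration, finished 3D model) pairs as supervised training examples; the directory layout and field names are invented, not anything Kaedim has described.

```python
# Hypothetical sketch of banking human-made results as supervised training
# pairs: each record links a client's 2D illustration to the 3D model an
# artist finished for it. Directory layout and field names are invented.
import json
import shutil
import uuid
from pathlib import Path

DATASET_DIR = Path("training_pairs")

def record_training_pair(illustration_path: str, finished_model_path: str,
                         made_by_human: bool) -> Path:
    """Copy the input/output files into the dataset and write a metadata row."""
    pair_id = uuid.uuid4().hex
    pair_dir = DATASET_DIR / pair_id
    pair_dir.mkdir(parents=True, exist_ok=True)

    # Keep the original file extensions (.png, .jpg, .obj, .fbx, ...).
    shutil.copy(illustration_path,
                pair_dir / ("input_2d" + Path(illustration_path).suffix))
    shutil.copy(finished_model_path,
                pair_dir / ("target_3d" + Path(finished_model_path).suffix))

    metadata = {
        "pair_id": pair_id,
        # Distinguishes fully hand-built models from AI output an artist fixed.
        "made_by_human": made_by_human,
    }
    (pair_dir / "meta.json").write_text(json.dumps(metadata, indent=2))
    return pair_dir
```

A flag like made_by_human also lets you later separate fully hand-built targets from AI output that an artist only touched up.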
  • by Jhon ( 241832 )

    Artificial artificial intelligence?

    It's like Victor/Victoria -- only with computers.

  • “The idea for Kaedim was born from a personal frustration when, 2 years ago, I was working on a project for re-creating a cathedral in 3D software for my university degree. Before being hands-on, the concept seemed straightforward to me, ‘the same way you draw on a piece of paper, you can also draw in 3D, how hard can it be?’” It can be hard. I don't know what would be more difficult to model than a cathedral.
  • by Walt Dismal ( 534799 ) on Wednesday September 06, 2023 @02:50PM (#63828358)

    I have a neat little startup to test AI use in creatures. It may have some bugs at the beginning but after a few Agile sprints I'll probably have those fixed.

    -- God

  • how AI companies can sometimes overstate the capabilities of their technology...

  • Have you noticed lately that popular social media sites are making you jump through more and more hoops? They're not trying to test whether you're human; they're trying to make you work for them for free and then call your output "AI". The whole sector is massively fraudulent.
