Baidu Shares Fall After Ernie AI Chatbot Demo Disappoints (arstechnica.com) 32

Shares of Baidu fell as much as 10 percent on Thursday after the web search company showed only a pre-recorded video of its AI chatbot Ernie in the first public release of China's answer to ChatGPT. From a report: The Beijing-based tech company has claimed Ernie will remake its business and for weeks talked up plans to incorporate generative artificial intelligence into its search engine and other products. But on Thursday, millions of people tuning in to the event were left with little idea of whether Baidu's chatbot could compete with ChatGPT.

During the highly publicized and eagerly anticipated news conference for Ernie, Baidu founder Robin Li stood beside an open chat screen, narrating questions that had earlier been typed into the chatbot. He admitted the company was only showing a demo of the technology that it had prepared earlier. Li said some users would soon be able to test out Ernie on their own but did not provide a timeline for a full public rollout. The company is starting with a limited public release to business partners. Ernie's planned launch comes as US groups such as OpenAI and Google continue making strides in pushing forward their development of generative AI. OpenAI this week released GPT-4, its latest AI model that it claims can beat some humans on tough professional tests such as the US bar exam.

This discussion has been archived. No new comments can be posted.


  • by rsilvergun ( 571051 ) on Thursday March 16, 2023 @10:28AM (#63375723)
    everyone knows Bert's the smart one.
  • by Ol Olsoc ( 1175323 ) on Thursday March 16, 2023 @10:38AM (#63375755)
    Wasn't "Ernie" the name of a service that pranked spam phone callers by playing recordings of a senile old man who made responses to the calls that sounded pretty convincing for a while? I listened to some examples of them and they were hilarious.
    • Re: (Score:3, Funny)

      by Tablizer ( 95088 )

      No, that was the Presidential debates.

    • Re:Duplication? (Score:5, Informative)

      by pz ( 113803 ) on Thursday March 16, 2023 @11:23AM (#63375905) Journal

      Wasn't "Ernie" the name of a service that pranked spam phone callers by playing recordings of a senile old man who made responses to the calls that sounded pretty convincing for a while? I listened to some examples of them and they were hilarious.

      That was Lenny, an absolutely brilliant bit of engineering.

      https://www.lennytroll.com/ [lennytroll.com]

      • Wasn't "Ernie" the name of a service that pranked spam phone callers by playing recordings of a senile old man who made responses to the calls that sounded pretty convincing for a while? I listened to some examples of them and they were hilarious.

        That was Lenny, an absolutely brilliant bit of engineering.

        https://www.lennytroll.com/ [lennytroll.com]

        Ahh - thanks much for letting me know - listening to Lenny right now.

      • Wasn't "Ernie" the name of a service that pranked spam phone callers by playing recordings of a senile old man who made responses to the calls that sounded pretty convincing for a while? I listened to some examples of them and they were hilarious.

        That was Lenny, an absolutely brilliant bit of engineering.

        https://www.lennytroll.com/ [lennytroll.com]

        Damn! That is exactly like an answering-machine idea I had 40 fucking years ago! Mine wasn't going to be so snarky, though. This is great!

  • by turp182 ( 1020263 ) on Thursday March 16, 2023 @10:38AM (#63375757) Journal

    All of the major tech players, internationally, are vying to dominate in the AI sector, even though they themselves probably don't understand exactly what that means.

    Enough so that stock prices can swing wildly for contenders who weren't first (OpenAI can screw up because they were first out of the gate; Google's stock got smacked by a bad demo, and here's another example).

    It's interesting to watch and ponder.

    Myself, selfishly, I'd like to see the OpenAI Copilot model become available for private use (self-hosted, probably in Azure) with 10k+ token input, so that extensive system analysis could be performed for quality and modernization efforts. That would be GOLD. I don't need it to code, just to answer my deep technical questions about existing code (I can take it from there...).

    Sort of like the CAST structure/system analysis on drugs (or AI, which would be accurate), https://www.castsoftware.com/ [castsoftware.com]
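The context-window limitation described above can be partly worked around today by chunking a codebase into windows that fit the model's token budget. A minimal sketch — the `chunk_source` helper and the 4-characters-per-token ratio are my own rough assumptions for illustration, not any real tool's API:

```python
# Rough sketch: split source text into chunks that fit a model's
# context window. Assumes ~4 characters per token, which is only a
# heuristic; a real pipeline would use the model's actual tokenizer.

def chunk_source(text: str, token_budget: int = 10_000) -> list[str]:
    char_budget = token_budget * 4          # crude chars-per-token estimate
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if size + len(line) > char_budget and current:
            chunks.append("".join(current))  # close the current window
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))      # flush the last window
    return chunks

# Example: a fake 54k-character "codebase" splits into several windows.
fake_code = "def f():\n    pass\n" * 3000
windows = chunk_source(fake_code, token_budget=2_000)
print(len(windows), max(len(w) for w in windows))
```

A real version would also split on function/class boundaries rather than raw lines, so each chunk stays semantically coherent for the model.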

    • by gweihir ( 88907 )

      This is the typical situation when there is lots of hype and very little substance. Which is exactly what we have.

      • by turp182 ( 1020263 ) on Thursday March 16, 2023 @11:18AM (#63375887) Journal

        OpenAI/GitHub Copilot isn't hype. It adds considerably to my productivity, but it is a specific-case tool powered by a large language model.

        I've learned, and am still learning, how to write the comments/prompts that best get it to do its thing. And the general "this is what you probably want to type next" capability can be pretty freaky (and time-saving in general).

        (I ignore potential copyright/code-misuse issues; those will get settled over time and aren't a concern of mine at this time. It really helps with finding capabilities or methods that I'm not familiar with, say in Unity, for example 3D vector operations.)

        • by gweihir ( 88907 )

          So all you have is a very preliminary evaluation for your own specific use-case and that is enough to claim "not hype"? Looks to me like you do exceptionally shoddy work. For that use-case it may well be "not hype".

          • I can tell you haven't used it (not tried it, but USED). I've been paying for it for about half a year.

            It trains to the user's style (in a way that would be useless to others). The more it runs, the more it knows what I do, and its ability to anticipate can be breathtaking.

            It gives me more time to consider design, architecture, and research by speeding up the implementation side.

            Regarding my work, I don't claim to know anything, but demand for my time greatly exceeds the time I allow (the Unity commen

            • by gweihir ( 88907 )

              Well, I actually do not have a use-case for it. I also have followed AI research for about 35 years now and have a pretty good idea what this thing can and cannot do. I am typically paid to think and looking up stuff that is already out there does not cut it.

              Now, I am not saying this thing is useless. I am saying a lot of what people hope or fear this thing can do is hype and it cannot actually do these things or at least not competently or reliably. Now, if you are not a "google coder" and actually are cap

    • All of the major tech players, internationally, are vying to dominate in the AI sector, even though they themselves probably don't understand exactly what that means.

      Nobody knows what it means, but everyone can see that it's transformative, so everyone smart is trying to keep a hand in.

      I'd like to see the OpenAI Copilot model be available for private use (self-hosted, probably in Azure) with 10k+ token input so that extensive system analysis could be performed for quality and modernization efforts.

      More open-source language models seem to be coming along now, though they aren't yet as accessible as the image generators. I'd argue that's a better place to spend effort.

  • Of course, if some small wannabe emperor wants things done _now_, a project that takes at the very least half a decade will still take half a decade. All you can do is fake something, and they did. At the moment, Artificial Ignorance is one of the few fields where China is still behind the US. My prediction is that this will have been rectified in about 10 years or so.

    • There's no reason to believe that China is going to catch up to the rest of the world on this kind of tech for the foreseeable future. Even with all the advantages (access to everyone else's IP, the world's strongest manufacturing base, centralized control of R&D and society in general, etc etc) China has lagged solidly behind in most areas. Some love to bring up China's 5G tech, but they hold only about 15% of the critical patents there, and that's the tech they're doing best in.

      I hasten to add that th

      • by gweihir ( 88907 )

        Yeah, keep telling yourself that. Better not read this article here then, it may destroy your warm, fuzzy feeling of being safe:
        https://www.theguardian.com/wo... [theguardian.com]

        Also note that this research was funded by the US State Department and hence likely paints too positive a picture....

        • Also note that this research was funded by the US State Department and hence likely paints too positive a picture....

          So about that: I haven't read the report mentioned in the article you linked (it wasn't linked from there, sadly), but I did take a quick glance at the ol' WP [wikipedia.org] regarding its origin. Now, I don't want to go all ad hominem here without actually reading the report, but there's enough to be suspicious about that I'd want to do that before I accepted a summary of it.

    • by Midnight_Falcon ( 2432802 ) on Thursday March 16, 2023 @11:27AM (#63375919)

      Of course if some small wannabe emperor wants things done _now_, a project that takes at the very least half a decade will still take half a decade. All you can do is fake something and they did. At the moment, Artificial Ignorance is one of the few fields where China is still behind the US. My prediction is that will have been rectified in about 10 years or so.

      I think that's extremely wishful thinking for China's rise. The problem is that development of novel technologies like AI requires a more open society and corpus of works to analyze. When your AI needs to be trained in a 1984-esque world where the government controls access to information, education, and all other aspects of society, it will have deficits. When it is programmed by people who haven't had the benefit of open access to information and thought, it will be sorely lacking in creativity.

      One of Silicon Valley's not-so-secret secrets is that cannabis and psychedelics have inspired tons of solutions. The influence is felt deeply in the Apple computer family and OS X (even its original design, and Steve Jobs himself!), Cisco routers and switches (top engineers used to trip and go to Phish shows to come up with solutions that worked...), etc etc. In China such behavior is steadfastly frowned upon; the government would not allow such drug users to get nice jobs at tech companies, let alone tolerate them encouraging others to try psychedelics. They'd rather hack foreign companies, steal code, and try to reverse engineer it than empower their own people to make it themselves. That's because with that empowerment also comes access to information that might challenge the regime/government/public policy, e.g. Taiwan is a real place and Hong Kong doesn't want to be governed by you.

      That's just one small aspect of how China's closed society, harsh rules, and totalitarian control stifle innovation and exploration. The result is crappy copy-cat products like this imitation of OpenAI's work.

      • by gweihir ( 88907 )

        I have absolutely no wishful thinking as to China rising. I can see reality, though.

        • The reality I see is attempts at copycattery achieving various levels of success, including abject failure like we see with this AI product. They need the rest of the world to innovate first because they lack the ability to do it themselves.
          • by gweihir ( 88907 )

            That is the old China. They are catching up in various fields. Underestimating an enemy like China is exceptionally dangerous.

      • One of Silicon Valley's not-so-secret secrets is that cannabis and psychedelics have inspired tons of solutions. The influence is felt deeply in the Apple computer family and OS X (even its original design, and Steve Jobs himself!),

        Back in the early days, when Jobs actually interviewed employees, he famously asked applicants if they had taken LSD. And you'd better answer "Yes!"

  • I know nothing about Chinese, but I can't help but wonder if the tremendous difference between it and Germanic languages could result in AI-based chat performing differently (worse?) in Chinese than it does in, say, English. After all, what these chatbots basically do is put one word after another based on statistical weights, so surely the granularity of a language (are the basic components phonetic, or "words", or do glyphs convey even larger sets of meaning, etc.) must have serious implications on how we
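For what it's worth, the granularity difference is easy to see with a toy comparison: English splits naturally on whitespace into word-level units, while Chinese has no spaces, so a naive split works per character, and each glyph tends to carry word- or morpheme-level meaning. A sketch of that contrast — naive splits only; real chatbots use learned subword vocabularies (BPE and the like), not either of these:

```python
# Toy illustration of tokenization granularity across languages.
# Real tokenizers use learned subword vocabularies, not these naive splits.

def naive_english_tokens(text: str) -> list[str]:
    return text.split()                              # whitespace-delimited words

def naive_chinese_tokens(text: str) -> list[str]:
    return [ch for ch in text if not ch.isspace()]   # one unit per glyph

english = "the cat sat on the mat"
chinese = "猫坐在垫子上"   # roughly the same sentence

print(naive_english_tokens(english))  # 6 word-level units
print(naive_chinese_tokens(chinese))  # 6 character-level units
```

Both come out to six units here, but the units are very different beasts: an English unit is a whole word, while a Chinese unit is a single glyph that may itself be a word or part of a compound — which is exactly the kind of granularity mismatch the parent is wondering about.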

  • So they're only gonna feed it censored CCP propaganda data, so even if it operated right it would only be aware of 1/10 of the information on the internet. Yo Ernie, write me a poem about democracy.
  • AI can't simply be copied; it needs a ton of training data and model changes.
