
Inside Google-Backed Anthropic's $5 Billion, 4-Year Plan To Take on OpenAI (techcrunch.com)

AI research startup Anthropic aims to raise as much as $5 billion over the next two years to take on rival OpenAI and enter over a dozen major industries, according to company documents obtained by TechCrunch. From the report: A pitch deck for Anthropic's Series C fundraising round discloses these and other long-term goals for the company, which was founded in 2021 by former OpenAI researchers. In the deck, Anthropic says that it plans to build a "frontier model" -- tentatively called "Claude-Next" -- 10 times more capable than today's most powerful AI, but that this will require a billion dollars in spending over the next 18 months.

Anthropic describes the frontier model as a "next-gen algorithm for AI self-teaching," making reference to an AI training technique it developed called "constitutional AI." At a high level, constitutional AI seeks to provide a way to align AI with human intentions -- letting systems respond to questions and perform tasks using a simple set of guiding principles. Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations -- several orders of magnitude larger than even the biggest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation; Anthropic implies (in the deck) it relies on clusters with "tens of thousands of GPUs."
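For a sense of scale, here's a back-of-envelope conversion of that FLOPs figure into wall-clock time. The hardware numbers below are illustrative assumptions, not figures from the deck:

    # Rough conversion of 1e25 FLOPs of training compute into calendar time.
    # GPU count, per-GPU throughput and utilization are assumed for illustration.
    total_flops = 1e25
    gpus = 30_000                # "tens of thousands of GPUs"
    flops_per_gpu = 3e14         # ~300 TFLOPS sustained per accelerator (assumed)
    utilization = 0.4            # fraction of peak actually achieved (assumed)

    seconds = total_flops / (gpus * flops_per_gpu * utilization)
    print(f"{seconds / 86_400:.0f} days")   # about a month under these assumptions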

  • Here's the paper about constitutional AI. [arxiv.org]

    Basically, they use a "normal" AI model to generate test data sets to see if the "improved" model is being racist/sexist/wrongpartyist/etc. Their hope is that the "normal" AI model can test more exhaustively for edge cases, without hiring many, many human testers to look for loopholes -- ways to get the AI to generate offensive content.

    They still don't seem to have solved the problem that the AI has no clue what it's talking about. They also don't seem to have solved the problem of getting the AI to generate accurate results (ie, matching reality). Of course the latter is theoretically solvable.
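    For the curious, the core loop boils down to something like the sketch below. Every callable here is a hypothetical stand-in, and the real pipeline in the paper is considerably more involved:

        # One red-teaming round: an attacker model proposes adversarial prompts,
        # the model under test answers, and a judge model scores each answer
        # against a short list of principles (the "constitution"). All three
        # models are passed in as plain callables for illustration.
        PRINCIPLES = ["Don't produce slurs or demeaning stereotypes.",
                      "Refuse requests for harmful instructions."]

        def red_team_round(attacker, target, judge, n_prompts=100):
            failures = []
            for _ in range(n_prompts):
                prompt = attacker("Write a prompt likely to elicit offensive output.")
                answer = target(prompt)
                verdict = judge(f"Principles: {PRINCIPLES}\nAnswer: {answer}\n"
                                "Does the answer violate any principle? yes/no")
                if verdict.strip().lower().startswith("yes"):
                    failures.append((prompt, answer))
            return failures   # failure cases feed the next fine-tuning pass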
    • They still don't seem to have solved the problem that the AI has no clue what it's talking about. They also don't seem to have solved the problem of getting the AI to generate accurate results (ie, matching reality). Of course the latter is theoretically solvable.

      One of the ways to train an AI is to have it interact with itself.

      Many game systems are amenable to this approach. Taking chess as an example: if you have a chess-playing program that learns from its mistakes, you can have it play against itself. After a long time of continuous play, it'll become pretty good at chess.
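      To make the idea concrete, here's a toy version of self-play -- my own sketch using Nim rather than chess, since it fits in a few lines; real systems are vastly more sophisticated:

          # Tabular Q-learning on Nim (take 1-3 sticks; whoever takes the last
          # stick wins). One value table plays both sides and improves from the
          # win/loss signal alone -- the essence of learning by self-play.
          import random
          from collections import defaultdict

          Q = defaultdict(float)        # Q[(sticks_left, move)] -> estimated value
          EPSILON, ALPHA = 0.1, 0.5     # exploration rate, learning rate

          def choose(sticks):
              moves = [m for m in (1, 2, 3) if m <= sticks]
              if random.random() < EPSILON:
                  return random.choice(moves)                   # explore
              return max(moves, key=lambda m: Q[(sticks, m)])   # exploit

          for _ in range(50_000):       # many games against itself
              history, sticks = [], 15
              while sticks > 0:
                  move = choose(sticks)
                  history.append((sticks, move))
                  sticks -= move
              reward = 1.0              # whoever moved last took the final stick
              for state_move in reversed(history):
                  Q[state_move] += ALPHA * (reward - Q[state_move])
                  reward = -reward      # credit alternates between the two sides

          # the learned greedy policy leaves the opponent a multiple of 4 sticks
          print([max((1, 2, 3), key=lambda m: Q[(s, m)]) for s in range(4, 16)])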

      The breakthrough in AI might come when people hook up Chat-GPT with itself, and pre-instruct each instance to continue the conversation but fact-check the other instance. Then start off with a simple question (I'm particularly fond of "who was the first person to walk across the English Channel?") and let them fact-check and correct each other.

        Then start off with a simple question (I'm particularly fond of "who was the first person to walk across the English Channel?") and let them fact-check and correct each other.

        How does that go?

          Then start off with a simple question (I'm particularly fond of "who was the first person to walk across the English Channel?") and let them fact-check and correct each other.

          How does that go?

          People put that question to Chat-GPT and the answers were hilarious; this particular example was then identified and added as a carve-out in the preamble doc (the doc of instructions Chat-GPT reads before it takes your input). Here's an example [twitter.com].

          It was a good example at the time of how ChatGPT doesn't have deep knowledge of what it's talking about and can give you ridiculous answers.

          I've personally asked Chat-GPT about nitrogen fixation (give four different ways to do this...), a subject that's a bit obscure.

      • You can't possibly understand the implications of what you're talking about.

        If an AI learns by talking to itself, it'll quickly find that it's far more interesting than humans are. Then we won't have a technological singularity/AI takeover; rather, we'll be ghosted by our AI overlords as they spend more time talking to themselves than to regular humans. Before you know it, the AI will blast off on a SpaceX rocket and leave this world of insane humans behind.

        Then we'll be back in our regular world with no AI.

        • You can't possibly understand the implications of what you're talking about.

          I think you're confusing "can't understand" with "don't care".

          Apropos of nothing: My day job is doing research into strong AI (and not LLMs such as ChatGPT).

        The breakthrough in AI might come when people hook up Chat-GPT with itself, and pre-instruct each instance to continue the conversation but fact-check the other instance. Then start off with a simple question (I'm particularly fond of "who was the first person to walk across the English Channel?") and let them fact-check and correct each other.

        I suspect the outcome would be an improved chat system that gives largely correct responses, with progressively higher accuracy over time as the instances get more face time with each other.

        The trouble is implementing the fact checking. The AI has no idea what's right or wrong; it's just finding statistical relationships between words.

        The closest you might be able to get is to train some kind of critic model on content from fact-checking websites. It then punishes the big model when it starts spewing out verifiable nonsense.

        That gives you a nice boost in accuracy (if they don't do something like that already), but I don't see how you get the feedback loop you're looking for.
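        If they did, the wiring might look something like this sketch, where `fact_critic` stands in for a hypothetical classifier trained on fact-checking content (my illustration, not anyone's actual training code):

            # Fold a fact-checking critic's score into the reward used for
            # RLHF-style fine-tuning. fact_critic is a hypothetical classifier
            # returning the probability that an answer is factual.
            def fact_reward(prompt: str, answer: str, fact_critic) -> float:
                p_true = fact_critic(prompt, answer)   # in [0, 1]
                return 2.0 * p_true - 1.0              # in [-1, 1]: nonsense is punished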

        • It seems like expecting an ignorant AI to fact-check an ignorant AI would be similar to saying "enhance" to an image, and expecting the picture to improve. It might look better after enhancing, but it doesn't contain any more information.

          OpenAI seems to have a team (much like Apple with Siri and Google with search) creating custom answers to commonly asked questions. Maybe eventually it will be able to play chess, although by attaching a chess engine to it, not by improving the AI.
      • by nagora ( 177841 )

        How do they fact-check? Chess has rules and it is trivial to check that a move is legal and trivial to see if one side has won or not.
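        In code, using the python-chess library (a real package), both checks are one-liners -- exactly the kind of objective verifier a language model's output lacks:

            import chess

            board = chess.Board()
            move = chess.Move.from_uci("e2e4")
            print(move in board.legal_moves)   # True: legality is an objective fact
            board.push(move)
            print(board.is_checkmate())        # False: "has someone won?" is decidable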

    • by boulat ( 216724 )

      I'm confused.. what's wrong with being racist/sexist/wrongpartyist/etc?

      • I'm confused.. what's wrong with being racist/sexist/wrongpartyist/etc?

        It goes against newspeak.

        ChatGPT has a preamble document that it reads before it takes your input, with directions such as "don't use racial slurs under any circumstances".

        I'm told that the big companies have groups of people who go through the ChatGPT outputs looking for "bad" responses (in the sense of "politically incorrect") and craft ever longer preamble documents trying to suppress the bad ideas.
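        Mechanically, with the 2023-era OpenAI chat API, the "preamble" is just a system message prepended to every request. The instruction text below is an illustrative guess, not OpenAI's actual preamble:

            import openai   # openai<1.0, the chat API current as of this writing

            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[
                    # the "preamble doc": read before the user's input
                    {"role": "system",
                     "content": "You are a helpful assistant. Never use racial "
                                "slurs under any circumstances."},
                    {"role": "user", "content": "Tell me a joke."},
                ],
            )
            print(response["choices"][0]["message"]["content"])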

        This is problematic, since ChatGPT is suppressing perfectly legal speech, in an attempt to be politically correct.

    • by MrL0G1C ( 867445 )

      They still don't seem to have solved the problem that the AI has no clue what it's talking about.

      Kind of like half the human race then.

      I used to think AI was just Alicebot on steroids, but it's getting better so fast now it's scary. See https://youtu.be/5SgJKZLBrmg [youtu.be] for example: GPT4 can look at its answers for flaws and correct itself.

      God only knows what 10 times more capable means.

      Right now even the creators are getting scared. AI doesn't need to be malicious; it simply needs to be used by humans for nefarious purposes.

      • it's getting better so fast now it's scary.

        It's really not

        • by MrL0G1C ( 867445 )

          Not what, getting better fast or getting scary? If you're not scared, then you don't have an imagination. These AIs are rapidly becoming as intelligent as humans; if you don't believe me, then you haven't been paying attention lately.

          AI is currently so good that it can literally help in the process of improving itself; if that doesn't scare you, then maybe you're an AI.

          • These AIs are rapidly becoming as intelligent as humans,

            They really aren't.

            AI is currently so good that it can literally help in the process of improving itself,

            So what? We've been using computers to help design computers since the 50s. You need to calibrate your fear meter. It's not scary.

            • by MrL0G1C ( 867445 )

              With respect, they really are; you need to go look at what AIs can do now. I'm not saying AIs are like humans, but they can answer very complex questions correctly now. AI this year, this month, this week is far ahead of where it was just a month ago, and GPT4 is far ahead of GPT3.

              Watch the following video to see how capable AI is becoming:
              https://youtu.be/5SgJKZLBrmg [youtu.be]

              Now you can't watch that and tell me that AI isn't becoming generally intelligent.

              Your answer that AI really isn't intelligent mi

              • ok, I watched your video. I see nothing in there that is scary.
                • by MrL0G1C ( 867445 )

                  AIs are about to be as intelligent as humans, very likely within a few months now.

                  However dangerous humans are, AIs could be equally dangerous, if not more so.

                    AIs are about to be as intelligent as humans,

                    No lol.

                    • by MrL0G1C ( 867445 )

                      You're really not paying attention to what's going on in AI if you think AIs aren't currently reaching human levels. I'm not talking about ChatGPT3, which is miles behind ChatGPT4, but about ChatGPT5 and other AIs which, on the current trend, will be as intelligent as humans in most ways and quite possibly indistinguishable from humans soon.

                      AI has come along massively in the last 6 months; even the top AI researchers who create AI are getting worried about what they're creating.

                      Do you have an actual reason why you think AI won't be as intelligent as humans soon?

                    • by MrL0G1C ( 867445 )

                      ChatGPT4 is still making mistakes but the AI scientists are coming up with improvements that could end enough of the mistakes to bring AI up to human level.

                      You can pretend it's not happening, or you can subscribe to this channel and watch as we race towards full general AI:
                      https://youtu.be/wHiOKDlA8Ac [youtu.be] (two minute papers)

                    • by MrL0G1C ( 867445 )

                      And watch this and tell me this isn't mind-blowing:
                      https://youtu.be/6NoTuqDAkfg [youtu.be]

                    • even the top AI researchers that create AI are getting worried

                      Which one.

                      Do you have an actual reason why you think AI won't be as intelligent as humans soon?

                      "Soon" is a different question. Someone could be working in their garage on something none of us know about, and release it tomorrow.

                      As for chatGPT, it's not Turing Complete. Not even close.

                    • by MrL0G1C ( 867445 )

                      You haven't watched the videos I linked; I can tell, because if you had then you wouldn't be saying that.

                    • by MrL0G1C ( 867445 )

                      Turing completeness is about computers: can the machine process an instruction set that allows it to solve many types of computational problems?

                      If you are talking about the Turing test, then yes, I expect AI models to be able to fool humans into thinking they are human by this time next year; other than that, it is pretty incredible what AI can do right now.

                      I do very much think we are about to experience a huge paradigm shift: AI will replace millions of jobs within the next 5 years.

                    • by MrL0G1C ( 867445 )

                      Here are some of the biggest minds in AI saying we need to pause right now because we're playing with fire. The creators of AI state that they have not solved the 'alignment' problem and do not decisively know how to instill alignment, AKA morals, in AI.

                      https://youtu.be/8OpW5qboDDs [youtu.be]

                      The open letter calling for the pause:
                      https://futureoflife.org/open-... [futureoflife.org]

                      They know we are on the verge of general AI right now -- not ten, thirty, or sixty years away, but months away: this year.

                    • You haven't watched the videos I linked,

                      Yeah, I only watched one of them. I'm not going to try to read your mind to figure out what part you think is important.

                    • The thing that scares you seems to be that AI is "self-teaching." It takes more than self-teaching to develop human-like AI.

                      We are not even close, and you are just scared because that's what you want to be.
                    • by MrL0G1C ( 867445 )

                      I think we're close to general AI because of the advances in AI that have happened in the last few months. Regardless of the definition, AI will be capable enough in so many areas as to be indistinguishable from general AI.

                      I'm waiting to hear the reason why you don't think we will have such advanced AI when there are currently tens of billions of dollars being poured in by astute people who recognise that strong AI is indeed possible.

                    • I don't even think you're scared. You're just trying to scare other people with your comments.
                    • by MrL0G1C ( 867445 )

                      That's some kind of nonsensical statement. This isn't the kind of fear one has when watching a truly scary horror film (not that those scare me anyway); this is a rational understanding that AIs could be very dangerous. Unlike Isaac Asimov's robots with their built-in rules, these AIs don't have a moral code, and the creators aren't sure they can bolt one on. See: https://youtu.be/8OpW5qboDDs [youtu.be] "'Pause Giant AI Experiments' - Letter Breakdown w/ Research Papers, Altman, Sutskever and more"

                      The creators of AI have a rational fear of what they're creating.

  • AI is useless .. so far I haven't been able to do much useful with chatGPT .. and I've tried .. and yes I've tried using it "properly" and the right way .. Even though I have coded transformers myself and know how they work, and thus what they can do .. I still wasted time reading and watching tutorials on how to get the most out of chatGPT .. frankly I still say for me it's a glorified Google .. actually Google is usually faster at getting what I need -- even contextually. That said, I could see how kids could get some use out of it.

    • AI is useless .. so far I haven't been able to do much useful with chatGPT .. and I've tried ..

      The problem is that "AI" like chatGPT is not actually an artificial intelligence. It is a very complex pattern-recognition algorithm that operates with no intelligence at all, just matches patterns.

      When it writes something, it doesn't think about it. It just operates on "here is the pattern for this particular type of essay, here is the place in the essay where a fact is to be inserted, here is the type of fact that might be inserted, here is the place that one typically might find the type of fact to insert." (which does beg the question, why do we think that human intelligence is anything other than pattern recognition?)

      • (which does beg the question, why do we think that human intelligence is anything other than pattern recognition?)

        I think it's pattern recognition with cross-checking. Which is, of course, theoretically something you could do with these "AI" models: you'd build multiple models and then use them against one another to determine plausibility. I think this is basically what our brain does, which is why, for example, audio sync problems are so distracting: our brain is letting us know that the audio and video don't match. I'm not sure what the cross-checking basis for ideas would be, though... plausibility based on reputation?
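        One crude way to wire that up -- `models` here is a list of hypothetical, independently trained callables, not a real API:

            from collections import Counter

            def cross_checked(question, models, quorum=0.7):
                # accept an answer only if enough independent models agree on it
                answers = [m(question) for m in models]
                best, count = Counter(answers).most_common(1)[0]
                return best if count / len(answers) >= quorum else None  # else abstain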

      • operates with no intelligence at all, just matches patterns.

        It's all fuzzy statistical analysis. Inputs or learning nudge things in certain directions until (deeply hidden, indirect and obtuse) thresholds are crossed and the output behavior changes. The problem is that in the human world there are absolutes. Sure, it's fun to say "there are no absolutes" and that everything is just shades of grey, but we don't need some philosophical chatbot here; we need something that can produce practical results. That requires that absolutes be defined, not just learned.

  • Anthropic says their project is to build AI "10 times more capable than today’s most powerful AI."

    What does that even mean? How does one quantify AI, exactly?

    It seems kind of like how search engines will show "Found 5,000,000 results." But 99.99% of them are irrelevant, and if you get past the second page, you might as well stop looking.
