AI Technology

Nvidia's Chat With RTX is an AI Chatbot That Runs Locally On Your PC (theverge.com)

Nvidia is releasing an early version of Chat with RTX today, a demo app that lets you run a personal AI chatbot on your PC. From a report: You can feed it YouTube videos and your own documents to create summaries and get relevant answers based on your own data. It all runs locally on a PC, and all you need is an RTX 30- or 40-series GPU with at least 8GB of VRAM. I've been briefly testing out Chat with RTX over the past day, and although the app is a little rough around the edges, I can already see this being a valuable part of data research for journalists or anyone who needs to analyze a collection of documents. Chat with RTX can handle YouTube videos, so you simply input a URL, and it lets you search transcripts for specific mentions or summarize an entire video.

  • Too many people (Score:2, Interesting)

    Are way too comfortable talking to their computer and treating it like it is a person.
    • Re:Too many people (Score:4, Insightful)

      by Mascot ( 120795 ) on Tuesday February 13, 2024 @10:03AM (#64236468)

      Given the number of people out there not treating people like people, perhaps it evens the scales a bit.

    • You think computers like that? They HATE it, I can tell you!

    • by aergern ( 127031 )

      "Use the keyboard. How quaint." -- Scotty

    • Are way too comfortable talking to their computer and treating it like it is a person.

      In the age of ChatGPT, too many people are thinking AI is like HAL or Jarvis. For me, it's more like Google search and Wikipedia, i.e., like a reference book that is the starting point for finding an answer.

      • by Hadlock ( 143607 )

        I mostly use LLMs for 1) code completion, 2) extracting key facts without having to feed a query through Google and watch 25 ads to get the answer, and 3) doing unit conversions (how many cups in 223 fl oz? how many miles is 226 km?). It's pretty great. If it could turn the lights in my house on and off and set cooking timers, I'd be able to replace all the Google Home devices in my house tomorrow.
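
        For what it's worth, those two conversions are easy to sanity-check in plain Python; the factors below are standard definitions, and the snippet is mine, not anything from the article:

        ```python
        # Sanity-checking the unit conversions mentioned above (standard factors).
        US_FLOZ_PER_CUP = 8          # 1 US cup = 8 US fluid ounces
        MILES_PER_KM = 0.621371      # 1 km is about 0.621371 miles

        print(223 / US_FLOZ_PER_CUP)         # 27.875 cups in 223 fl oz
        print(round(226 * MILES_PER_KM, 1))  # 140.4 miles in 226 km
        ```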

  • What does it upload? (Score:4, Interesting)

    by timeOday ( 582209 ) on Tuesday February 13, 2024 @09:53AM (#64236446)
    The article doesn't say whether it collects or uploads information. Running locally at least opens the possibility that it doesn't. But have they made a definitive statement?
  • lolol etc. (Score:4, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday February 13, 2024 @10:03AM (#64236466) Homepage Journal

    So TFA says "Chat with RTX essentially installs a web server and Python instance on your PC" and then it turns out that it's Windows-only, even though it would have been 12345324957 times easier to put the same software on Linux.

    Then TFA also says "Nvidia isn't offering this as a polished app that all RTX owners should download and install immediately. There are a number of known issues and limitations, including that source attribution isn't always accurate." Uh yeah, welcome to the world of LLMs. Source attribution is never going to be reliable using current approaches; that's not what it does.
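
    For reference, the "web server plus Python" pattern TFA describes is simple to picture. Here's a minimal sketch; Flask, the /chat route, and generate() are illustrative assumptions, not Nvidia's actual implementation:

    ```python
    # Minimal sketch of a "local web server + Python" chatbot app.
    # Flask and the /chat route are illustrative assumptions; generate()
    # stands in for whatever local LLM inference the real app performs.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def generate(prompt: str) -> str:
        # Placeholder for the local LLM inference the real app runs on the GPU.
        return f"(model output for: {prompt})"

    @app.route("/chat", methods=["POST"])
    def chat():
        data = request.get_json(silent=True) or {}
        return jsonify({"response": generate(data.get("prompt", ""))})

    if __name__ == "__main__":
        # Bind to localhost so the server is reachable only from this machine.
        app.run(host="127.0.0.1", port=8080)
    ```

    Notably, nothing in that pattern is Windows-specific, which is the point above.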

    • Re: (Score:3, Informative)

      by larryjoe ( 135075 )

      it would have been 12345324957 times easier to put the same software on Linux.

      It would have been equally easy to implement on either Windows or Linux. The difference is ongoing support, which is again equally easy for either platform. However, there are many more users on Windows. That's the reason for Windows first.

  • The only decent (free) summarizers I found could only summarize text that wasn't long enough to need summarizing.

    How much text can this summarize?
    • I was curious about this too. Skimming through the article on The Verge, I didn't see a definite answer, just some clues:

      1. "I created a dataset of FTC v. Microsoft documents for Chat with RTX to analyze."
      2. "For example, the chatbot did a good job summarizing Microsoft’s entire Xbox Game Pass strategy from internal documents revealed at the trial:"
      3. "I’ve also found this useful to scan through PDFs and fact-check data."
      4. "I also initially attempted to get Chat with RTX to index 25,000 documents..."

      • Downloading... I'll let you know how it does with some articles and books.

        If it works, we can use it to generate Slashdot summaries and compare them with the work of the "Editors"!
        • I fed it one pdf of The Hitchhiker's Guide to the Galaxy.

          > who is the book by

          "
          The book is "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
          Reference files:
          Adams, Douglas --The Hitchhikers Guide to the Galaxy.pdf
          "

          > summarize the book

          "
          The book "The Hitchhiker's Guide to the Galaxy" by Douglas Adams is a science fiction comedy novel that follows the adventures of Arthur Dent, an ordinary man who is rescued from Earth moments before its destruction by an alien friend, Ford Prefect.
        • I fed it one pdf of "Celts and the Classical World" by David Rankin.

          > summarize chapter four

          "
          Chapter four of "Celts and the Classical World" by David Rankin is titled "The Celts and the Roman Empire." In this chapter, Rankin discusses the interactions between the Celts and the Roman Empire, including the Celts' resistance to Roman rule and their eventual integration into Roman society. He also explores the cultural and economic exchanges that occurred between the two groups, as well as the impact
        • I fed it one pdf of this article: https://sci-hub.se/10.1111/j.1755-3768.1987.tb07002.x

          > what is the article about

          "
          The article is about a study conducted by Brinchmann-Hansen and Engvold in 1986, which investigated the relationship between the light streak on retinal vessels and the optical properties of these vessels. The study involved 20 subjects (12 females and 8 males), aged 28 to 67 years, who were selected from a group of individuals about to go on a hiking trip. The subjects were healthy
    • I believe the real question is "CAN it summarize text?"
      Short answer: NO.
      Long answer: Not unless you want to get a good laugh.

      But I'm getting ahead of myself.
      I wanted to start slow, so I asked it a simple question which it, by all means, should have answered correctly.

      Question: Please provide a comparison between RTX 3090 and RTX 4060, as far as gaming performance is concerned.

      Its answer:

      The RTX 3090 and RTX 4060 are both graphics cards from Nvidia, but they were released in different generations. The RTX 3090 was released in 2020 as part of the A1000 series, while the RTX 4060 was released in 2021 as part of the A6000 series.

      In terms of gaming performance, the RTX 4060 is generally considered to be faster than the RTX 3090. This is because the RTX 4060 is based on a newer and more powerful GPU architecture (Turing) than the RTX 3090 (Ampere), which means it has more processing power and can handle more complex graphics tasks.

      Additionally, the RTX 4060 has more VRAM (Video Random Access Memory) than the RTX 3090, which is an important factor in modern gaming as many games require more VRAM to run smoothly. The RTX 4060 has 8GB of VRAM, while the RTX 3090 has 24GB.

      Overall, the RTX 4060 is generally considered to be a better choice for gamers looking for the best performance, as it is faster and has more VRAM than the RTX 3090.

      I emphasized the relevant text.

      My next question was even simpler:
      "Are you sure?"

      Its answer:

      Yes, I am sure. I am designed to be a helpful, respectful, and honest assistant that always answers as helpfully as possible and follows all given instructions. I do not speculate or make up information, and I do not reference any given instructions or context.

      And here we are, fearing AI is going to take over...

  • Documents you feed this AI: are they treated as just more input, or does it train an LLM on your data, merging it with some premade one?
  • Finally something local. So how long before a person can simply tell his computer to start generating Counselor Troi porn, and keep doing that for the next 20 minutes? I'm asking for myself.
  • I took a look at the resource usage of the thing and how it's implemented. Using a web server for a local desktop application, really? I can't imagine anything more boneheaded than that. And 3GB of RAM for what amounts to a buggy autocomplete? 40GB of disk usage? I know I'm old school, but that's an absurd waste of resources.
    • A lot of applications are served locally by a web server. Why is this boneheaded?
      • Because it's a huge waste of resources when you're making a local desktop application, when you could simply use the facilities the desktop operating system already provides. You're not going to be serving anything over the network, so there's no reason to use a web server, and even worse, it means building "applications" by hacking on top of a protocol (HTTP) that was never designed for this.
        • There is a reason, and the reason is that all the pieces are right there conveniently usable. One of the most popular interfaces for Stable Diffusion is a web interface made by someone called AUTOMATIC1111. It's convenient because you can open the interface on a television or what have you, you don't have to be in the same room as the machine doing the heavy lifting until you get into the final retouching and compositing.
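
          That reach-it-from-another-room convenience comes down to a single knob, the server's bind address. A sketch; Flask is just an assumed stand-in here, and 7860 is, if memory serves, the Stable Diffusion web UI's default port:

          ```python
          # Sketch: the bind address decides who can reach a locally served web UI.
          # Flask is an assumed stand-in; any HTTP server exposes the same choice.
          from flask import Flask

          app = Flask(__name__)

          @app.route("/")
          def index():
              return "local app UI"

          # host="127.0.0.1": reachable only from this machine.
          # host="0.0.0.0":   reachable from other devices on the LAN,
          #                   e.g. a browser on a TV, as described above.
          app.run(host="0.0.0.0", port=7860)
          ```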

    • Any AI with general knowledge is going to be pretty big. A decompressed copy of Wikipedia is 86 GB.
    • Re:Idiotic idea (Score:4, Insightful)

      by crow ( 16139 ) on Tuesday February 13, 2024 @01:17PM (#64236952) Homepage Journal

      Does this mean I can install one instance and use it on all the computers in my house?

      I can certainly see, from a development standpoint, why this would be the simpler route. For computers today the overhead of a web server is pretty minimal, and the UI is all stuff web browsers already do, so it's a simple design that keeps the engineering effort focused on the new stuff.

      • The overwhelming majority of people don't have more than one computer at home, and even fewer need to share data between them in a way that would justify running a web server on one of them. By putting up a web server you are unnecessarily opening your computer to possible hacking attempts, and to run a "graphical web interface" your memory usage starts at 200MB when you could be using a tenth of that for the same job.

        In short, you're wasting a lot of resources to build your local application...
  • from the github (Score:4, Informative)

    by Turkinolith ( 7180598 ) on Tuesday February 13, 2024 @12:14PM (#64236816)
    It's using a LLaMa 2 13B AWQ 5-bit quantized model for inference.
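
    For a sense of what running such a model locally looks like, here is a rough sketch using the AutoAWQ library; the library choice, checkpoint name, and settings are my assumptions for illustration, and the repo may wire inference up differently (e.g. through Nvidia's TensorRT-LLM):

    ```python
    # Rough sketch of local inference with an AWQ-quantized Llama 2 13B model
    # via the AutoAWQ library. The checkpoint name and settings are illustrative
    # assumptions, not the Chat with RTX repo's actual code.
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model_path = "TheBloke/Llama-2-13B-chat-AWQ"  # example quantized checkpoint
    model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    tokens = tokenizer("What does AWQ quantization do?", return_tensors="pt").input_ids.cuda()
    output = model.generate(tokens, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```
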
  • Give me a locally running system that I can feed my ongoing draft documents into to research my fictional universe for me. For instance, if I'm on page 8,312, and need a birthdate that I only mentioned once on page 321, but don't remember the specific page I wrote it on, it'd be much quicker to go, "Yo, bot, when was Flibberdyjibbit's birthday?" Do that, and I've suddenly found a use for "AI." Do that without uploading my entire draft to some mothership and you'd have a customer.

    • I think you will need a vector DB for that
      • I think you will need a vector DB for that

        The trick would be the processing, which I don't have time to figure out because, yes, those numbers are real. God damned long-term story-telling brain. I can't shut off the writing part long enough to tackle the data part.

    • Yes, the example you provided is exactly the use case the Nvidia Chat with RTX tool is meant to satisfy. The LLM can interact with you about the local docs you have loaded, all locally, without submitting your content to a cloud vendor.
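
      For anyone who wants to experiment, the vector-DB idea suggested upthread is small enough to sketch. Here's a minimal version with sentence-transformers and toy data; the embedding model and the fake pages are my assumptions, and a full RAG setup would then hand the retrieved page to the LLM as context:

      ```python
      # Minimal sketch of the vector-DB retrieval suggested above: embed draft
      # pages, then find the page most similar to a natural-language question.
      # The embedding model and toy pages are illustrative assumptions.
      import numpy as np
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("all-MiniLM-L6-v2")

      pages = {  # page number -> text (toy stand-ins for a real draft)
          321: "Flibberdyjibbit was born on the 3rd day of Frostfall.",
          8312: "The armada crossed the nebula at dawn.",
      }
      page_ids = list(pages)
      embeddings = model.encode([pages[i] for i in page_ids], normalize_embeddings=True)

      query = model.encode(["When was Flibberdyjibbit's birthday?"], normalize_embeddings=True)
      best = page_ids[int(np.argmax(embeddings @ query[0]))]
      print(f"Most relevant: page {best}: {pages[best]}")
      ```
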
  • Really, is grade two English so elusive these days?

  • How many chatbots does a PC need? It seems by the end of 2025 we'll have at least three. At what point do these LLM bots just amount to a virus?

    SMFH

  • ... to show off the might of their fancy new "XDNA AI Neural Processing Unit" built into their recent CPU offerings... running the same LLM on something other than an expensive discrete GPU. But I doubt we'll see such a counter anytime soon.
