Nvidia continues to invest in its AI initiatives, and thanks to its latest update, ChatRTX is no exception.
According to the tech giant, ChatRTX is a “demo app that lets you personalize a GPT large language model (LLM) connected to your own content.” That content consists of local documents, files, and folders on your PC, and the app essentially builds a custom AI chatbot from that information.
Because it doesn't require an internet connection, users can quickly get answers to queries that may be buried in their computer files. With the latest update, ChatRTX supports even more data and more LLMs, including Google's Gemma and ChatGLM3, an open bilingual (English and Chinese) LLM. It also adds local photo search and supports Whisper, an AI automatic speech recognition system that lets users talk to ChatRTX by voice.
Nvidia powers ChatRTX's AI using TensorRT-LLM software and RTX graphics cards. It's also more secure than online AI chatbots because it runs locally. You can download ChatRTX here to try it for free.
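To give a rough sense of the retrieval idea behind a local, file-grounded assistant like ChatRTX, here is a minimal sketch. It is not Nvidia's implementation (ChatRTX uses TensorRT-LLM and real embedding models); the bag-of-words scorer, the `retrieve` helper, and the sample documents are all illustrative stand-ins.

```python
# Toy illustration of the retrieval step in a local, RAG-style assistant:
# index text snippets from local files, then pull the best match for a
# query. ChatRTX itself uses TensorRT-LLM and embeddings; this simple
# bag-of-words cosine scorer is only a stand-in for the concept.
import math
import re
from collections import Counter

def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z0-9']+", text.lower())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, snippets):
    """Return the snippet most similar to the query."""
    qvec = Counter(tokenize(query))
    return max(snippets, key=lambda s: cosine(qvec, Counter(tokenize(s))))

# Hypothetical "documents" that might live on a user's PC.
docs = [
    "Quarterly budget spreadsheet for the marketing team.",
    "Recipe for sourdough bread with a long cold ferment.",
    "Meeting notes: GPU driver rollout planned for May.",
]
print(retrieve("when is the GPU driver rollout?", docs))
# → Meeting notes: GPU driver rollout planned for May.
```

In a real system the matched snippet would then be fed to the LLM as context, so answers are grounded in the user's own files rather than in scraped web data.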
Can AI escape ethical dilemmas?
The concept of an AI chatbot that uses your PC's local data rather than training on (stealing) other people's online work is pretty interesting. It appears to sidestep the ethical dilemma of using or hoarding copyrighted works without permission. It also appears to solve another long-standing problem plaguing many PC users: finding files long buried in File Explorer, or at least the information locked inside them.
However, an obvious question is how such a limited data pool might negatively impact the chatbot. Unless the user is particularly skilled at training AI, this could become a serious problem down the line. Of course, using it solely to search for information on your PC is perfectly fine and probably the intended use.
But the point of an AI chatbot is to have unique and meaningful conversations. Perhaps there was a time when that was possible without rampant theft, but companies have long powered their AIs with words scraped from other sites, and now the two are irrevocably tied together.
Given that data theft has been a key part of making chatbots balanced enough to avoid feedback loops, Nvidia's approach could be a middle ground for generative AI. We hope Nvidia gets it right, because once fully developed, ChatRTX may prove that no ethical violations are needed to power and shape these models.