
- Hyperlink runs entirely on local hardware, keeping all searches private.
- The app indexes large data folders on RTX PCs in minutes
- Nvidia's latest optimizations double Hyperlink's LLM inference speed
Nexa.ai’s new “Hyperlink” agent introduces an approach to AI search that runs entirely on local hardware.
Designed for Nvidia RTX AI PCs, the app works as an on-device assistant that converts personal data into structured information.
According to Nvidia, instead of sending queries to remote servers, Hyperlink processes everything locally, which delivers both speed and privacy.
Private intelligence at local speed
Hyperlink has been evaluated on an RTX 5090 system, where it reportedly delivers up to 3x faster indexing and 2x faster large language model (LLM) inference compared with previous versions.
These metrics suggest that it can scan and organize thousands of files on a computer more efficiently than most existing AI tools.
Hyperlink does not simply match search terms; it interprets the user's intent by applying an LLM's reasoning capabilities to local files, locating relevant material even when file names are obscure or unrelated to the actual content.
This shift from static keyword searching to contextual understanding aligns with the growing integration of generative AI into everyday productivity tools.
The system can also connect related ideas from multiple documents, offering structured answers with clear references.
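Nexa.ai has not published Hyperlink's internals, but the behavior it describes, ranking files by meaning rather than by name, is typically built on local embeddings. Below is a minimal illustrative sketch of that idea in Python, assuming the open-source sentence-transformers library and a hypothetical documents folder; it is not Hyperlink's actual code.

```python
# Illustrative sketch only: embed documents once, embed the query,
# and rank by semantic similarity, so matches do not depend on
# file names or exact keywords. All of this runs on-device.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model


def index_folder(folder: str) -> tuple[list[Path], np.ndarray]:
    """Read every text file under a folder and embed its contents locally."""
    paths = sorted(Path(folder).expanduser().glob("**/*.txt"))
    texts = [p.read_text(errors="ignore") for p in paths]
    vectors = model.encode(texts, normalize_embeddings=True)
    return paths, vectors


def search(query: str, paths: list[Path], vectors: np.ndarray, k: int = 5):
    """Rank files by cosine similarity between query and document embeddings."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q  # cosine similarity (embeddings are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [(paths[i], float(scores[i])) for i in top]


# Hypothetical usage: the folder path and query are placeholders.
paths, vectors = index_folder("~/Documents")
for path, score in search("notes from last week's budget meeting", paths, vectors):
    print(f"{score:.2f}  {path}")
```

A query like the one above can surface a file named something unhelpful like `untitled(3).txt`, because the ranking is driven by the document's content rather than its name.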
Unlike most cloud-based assistants, Hyperlink keeps all user data on the device: the files it scans, from PDFs and slides to images, never leave the computer, so no personal or sensitive information is exposed.
This model appeals to professionals who handle sensitive data and still want the performance benefits of generative AI.
Users gain access to quick contextual responses without the risk of data exposure that accompanies remote storage or processing.
Nvidia's optimization for RTX hardware extends beyond search performance: the company claims that retrieval-augmented generation (RAG) now indexes dense data folders up to three times faster.
A typical 1 GB collection that previously took almost 15 minutes to process can now be indexed in about 5 minutes.
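Those figures are straightforward to sanity-check against the claimed speedup, as in this quick back-of-the-envelope calculation using only the numbers quoted above:

```python
# Sanity check of the quoted indexing figures for a 1 GB folder.
size_mb = 1024                        # 1 GB collection
before_min, after_min = 15, 5         # minutes to index, before and after

speedup = before_min / after_min      # 3.0x, matching the "up to 3x" claim
print(f"{speedup:.0f}x faster")
print(f"throughput: {size_mb / (before_min * 60):.1f} MB/s "
      f"-> {size_mb / (after_min * 60):.1f} MB/s")
```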
Improved inference speed also means that answers appear more quickly, easing everyday tasks such as meeting preparation, study sessions, and report analysis.
By pairing local reasoning with GPU acceleration, Hyperlink offers both convenience and control, making it a useful AI tool for people who want to keep their data private.