Exo: how an LLM solution barely a few months old could revolutionize the way inference is performed




  • Exo supports Llama, Mistral, LLaVA, Qwen, and DeepSeek
  • It runs on Linux, macOS, Android, and iOS, but not Windows
  • AI models that need 16 GB of RAM can run on two laptops with 8 GB each

Running large language models (LLMs) generally requires expensive, high-performance hardware with substantial memory and GPU power. The Exo software now offers an alternative: distributed artificial intelligence (AI) inference across a network of devices.

Exo lets users pool the compute power of multiple computers, smartphones, and even single-board computers (SBCs) such as Raspberry Pis to run models that would otherwise be out of reach.
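The core idea behind this kind of distributed inference is to split a model's layers across devices in proportion to how much memory each one has, so that a model too large for any single machine still fits across the group. The sketch below illustrates that partitioning idea in Python; the function name and exact scheme are illustrative assumptions, not Exo's actual API.

```python
def partition_layers(total_layers, device_memory_gb):
    """Assign each device a contiguous slice of model layers,
    sized in proportion to that device's available RAM (in GB).

    Hypothetical illustration of memory-weighted partitioning;
    not Exo's real interface.
    """
    total_mem = sum(device_memory_gb)
    ranges = []
    start = 0
    for i, mem in enumerate(device_memory_gb):
        if i == len(device_memory_gb) - 1:
            end = total_layers  # last device takes the remainder
        else:
            end = start + round(total_layers * mem / total_mem)
        ranges.append((start, end))
        start = end
    return ranges

# Two 8 GB laptops splitting a 32-layer model that needs ~16 GB of RAM:
print(partition_layers(32, [8, 8]))  # → [(0, 16), (16, 32)]
```

Each device then only needs to hold and run its own slice of layers, passing activations to the next device in the chain, which is how two 8 GB machines can jointly serve a 16 GB model.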
