Local LLM Plugin

Description

Documentation: Link

Demo Project: GitHub , EXE

Demo Video: English, Japanese

This product is designed to integrate AI chatbots into games without using online services.

This plugin lets you load large language models (LLMs) in GGUF format and run them in Unreal Engine.

Run locally and within BP/C++
  • Runs offline on a local PC.

  • Just add one component to your BP and you are ready to use it.

  • No Python or dedicated server is required.

Useful features
  • Works asynchronously; additional questions can be asked at any time while an answer is being generated.

  • You can save and load a "state" that preserves the context of a conversation, allowing you to resume a previous conversation later.

  • Supports multibyte characters.

Hardware Requirements

Requires a CPU that supports AVX, AVX2 and FMA.

The following CPUs should work, but please try our free demo EXE to check whether your CPU is supported.

  • Intel: 4th Generation (Haswell) and above

  • AMD: All Ryzen series

Supported models

The following models have been tested and confirmed to work:

- llama-3 7B

- Phi-3-medium

- Gemma 7B

- Gemma-2 9B

- Mistral 7B

- ArrowPro 7B KUJIRA (Japanese model)

- ELYZA JP 8B (Japanese model)

Included formats

  • Unreal Engine
