llama.go is like llama.cpp, but in pure Golang. The code of the project is based on the legendary ggml.cpp framework of Georgi Gerganov, written in C++ with the same attitude to performance and elegance. The models store their weights as FP32, so you'll need at least 32 GB of RAM (not VRAM or GPU RAM) for LLaMA-7B. Double that to 64 GB for LLaMA-13B.
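
Those figures follow directly from FP32 storage: roughly 7 billion parameters at 4 bytes each is about 26 GiB of weights alone, before activations and runtime overhead. A minimal back-of-the-envelope sketch in Go (the helper and parameter counts below are illustrative only, not part of the llama.go codebase):

    package main

    import "fmt"

    // fp32WeightBytes returns the bytes needed to hold the given number
    // of FP32 weights in memory (4 bytes per weight). Illustrative only.
    func fp32WeightBytes(params float64) float64 {
        return params * 4
    }

    func main() {
        models := map[string]float64{
            "LLaMA-7B":  7e9,  // approximate parameter count
            "LLaMA-13B": 13e9, // approximate parameter count
        }
        for name, params := range models {
            gib := fp32WeightBytes(params) / (1 << 30)
            fmt.Printf("%s: ~%.0f GiB of weights alone\n", name, gib)
        }
    }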

Features

  • Tensor math in pure Golang
  • Implement LLaMA neural net architecture and model loading
  • Test with smaller LLaMA-7B model
  • Be sure Go inference works exactly the same way as C++
  • Let Go shine! Enable multi-threading and messaging to boost performance (see the sketch after this list)
  • Cross-platform compatibility with Mac, Linux and Windows
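
The project's own kernels are not shown here; as a rough illustration of how goroutines can parallelize tensor math in pure Go, here is a minimal row-parallel matrix-vector multiply. Function names and shapes are assumptions for the sketch, not the project's actual API:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // matVec multiplies a rows×cols matrix (row-major) by a vector,
    // splitting the rows across one goroutine per CPU core.
    // Illustrative sketch only — not llama.go's real kernel.
    func matVec(mat, vec []float32, rows, cols int) []float32 {
        out := make([]float32, rows)
        workers := runtime.NumCPU()
        chunk := (rows + workers - 1) / workers

        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            start, end := w*chunk, (w+1)*chunk
            if end > rows {
                end = rows
            }
            if start >= end {
                break
            }
            wg.Add(1)
            go func(start, end int) {
                defer wg.Done()
                for r := start; r < end; r++ {
                    row := mat[r*cols : (r+1)*cols]
                    var sum float32
                    for c, v := range vec {
                        sum += row[c] * v
                    }
                    out[r] = sum
                }
            }(start, end)
        }
        wg.Wait()
        return out
    }

    func main() {
        mat := []float32{1, 2, 3, 4, 5, 6} // 2×3 matrix
        vec := []float32{1, 1, 1}
        fmt.Println(matVec(mat, vec, 2, 3)) // prints [6 15]
    }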

License

MIT License

Additional Project Details

Programming Language

Go

Related Categories

Go Large Language Models (LLM)

Registered

2023-08-25