
Build Local LLM agent on Raspberry Pi 5

I built a local LLM agent on my Raspberry Pi 5 with an M.2 NVMe SSD and a PoE+ HAT.

An M.2 NVMe SSD can be attached to the Raspberry Pi 5's PCIe slot through a PCIe expansion board and an FFC cable, which gives a big improvement in storage throughput. With local large language models becoming more and more popular, I wanted to build my own local LLM server, so I installed Ollama on my Raspberry Pi 5 and pulled the llama3 and phi3 models to build my own AI agent. It works, although it is a little slow, so I have purchased a Hailo-8L AI co-processor for further development. I almost forgot the PoE HAT: it provides 5V@4.5A to the Raspberry Pi over an Ethernet cable connected to my Meert AYDLINLATMA PoE power supply (output: 48V/1A, 48W, 802.3at).
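Once Ollama is running, it serves a REST API on port 11434 by default, so the agent can be queried from a script or from another machine on the network. A minimal Python sketch of such a query (the model name and prompt are just examples, and it assumes an Ollama server is reachable at localhost):

```python
import json
import urllib.request

# Default Ollama endpoint; adjust the host if the server runs on another machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generate request for the Ollama REST API."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
# print(ask("llama3", "Hello from the Raspberry Pi!"))
```

Setting `"stream": False` makes Ollama return the whole reply in one JSON object instead of a stream of chunks, which keeps the client code short.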
It works fine, and next I am going to deploy a web UI such as lobe-chat on my local network.

Project photos: KZ-0066-01.jpg, KZ-0066-05.jpg, KZ-0066-02.jpg, KZ-0066-08.jpg, KZ-0066-09.jpg (5 of 10 image files, uploaded 06/18/2024).
