B4J Library Pnd_Ollama4j - a wrapper for the Ollama4j library - your local offline LLM, like ChatGPT

What is Ollama?
Ollama is a free and open-source project that lets you run various open-source LLMs locally.
For more info, check this link.

Ollama for Windows and system requirements

Download Ollama server

Top LLMs for Coding All Developers Should Know About

Short video (3 minutes) about Ollama on YouTube

When you finish the Ollama server installation, you can run the example app and download a model of your choice.
For now, you can chat from the application, and download and delete models.
When you choose a model and tag, check the DOWNLOAD SIZE (bottom right in the image below); some models are over 200 GB.
You can start with MODEL: mistral and TAG: 7b; the download size is 4.1 GB.
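Under the hood, downloading a model boils down to one REST call against the local Ollama server. Below is a minimal sketch of that request body in Python, assuming the server's default port 11434 and the `model` field used by current Ollama versions (older servers used `name`); the actual HTTP call is shown as comments because it needs a running server.

```python
import json

# Assumption: a local Ollama server listening on its default port.
OLLAMA_URL = "http://localhost:11434"

def pull_body(model: str) -> bytes:
    """Build the JSON body for Ollama's POST /api/pull endpoint."""
    return json.dumps({"model": model}).encode("utf-8")

body = pull_body("mistral:7b")
print(body.decode())  # {"model": "mistral:7b"}

# With the Ollama server running, the actual download request would be:
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL + "/api/pull", data=body,
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     for line in resp:  # one JSON status object per line while downloading
#         print(line.decode().strip())
```

Deleting a model works the same way via `DELETE /api/delete`, and `GET /api/tags` lists what is already installed.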

Hardware requirements:
RAM: 8 GB for running 3B models, 16 GB for 7B models, 32 GB for 13B models.
Disk space: 5 GB for installing Ollama itself, plus additional space for storing model data, depending on the models you use.
CPU: any modern CPU with at least 4 cores is recommended; for running 13B models, a CPU with at least 8 cores is recommended.
GPU (optional): a GPU is not required to run Ollama, but it can improve performance, especially when running larger models. If you have a GPU, you can also use it to accelerate training of custom models.

[Screenshot: choosing a model and tag, with DOWNLOAD SIZE at the bottom right]

Chat example:

[Screenshot: chat window]
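The chat feature presumably maps onto Ollama's `POST /api/chat` endpoint. Here is a sketch of the request body for a single user message; the model name is just the example suggested above, and `stream` is disabled so the server replies with one complete JSON answer.

```python
import json

# Sketch of the JSON body for Ollama's POST /api/chat endpoint.
def chat_body(model: str, prompt: str, stream: bool = False) -> bytes:
    """Build a chat request carrying a single user message."""
    payload = {
        "model": model,
        "stream": stream,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")

body = chat_body("mistral:7b", "Why is the sky blue?")
print(body.decode())
```

With `stream` false, the server's reply contains the assistant's answer under `message.content`; to keep a conversation going, append that assistant message and the next user message to the `messages` list.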
The wrapper is based on Ollama4j v1.0.90, from HERE.
This wrapper is in a very early stage of development, but you can try it.
Download the library from: https://www.dropbox.com/scl/fi/ejbr...ey=77w3vc48q4re1bmw80pkw5m0p&st=hornosqn&dl=0
 

Attachments

  • Ollama4jExample.zip
    8.9 KB