If you're not familiar with Ollama4j, first read the post "Ollama4j library - Pnd_Ollama4j - Your local offline LLM like ChatGPT" on www.b4x.com to set up the Ollama server and install one of the models.
Currently, there are five options: Raw Text, Find Bugs, Refactor, Optimize, and Explain. You can add more if you want; the source code is attached to this post.
I tested several models, and for B4X the best results came from qwen2.5-coder:14b, which produces fairly satisfactory output.
Since there are many different models, I haven't tested all of them, so it’s quite possible that some other model gives a better result.
The 14B model is the largest I can run on my PC (NVIDIA GTX 1080 Ti with 11 GB of VRAM and a Ryzen 3950X with 32 GB of RAM) while maintaining decent speed; larger models would probably give significantly better results.
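For readers who want to experiment outside B4X first: requests like the ones this tool makes ultimately go through Ollama's documented REST API. Below is a minimal Python sketch of that flow, assuming a local server on the default port 11434. The option-to-prompt wording is my own illustration, not taken from the attached source code.

```python
import json
import urllib.request

# Illustrative instruction prefixes for each option; the exact wording
# used by the attached tool may differ (assumption).
PROMPTS = {
    "Raw Text": "",
    "Find Bugs": "Find bugs in this B4X code:\n",
    "Refactor": "Refactor this B4X code:\n",
    "Optimize": "Optimize this B4X code:\n",
    "Explain": "Explain this B4X code:\n",
}

def build_request(option, code, model="qwen2.5-coder:14b"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": PROMPTS[option] + code,
        "stream": False,  # ask for one complete response instead of chunks
    }

def ask_ollama(option, code, host="http://localhost:11434"):
    """Send the prompt to a local Ollama server and return its reply text."""
    body = json.dumps(build_request(option, code)).encode("utf-8")
    req = urllib.request.Request(
        host + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The /api/generate response carries the model output in "response"
        return json.loads(resp.read())["response"]
```

For example, `ask_ollama("Explain", 'Log("Hello")')` would return the model's explanation of that line, provided the Ollama server is running and the model has been pulled.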