Edit: the solution is post #3. I don't know why I don't get the option to mark it.
I am working on an AI app with a B4J front end using Llama. The app is attached, but the LLM has been removed because it is too large (9.7 GB). It also requires Llama (the llama-cpp-python wheel) to be installed in your Python distro. The IDE hangs every 1-5 minutes, whether the app is running or not, and I have to force close it. This is on a new LG gram with an Intel(R) Core(TM) Ultra 7 155H (3.80 GHz) and 32 GB of RAM. No other program is affected. I have a pretty big B4J app that I opened after this started happening; I exercised it pretty vigorously and had no issues. I will also point out that this started happening well before I succeeded in getting the LLM to load. One clue: when I tried to delete the LLM so I could zip the project (the zip failed with error code -1), I couldn't. The file was locked by openjdk, which was very busy, as you can see from the Task Manager entry (sorry, no column titles; 11.7 is CPU usage), even though ALL B4J apps were closed.
I will also point out that this issue was happening with the built-in Python. I only switched to the global version when I tried to load Llama and the code failed, because the B4J-shipped version of Python doesn't have it. I HAVE had the LLM in the Files directory for a while, and it MAY (size?) have something to do with the issue.
When responding, please note that I know NOTHING about programming in Python. I am relying on Copilot to write that code.
If you want to get the app running, you can download the LLM, sqlcoder.Q4_K_M.gguf, HERE, and Llama is a precompiled wheel, llama_cpp_python-0.3.16-cp313-cp313-win_amd64.whl, available HERE. FYI, I was not able to get the wheel to install properly from the link. I had to download the file and then install it.
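For anyone stuck at the same point, this is roughly what the manual install looks like from a command prompt, assuming the downloaded wheel is sitting in the current folder and the global Python is 3.13 (the cp313 build the wheel targets):

rem install the downloaded wheel into the global Python 3.13
py -3.13 -m pip install llama_cpp_python-0.3.16-cp313-cp313-win_amd64.whl
rem quick sanity check that the module imports
py -3.13 -c "import llama_cpp; print(llama_cpp.__version__)"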
It is a working AI app, but I will tell you now the query results are S$%#. Copilot identified the problem right away: it says the model is hallucinating. It is also slow, about 1 minute for a useless answer, but I am using a CPU-only wheel because I have no graphics card. If anybody tries it with a hardware-accelerated wheel, I would love to know the results.
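For reference, here is a minimal sketch of how the Python side can load and query the model through llama-cpp-python. The model path, thread count, and prompt are my assumptions for illustration, not the exact code Copilot generated for the app:

from llama_cpp import Llama

# Path and settings are assumptions; point model_path at wherever sqlcoder.Q4_K_M.gguf lives.
llm = Llama(
    model_path="sqlcoder.Q4_K_M.gguf",
    n_ctx=2048,       # context window; larger values use more RAM
    n_threads=8,      # CPU-only wheel, so generation speed depends heavily on this
    verbose=False,
)

# SQL-oriented models behave much better when the prompt contains the real table schema;
# a bare question like this is exactly the kind of input that invites hallucinated SQL.
prompt = "### Task\nGenerate a SQL query that answers: how many customers are there?\n### SQL\n"
result = llm(prompt, max_tokens=128, temperature=0.1, stop=["#", ";"])
print(result["choices"][0]["text"])

A low temperature and explicit stop strings only make the output less rambling; feeding the actual schema into the prompt is what really cuts down on invented table and column names.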