
Thread: Does a wrapper (or bindings) for llama.cpp exist in thinBasic? (module suggestion)

  1. #1
    Junior Member
    Join Date
    Apr 2024
    Posts
    3
    Rep Power
    0

    Does a wrapper (or bindings) for llama.cpp exist in thinBasic? (module suggestion)

    Hi. I'm searching for bindings or a wrapper for llama.cpp written in any dialect of Basic or a similarly easy-to-learn language. Does anybody know of something like that? If there is no such module for thinBasic, why not create one? This is a popular AI technology that could promote thinBasic. Thanks for your attention.

  2. #2
    thinBasic author ErosOlmi's Avatar
    Join Date
    Sep 2004
    Location
    Milan - Italy
    Age
    57
    Posts
    8,817
    Rep Power
    10
    Ciao,

    are you referring to this: https://github.com/ggerganov/llama.cpp

    Very interesting, and a lot to study

    Personally I'm just a standard OpenAI user, using it to study AI possibilities and get some ideas.

    At work we used some AI features from Microsoft Azure Cognitive Services to automate some boring processes.
    We used Sentiment Analysis and Text Classification to verify and classify user feedback on product purchases, integrating our e-commerce web site (hosted on AWS) with the company's backend databases, using Azure Functions as the glue between the two worlds.
    And it is working pretty well ... so far more than 200k customer reviews have been analyzed and integrated.

    We'll see.
    I've downloaded and installed LM Studio from https://lmstudio.ai/ to get some practice in my spare time.

    Any suggestions on how to proceed?

    Thanks
    Eros
    www.thinbasic.com | www.thinbasic.com/community/ | help.thinbasic.com
    Windows 10 Pro for Workstations 64bit - 32 GB - Intel(R) Xeon(R) W-10855M CPU @ 2.80GHz - NVIDIA Quadro RTX 3000

  3. #3
    Junior Member
    Join Date
    Apr 2024
    Posts
    3
    Rep Power
    0
    Quote Originally Posted by ErosOlmi View Post
    Ciao,

    Any suggestions on how to proceed?

    Thanks
    Eros
    As far as I know, Pascal is easier to understand than C++, so I would recommend studying the Pascal wrapper for llama.cpp. This file https://github.com/ortegaalfredo/fpc...main/llama.pas is a Pascal port of the C++ header https://github.com/ggerganov/llama.c...master/llama.h

    Here is the code of a simple console app that can chat offline with AI models in .gguf format: https://github.com/ortegaalfredo/fpc.../llama_cli.lpr It uses the llama.pas bindings to call functions from llama.dll. Llama.dll can be obtained from the Neurochat installer: https://github.com/ortegaalfredo/neu...etup-win64.exe After installing Neurochat, open its folder and you will find llama.dll there. Then copy llama.dll to the folder where you have already put llama.pas, llama_cli.lpr and llama_cli.lpi (all these files can be found in this repo: https://github.com/ortegaalfredo/fpc-llama).

    After that, download and install the free Lazarus IDE: https://sourceforge.net/projects/laz...4.exe/download Then open llama_cli.lpr in Lazarus and compile it. You will now have llama_cli.exe, which uses llama.dll to chat with AI model files.

    An example of a local AI model: https://huggingface.co/TheBloke/Llam...?download=true To run it with llama_cli.exe you will need more than 8 GB of RAM. If you have less RAM, try this version: https://huggingface.co/TheBloke/Llam...chat.Q4_0.gguf Other AI models can be found here: https://huggingface.co/TheBloke
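    (For the curious: the reason llama_cli.exe only needs llama.dll sitting next to it is runtime dynamic loading, i.e. the program opens the library and resolves exported functions by name. Below is a generic C sketch of that pattern. It is not the real llama.cpp API; libm's cos() stands in for a llama.dll export so the sketch is runnable anywhere, and the open_lib/get_sym macro names are mine, not from any library.)

    ```c
    /* Generic sketch of the runtime dynamic-loading pattern that a host
       program (like llama_cli.exe) uses to call functions exported by a
       shared library (like llama.dll). Hypothetical helper names; libm's
       cos() stands in for a real llama.dll export. */
    #include <assert.h>
    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>
    typedef HMODULE lib_handle;
    #define open_lib(path)     LoadLibraryA(path)
    #define get_sym(lib, name) (void *)GetProcAddress(lib, name)
    #define close_lib(lib)     FreeLibrary(lib)
    #else
    #include <dlfcn.h>
    typedef void *lib_handle;
    #define open_lib(path)     dlopen(path, RTLD_NOW)
    #define get_sym(lib, name) dlsym(lib, name)
    #define close_lib(lib)     dlclose(lib)
    #endif

    int main(void)
    {
        /* On Windows this would be "llama.dll"; the math library is used
           here only so the sketch runs on any platform. */
    #ifdef _WIN32
        lib_handle lib = open_lib("msvcrt.dll");
    #else
        lib_handle lib = open_lib("libm.so.6");
    #endif
        assert(lib != NULL);

        /* Resolve an exported function by name, the same mechanism the
           llama.pas bindings use for the llama.dll exports they declare. */
        double (*fn_cos)(double) = (double (*)(double))get_sym(lib, "cos");
        assert(fn_cos != NULL);
        assert(fn_cos(0.0) == 1.0); /* cos(0) = 1: the resolved pointer works */

        printf("resolved and called 'cos' dynamically\n");
        close_lib(lib);
        return 0;
    }
    ```

    Any language that can open a shared library and call a C function pointer (Pascal, Harbour, a Basic dialect) can bind llama.dll this same way.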

    If you don't like Pascal, you can study other bindings/wrappers: C#/.NET - https://github.com/SciSharp/LLamaSharp, JavaScript/WASM (works in a browser) - https://github.com/tangledgroup/llama-cpp-wasm

    You can also study a one-file C implementation of an offline AI chatbot: https://github.com/karpathy/llama2.c/blob/master/run.c This project also has bindings/wrappers: C# - https://github.com/trrahul/llama2.cs, JavaScript - https://github.com/dmarcos/llama2.c-web Other bindings: https://github.com/karpathy/llama2.c...#notable-forks

    Recently I discovered a Harbour wrapper for llama.cpp: https://gitflic.ru/project/alkresin/llama_prg The code of its minimal console AI chatbot looks rather simple: https://gitflic.ru/project/alkresin/...&branch=master It is a little more complicated than the code of an average Basic dialect, but still understandable. The problem is that this console chatbot calls functions from hllama.cpp https://gitflic.ru/project/alkresin/...&branch=master So the Harbour wrapper is written in C++, and it is quite hard to study if you are a newbie at coding.

    However, I compiled llama.lib from the Harbour repo, and now I have some questions for you and the other experienced coders on this forum. Can thinBasic call functions from .lib files compiled with MSVC? Or does thinBasic have its own format of .lib files? If so, is there any way to convert an MSVC .lib to a thinBasic .lib? Can an MSVC .lib be converted to a .dll file? If yes, then the AI chatting functions could be called from that .dll and so would become accessible from thinBasic code (using some declare/link command), right?

    I also compiled llama.obj, hllama.obj and other .obj files, and the same questions apply: Can thinBasic use MSVC .obj files? Do they need to be converted to some native thinBasic format? Is there a way to make a .dll from .obj files? I can share llama.lib, hllama.obj and the other files, so just let me know if you need them.
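    (A note on the .lib/.obj questions above: static .lib and .obj files can indeed be linked into a DLL with MSVC's linker, e.g. something like "link /DLL /OUT:llama.dll llama.obj hllama.obj" plus whatever runtime libraries the code needs. The usual trick is a thin C wrapper that exports a flat C-style API, since plain exported C functions are what Basic dialects can declare and call. Here is a minimal, hypothetical sketch of such a wrapper; ai_answer() is an invented name standing in for whatever the real llama_prg code exposes, and the stub just echoes instead of running inference.)

    ```c
    /* Hypothetical sketch: a thin C wrapper compiled into a DLL so that a
       Basic dialect can call into code that originally lived in .lib/.obj
       files. The exported surface sticks to plain C types (char*, int),
       which any FFI/DECLARE mechanism can handle. */
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    #ifdef _WIN32
    #define DLL_EXPORT __declspec(dllexport)
    #else
    #define DLL_EXPORT
    #endif

    /* Invented export: real code would call into the static llama library
       here; this stub just echoes the prompt so the sketch is runnable. */
    DLL_EXPORT int ai_answer(const char *prompt, char *out, int out_len)
    {
        snprintf(out, (size_t)out_len, "echo: %s", prompt);
        return (int)strlen(out); /* number of bytes written */
    }

    int main(void)
    {
        /* Exercise the export the way a Basic caller would: pass a prompt
           and a preallocated output buffer. */
        char buf[64];
        int n = ai_answer("hello", buf, (int)sizeof buf);
        assert(n == 11);                      /* strlen("echo: hello") */
        assert(strcmp(buf, "echo: hello") == 0);
        printf("%s\n", buf);
        return 0;
    }
    ```

    With a DLL built this way, the remaining thinBasic question reduces to whether thinBasic can DECLARE and call a plain exported C function from a DLL, which is a much smaller problem than consuming MSVC .lib/.obj files directly.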

    One last thing. I talked to the coder Cory Smith (founder of gotBasic.com). He converted the code of run.c (from the llama2.c project mentioned earlier) to VB.NET. I'm not a fan of VB.NET; I prefer simpler and more lightweight Basic dialects. But this converted code may be useful for you: https://jmp.sh/s/vIwmomlOkJrswE9ww33K
    Last edited by JohnClaw; 18-04-2024 at 16:27.

  4. #4
    Junior Member
    Join Date
    Apr 2024
    Posts
    3
    Rep Power
    0
    I may have found a simplified version of llama.cpp: https://github.com/tinyBigGAMES/Dllama

    Minimal code to chat with LLMs:

    uses
      System.SysUtils,
      Dllama,
      Dllama.Ext;

    var
      LResponse: string;
      LTokenInputSpeed: Single;
      LTokenOutputSpeed: Single;
      LInputTokens: Integer;
      LOutputTokens: Integer;
      LTotalTokens: Integer;

    begin
      // init config
      Dllama_InitConfig('C:\LLM\gguf', -1, False, VK_ESCAPE);

      // add model
      Dllama_AddModel('Meta-Llama-3-8B-Instruct-Q6_K', 'llama3', 1024*8,
        '<|start_header_id|>%s %s<|end_header_id|>',
        '\n assistant:\n', ['<|eot_id|>', 'assistant']);

      // add messages
      Dllama_AddMessage(ROLE_SYSTEM, 'you are Dllama, a helpful AI assistant.');
      Dllama_AddMessage(ROLE_USER, 'who are you?');

      // display the user prompt
      Dllama_Console_PrintLn(Dllama_GetLastUserMessage(), [], DARKGREEN);

      // do inference
      if Dllama_Inference('llama3', LResponse) then
      begin
        // display usage
        Dllama_Console_PrintLn(CRLF, [], WHITE);
        Dllama_GetInferenceUsage(@LTokenInputSpeed, @LTokenOutputSpeed,
          @LInputTokens, @LOutputTokens, @LTotalTokens);
        Dllama_Console_PrintLn('Tokens :: Input: %d, Output: %d, Total: %d, Speed: %3.1f t/s',
          [LInputTokens, LOutputTokens, LTotalTokens, LTokenOutputSpeed], BRIGHTYELLOW);
      end
      else
      begin
        Dllama_Console_PrintLn('Error: %s', [Dllama_GetError()], RED);
      end;
      Dllama_UnloadModel();
    end.

    I talked to the author of Dllama. He will help me create bindings for BCX, or may even make them himself. If you are interested in creating bindings for thinBasic, join the Dllama Discord: https://discord.gg/tPWjMwK
