lynx

joined 2 years ago
[–] lynx@sh.itjust.works 7 points 2 weeks ago (1 children)

You posted about it here: https://sh.itjust.works/post/11722444 This image is different from your last one.

The images you posted look like this in grayscale:

This map seems to be unrelated to the actual numbers that I found online, but I also have not found any good statistics for the whole world.

[–] lynx@sh.itjust.works 4 points 2 weeks ago* (last edited 2 weeks ago)

You can already get a taste of the future with this finetune: https://huggingface.co/TheDrummer/Rivermind-24B-v1

Here is a small snippet from their description:

Why Rivermind 24B v1? While other AIs struggle with basic tasks, Rivermind 24B v1 handles complex queries with the precision of a Dyson vacuum cleaning every last speck of dust. It’s not just an AI—it’s your future, optimized.

Ready to upgrade? Try Rivermind 24B v1 today and experience the difference—because tomorrow’s AI is here, and it’s powered by Intel’s cutting-edge processors. 🚀

[–] lynx@sh.itjust.works 4 points 2 months ago (1 children)

There are not many models that support any-to-any. Currently the best seems to be Qwen3-Omni, but the audio quality is not great and it is not supported by llama.cpp: https://github.com/ggml-org/llama.cpp/issues/16186

[–] lynx@sh.itjust.works 2 points 1 year ago (1 children)

I don't know what you mean by steering.

  • Do you want a given output structure, like JSON or TOML?
  • Do you want to align the model with your own dataset of question and answer pairs?

First of all, have you tried giving the model multiple examples of input/output pairs in the context? This already helps the model a lot with producing the correct format.
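
For example (the extraction task and the example pairs here are made up, just to show the pattern):

```python
# Rough sketch of few-shot prompting: put example input/output pairs in the
# chat history before the real question. The task and examples are invented.
messages = [
    {"role": "system", "content": "Extract the city from the sentence. Answer with JSON only."},
    # example pairs that demonstrate the exact output format
    {"role": "user", "content": "I flew to Paris last week."},
    {"role": "assistant", "content": '{"city": "Paris"}'},
    {"role": "user", "content": "The conference was held in Tokyo."},
    {"role": "assistant", "content": '{"city": "Tokyo"}'},
    # the actual query
    {"role": "user", "content": "She is moving to Toronto next month."},
]
```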

Second, you can force a specific output structure by using a regex or a grammar: https://python.langchain.com/docs/integrations/chat/outlines/#constrained-generation https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md
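
With llama.cpp, the grammar route looks roughly like this sketch using llama-cpp-python; the GGUF file name is a placeholder and the grammar is just a toy example:

```python
# Sketch of grammar-constrained generation with llama-cpp-python.
# The model file name is a placeholder; the grammar only allows the output
# {"answer": "yes"} or {"answer": "no"}.
from llama_cpp import Llama
from llama_cpp.llama_grammar import LlamaGrammar

grammar = LlamaGrammar.from_string(r'''
root   ::= "{" ws "\"answer\"" ws ":" ws answer ws "}"
answer ::= "\"yes\"" | "\"no\""
ws     ::= [ \t\n]*
''')

llm = Llama(model_path="some-7b-instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm("Is Python dynamically typed? Answer as JSON.\n",
          grammar=grammar, max_tokens=32)
print(out["choices"][0]["text"])  # e.g. {"answer": "yes"}
```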

And third, in case you want to train a model to respond differently and the previous steps were not good enough, you can fine-tune. I can recommend this project to you, as it teaches how to fine-tune a model: https://github.com/huggingface/smol-course

Depending on the size of the model you want to fine-tune and the amount of compute you have available, you can either train by updating all parameters (e.g. with ORPO) or train via PEFT (LoRA).
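
A LoRA run with TRL + PEFT can be as short as this sketch; the model and dataset names are the small examples from the smol-course, and the exact arguments depend on your TRL/PEFT versions:

```python
# Minimal LoRA fine-tuning sketch with TRL + PEFT. Model and dataset are the
# small ones used in the smol-course; swap in your own data and model.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations", split="train")

peft_config = LoraConfig(  # train small adapter matrices instead of all weights
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="smollm2-lora", max_steps=100),
)
trainer.train()
```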

[–] lynx@sh.itjust.works 3 points 1 year ago (1 children)

First of all, I think it is a great idea to give the model access to a map. Unfortunately, it seems like the script is missing a huge part at the end: the loop does not have any content and the Tools class is missing.

[–] lynx@sh.itjust.works 2 points 1 year ago (1 children)

I have found the problem with the cut-off: by default, aider only sends 2048 tokens to Ollama, which is why I have not noticed it anywhere else except for coding.

When running /tokens in aider:

$ 0.0000   16,836 tokens total
           15,932 tokens remaining in context window
           32,768 tokens max context window size

So aider reports a 32k context window even though it will only send 2048 tokens to Ollama.

To fix it, I needed to add a file .aider.model.settings.yml to the repository:

```yaml
- name: aider/extra_params
  extra_params:
    num_ctx: 32768
```

I've been using Qwen 2.5 Coder (bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF) for some time now, and it has shown significant improvements compared to previous open weights models.

Notably, this is the first open-weights model that can really be used with Aider. Moreover, Qwen 2.5 Coder has made clear strides in editing files without requiring frequent retries to produce the proper edit format.

One area where most models struggle, including this one, is when the prompt exceeds a certain length. In this case, it appears that the model becomes unable to remember the system prompt when the prompt length is above ~2000 tokens.

[–] lynx@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago) (1 children)

If you want inline completions, you need a model that is trained on "fill in the middle" (FIM) tasks. On their Hugging Face page they even say that this is not supported and needs fine-tuning:

We do not recommend using base language models for conversations. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.

Models that can do it:

  • starcoder2
  • codegemma
  • codellama

Another option is to just use the Qwen model, but instead of only adding a few lines, let it rewrite the entire function each time.
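
For reference, a FIM request is just a specially formatted prompt, roughly like this sketch with StarCoder2-style sentinel tokens (CodeLlama and CodeGemma use different token names; the snippet and cursor position are made up):

```python
# Sketch of how an editor builds a "fill in the middle" prompt for a
# FIM-trained model (StarCoder2-style sentinel tokens).
prefix = "def add(a, b):\n    "    # everything before the cursor
suffix = "\n\nprint(add(1, 2))\n"  # everything after the cursor

fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
# The model then generates only the missing middle, e.g. "return a + b",
# which the editor inserts at the cursor:
# completion = llm(fim_prompt, max_tokens=64)
```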

[–] lynx@sh.itjust.works 5 points 1 year ago

Split Horizon with Poison Reverse

[–] lynx@sh.itjust.works 8 points 2 years ago

This is probably the only reason Microsoft Recall exists, as it is completely useless for anything else.

[–] lynx@sh.itjust.works 4 points 2 years ago (1 children)

On Hugging Face there is a Space where you can select the model and your graphics card and see whether you can run it, or how many cards you would need to run it: https://huggingface.co/spaces/Vokturz/can-it-run-llm

You should be able to do inference on all 7B or smaller models with quantization.
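
The rough math behind that, as a back-of-the-envelope sketch (the overhead factor is just an approximation):

```python
# Back-of-the-envelope VRAM estimate for inference: weights at the chosen
# quantization plus ~20% overhead for KV cache and activations (rough numbers).
def estimate_vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead

print(estimate_vram_gb(7, bits_per_weight=4))   # ~4.2 GB: fits on an 8 GB card
print(estimate_vram_gb(7, bits_per_weight=16))  # ~16.8 GB: needs a 24 GB card
```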

[–] lynx@sh.itjust.works 1 points 2 years ago (1 children)

Thanks for the suggestion, I tried it and the diff view is very good. The setup was not really easy for my local models, but after I set it up, it was really fast. The biggest problem with the tool is that the open-source models are not that good: I tried to have it fix a bug in my code and it only made it worse. On a more positive note, you at least do not need to copy all the text over to another window, and it is great for generating boilerplate code, nearly flawlessly every time.
