Free Open-Source Artificial Intelligence


Welcome to Free Open-Source Artificial Intelligence!

We are a community dedicated to furthering the availability of and access to:

Free Open Source Artificial Intelligence (F.O.S.A.I.)

More AI Communities

LLM Leaderboards

Developer Resources

GitHub Projects

FOSAI Time Capsule

 
 

EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS) released Apertus today, Switzerland’s first large-scale, open, multilingual language model — a milestone in generative AI for transparency and diversity.

submitted 4 weeks ago* (last edited 4 weeks ago) by CheeseNoodle@lemmy.world to c/fosai@lemmy.world
 
 

So my relevant hardware is:
GPU - 9070XT
CPU - 9950X3D
RAM - 64GB of DDR5

My problem is that I can't figure out how to get a local LLM to actually use my GPU. I tried Ollama with DeepSeek R1 8B, and it kind of vaguely ran while maxing out my CPU and completely ignoring the GPU.
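
For what it's worth, a minimal sketch of one way to check whether Ollama actually loaded the model into VRAM (this assumes a default install listening on localhost:11434 and an Ollama version recent enough to expose the /api/ps endpoint):

# Rough sketch: ask Ollama which models are loaded and how much of each sits in GPU memory.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    data = json.load(resp)

for model in data.get("models", []):
    total = model.get("size", 0)
    vram = model.get("size_vram", 0)
    share = 100 * vram / total if total else 0
    print(f"{model['name']}: about {share:.0f}% of the weights are in VRAM")

If that reports 0%, Ollama is running the model entirely on the CPU, which on AMD cards usually means the ROCm runtime isn't being picked up.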

While I'm here, model suggestions would be good too. I'm currently looking at two use cases.

  • Something I can feed a document to and ask questions about that document (Nvidia used to offer this), to work as a kind of co-GM for quickly referencing more obscure rules without having to hunt through the PDF (see the sketch after this list).
  • Something more storytelling-oriented that I can use to generate backgrounds for throwaway side NPCs when the players inevitably demand their life story after expertly dodging all the NPCs I actually wrote lore for.
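
To be concrete about the first use case, here's a toy sketch of the kind of workflow I mean (illustrative only; it assumes the ollama and pypdf Python packages, a model already pulled locally, and a rulebook small enough to fit in the context window rather than doing proper retrieval):

# Toy sketch of the "ask questions about a rulebook" use case.
import ollama
from pypdf import PdfReader

rules_text = "\n".join(page.extract_text() or "" for page in PdfReader("rules.pdf").pages)

reply = ollama.chat(
    model="llama3.1:8b",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer questions using only this rulebook:\n" + rules_text},
        {"role": "user", "content": "What is the penalty for attacking while prone?"},
    ],
)
print(reply["message"]["content"])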

Also, just as an unrelated aside: DeepSeek R1 8B seems to go into an infinite thought loop when you ask it the strawberry question, which was kind of funny.

 
 

Recent DeepSeek, Qwen, and GLM models have posted impressive benchmark results. Do you use them through their own chatbots? Do you have any concerns about what happens to the data you put in there? If so, what do you do about it?

I am not trying to start a flame war around the China subject. It just so happens that these models are developed in China. My concerns with using the frontends also developed in China stem from:

  • A pattern in which many Chinese apps have, in the past, been found to have minimal security
  • I don't think any of the three listed above let you opt out of having your prompts used for model training

I am also not claiming that non-China-based chatbots don't have privacy concerns, or that simply opting out of training gets you much on the privacy front.

 
 

I'm trying to find a FOSS, ideally offline, way to translate audio. I currently use Jan.AI for everything, but I realized that I've never tried to upload files to it before, and my current configuration doesn't seem to allow uploading.
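
A minimal sketch of one FOSS, fully offline option outside Jan.AI: the open-source Whisper speech models can translate audio straight to English text (assuming pip install openai-whisper and ffmpeg available on the PATH):

# Minimal sketch: offline speech-to-English translation with open-source Whisper.
import whisper

model = whisper.load_model("small")  # weights download on first run, then it works offline
result = model.transcribe("interview.ogg", task="translate")  # "translate" outputs English text
print(result["text"])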

 
 

after making this post a while ago, i tried out these three techniques for providing tool-result data to the LLM

  • append to assistant message
  • send as user response
  • send model-specific tool-response type

Findings

turns out - the assistant-message appending works great for larger LLMs, but not so well for smol ones.

meanwhile the user-side method works better than expected!

i didn't spend too much time with the model-specific tool role stuff, since i want my tooling to remain model-agnostic.

i will probably switch to the user-side method now for gopilot, leaving behind the assistant-only approach
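
to make the difference concrete, here's roughly what the two variants look like as an OpenAI-style message list (illustrative only, not gopilot's actual code - placeholders stand in for the real tool call and result):

# assistant-append variant: the tool result gets glued onto the assistant's own turn
messages_append = [
    {"role": "user", "content": "how do i make milk rice?"},
    {"role": "assistant", "content": "okay lemme look that up online\n<tool call>\n<tool result>"},
]

# user-side variant: the tool result comes back as the next user message
messages_user = [
    {"role": "user", "content": "how do i make milk rice?"},
    {"role": "assistant", "content": "okay lemme look that up online\n<tool call>"},
    {"role": "user", "content": "<tool result>"},
]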

Tool call formatting improvements

Turns out - my initial tool calling formatting was SUPER token-inefficient - who knew...

So I went from this formatting

okay lemme look that up online
{"tool_name": "web_search", "args": {"query": "how to make milk rice"}}
just put milk and rice in a bowl and mix em

to this, MUCH simpler format

okay lemme look that up online
Tool: web_search("how to make milk rice")
Result: just put milk and rice in a bowl and mix em

which is like - just.... WAY better!!!!

  • tokens reduced from 43 down to 24 (cost savings)
  • way easier to read
  • relies on the model's code-writing ability
  • allows for named-argument assignment like in json: Tool: web_search(query="my query here") (rough parsing sketch below)
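
for reference, a rough sketch of how the compact format can be pulled apart with a single regex (function and variable names are made up, and argument parsing is left out):

import re

# matches lines like: Tool: web_search("how to make milk rice")
TOOL_LINE = re.compile(r'^Tool:\s*(\w+)\((.*)\)\s*$')

def parse_tool_call(line):
    match = TOOL_LINE.match(line.strip())
    if not match:
        return None
    name, raw_args = match.groups()
    return name, raw_args  # quote/keyword-arg handling left to the caller

print(parse_tool_call('Tool: web_search("how to make milk rice")'))
# -> ('web_search', '"how to make milk rice"')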

i hope this is useful to someone out there.

if so, maybe share where you are applying it and tell us about your experience! <3

submitted 3 months ago* (last edited 3 months ago) by Even_Adder@lemmy.dbzer0.com to c/fosai@lemmy.world
 
 

the goal is to have an agent that can:

  • Understand a complex problem description.
  • Generate initial algorithmic solutions.
  • Rigorously test its own code.
  • Learn from failures and successes.
  • Evolve increasingly sophisticated and efficient algorithms over time.

https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf
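
(Not AlphaEvolve's actual implementation - that is described in the PDF above - but a toy sketch of the generate-test-keep-the-best loop those bullet points describe, with a stand-in function where the LLM-driven code mutation would go:)

import random

# Toy sketch of an evolutionary loop: propose a candidate, score it, keep the best so far.
def generate_candidate(parent):
    # stand-in for the LLM proposing a modified program; here it just perturbs a number
    return parent + random.uniform(-1.0, 1.0)

def score(candidate):
    # stand-in for "rigorously test its own code": higher is better
    return -abs(candidate - 3.14159)

best = 0.0
for generation in range(1000):
    child = generate_candidate(best)
    if score(child) > score(best):
        best = child  # keep improvements, discard failures

print(f"best candidate after 1000 generations: {best:.4f}")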
