Smokeydope

joined 2 years ago
MODERATOR OF
[–] Smokeydope@lemmy.world 4 points 8 hours ago (3 children)

What does an MCP server do?

14
Homelab upgrade WIP (infosec.pub)
submitted 11 hours ago* (last edited 9 hours ago) by Smokeydope@lemmy.world to c/localllama@sh.itjust.works
 

There's a lot more to this stuff than I thought there would be when starting out. I spent the day familiarizing myself with how to take apart my PC and swap GPUs, trying to piece everything together.

Apparently, in order for a PC to start up properly it needs a display adapter. I thought the existence of an HDMI port on the motherboard implied the existence of onboard graphics, but apparently only certain CPUs have that capability, and my Ryzen 5 2600 doesn't. The P100 Tesla has no display output capability either. So I've hit a snag where the PC isn't starting up because it can't find a graphical output device.

I'm going to try to run multiple GPUs together on PCIe. Hoping I can mix the AMD RX 580 and the Nvidia Tesla on the same board. Fingers crossed, please work.

My motherboard thankfully supports 4x4x4x4 PCIe x16 bifurcation, which is a very lucky break I didn't know about going into this 🙏

Strangely, other configs for splitting the 16 lanes, like 8x8 or 8x4x4, aren't in my BIOS for some reason. So I'm planning to get a 4x bifurcation board, plug both cards in, and hope the AMD one is recognized!

According to one source, the performance loss from running a GPU on 4 lanes is 10-15% for the kind of compute I'm doing. Surprisingly tolerable, actually.
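
A quick back-of-envelope sketch of what those lane counts mean for raw link bandwidth, assuming this board runs PCIe 3.0 (the generation is my assumption; swap in different per-lane numbers for 4.0):

```python
# Approximate one-direction PCIe 3.0 bandwidth per slot width.
# 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s per lane.
GBPS_PER_LANE = 8 * (128 / 130) / 8  # GT/s -> usable GB/s

def slot_bandwidth(lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * GBPS_PER_LANE

for lanes in (16, 8, 4):
    print(f"x{lanes}: {slot_bandwidth(lanes):.2f} GB/s")
```

An x4 link still moves close to 4 GB/s, which fits that 10-15% figure: once the model weights are sitting in VRAM, the link mostly matters for loading and for shuffling data between cards.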

I never really had to think about how PCIe lanes work or how to allocate them properly before.

For now I'm using two power supplies: the one built into the desktop and the new Corsair 850e PSU. I chose this one as it should handle 2-3 GPUs while being in my price range.

Also, the new 12V-2x6 port supports something like 600 W, enough for the Tesla, and the PSU comes with a dual PCIe splitter, which the Tesla's power cable adapter requires. So it all worked out nicely for a clean wiring solution.
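
For my own sanity, a rough power-budget sketch using the rated card draws; the ~150 W figure for CPU/board/drives is a guess, not a measurement:

```python
# Rough PSU budget: rated GPU maximums plus an assumed ~150 W
# for the Ryzen 5 2600, board, RAM, and drives.
loads_w = {
    "tesla_p100": 250,     # rated max
    "rx_580": 185,         # rated max
    "cpu_board_etc": 150,  # assumption, not measured
}
total = sum(loads_w.values())
headroom = total * 1.2  # ~20% margin for transient spikes
print(total, "W rated ->", round(headroom), "W with margin")
```

Even with margin that lands well under 850 W, so splitting across two supplies is more about connector availability than total wattage.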

Sadly I fucked up a little. The plastic PCIe release latch on the motherboard was brittle, and I fat-thumbed it too hard while having trouble removing the GPU initially, so it snapped off. I don't know if that's something fixable. Fortunately it doesn't seem to affect the security of the connection too badly. I intend to get a PCIe riser extension cable so there won't be much force on the now slightly loosened PCIe connection. I'll have the GPU and bifurcation board laid out nicely on the homelab table while testing, then get them mounted somewhere properly once I get it all working.

I need to figure out an external GPU mounting system. I see people use server racks or nut-and-bolt metal chassis. Maybe I could get a thin plate of copper the size of the desktop's glass window as a base/heatsink?

4
Homelab upgrade WIP (infosec.pub)
submitted 11 hours ago* (last edited 9 hours ago) by Smokeydope@lemmy.world to c/buildapc@lemmy.world
 


[–] Smokeydope@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

The point in time when the first qubit-based supercomputers transitioned from theoretical abstraction to physically proven reality. Thus opening up the can of worms of feasibly cracking classical cryptographic encryption like an egg, within humanly acceptable time frames instead of longer-than-the-universe's-lifespan time frames. Thanks, superposition-based parallel computation.

[–] Smokeydope@lemmy.world 4 points 2 days ago

Coincidentally the same name as my geometry-themed experimental grunge rock band.

[–] Smokeydope@lemmy.world 17 points 2 days ago* (last edited 2 days ago)

The first thought I had was this same scenario but all grown up. Imagine two fully grown 700 lb bovines crammed in your kitchen staring down your dishes lol. They're all cute until they become living flesh tanks; then they're still cute, but hella bulky and slightly intimidating.

[–] Smokeydope@lemmy.world 17 points 2 days ago* (last edited 2 days ago)

nods and continues to use original Doom WADs with the red cross design for health pickups, because the green ones from the BFG edition look like shit

[–] Smokeydope@lemmy.world 3 points 2 days ago* (last edited 2 days ago)

Being an alternate-protocol nerd is a trip. Most people have no clue what Gopher/Gemini/Spartan/Finger are or how they differ from the web. The handful of people on this planet who do are just other nerds who like to blogspam about tech nerd things. It would be nice if the web enshittified so much that even the average non-techie was put in a position to look into these alternatives.

[–] Smokeydope@lemmy.world 2 points 2 days ago

Right now THCA mail-order is under fire from goons in the House and Senate, so if you're gonna order in bulk legally you may want to do it soon; the lawmaking could go either way. I recommend Eight Horse Hemp for cheap mid bulk and wnc-cbd for the top-shelf premium.

 

I now do some work with computers that involves making graphics cards do computational work on a headless server. The computational work has nothing to do with graphics.

The name is more for consumers, based on the most common use for graphics cards and why they were first made in the 90s, but now they're used for all sorts of computational workloads. So what are some more fitting names for the part?

I now think of them as 'computation engines', analogous to an old car engine: it's where the computational horsepower is really generated. But how would RAM make sense in this analogy?
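
To make the engine analogy concrete: the "horsepower" these cards deliver is mostly bulk linear algebra, like the matrix multiply below (sketched on CPU with numpy; a GPU just runs thousands of the multiply-adds in parallel). In that picture RAM/VRAM is less the fuel and more the workbench: it holds whatever the engine is currently turning over.

```python
import numpy as np

# One matrix multiply: the bread-and-butter operation of
# non-graphics GPU compute (ML training/inference included).
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))
c = a @ b

flops = 2 * 256**3  # ~2*n^3 multiply-adds for an n x n matmul
print(c.shape, f"{flops:,} floating point ops")
```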

[–] Smokeydope@lemmy.world 1 points 3 days ago* (last edited 3 days ago) (1 children)

Have you by chance checked out the kobold.cpp Lite webUI? It allows some of what you're asking for, like RAG for worldbuilding, adding images for the LLM to describe and work into the story, easy editing of input and output, and lots of customization in settings. I have a public instance of the Kobold webUI set up on my website, and I'm cool with fellow hobbyists using my compute to experiment with things. If you're interested in trying it out to see if it's more what you're looking for, feel free to send me a PM and I'll send you the address and an API key/password.

[–] Smokeydope@lemmy.world 1 points 3 days ago* (last edited 3 days ago) (3 children)

In an ideal world, what exactly would you want an AI-integrated text editor to do? Depending on what needs to happen in your workflow, you can automate copy-pasting and output logging with Python scripts and your engine's API.

Editing and auditing stories isn't that much different from auditing codebases. It all boils down to the understanding and correct use of language to convey abstraction. I bet tweaking some agentic personalities and goals in VSCode + Roo could get you somewhere.

3
submitted 5 days ago* (last edited 5 days ago) by Smokeydope@lemmy.world to c/vaporents@lemmy.world
 

Bonus:

[–] Smokeydope@lemmy.world 2 points 6 days ago (1 children)

Yesss lol 🫠😵‍💫🤭

9
submitted 6 days ago* (last edited 6 days ago) by Smokeydope@lemmy.world to c/vaporents@lemmy.world
 

Previous tip:

I got the Ti Helix tip in the sunset color to finally complete my H2. Wow, this thing is a huge improvement in several ways. Airflow is greatly increased, which I immediately noticed; the M+ tip was restrictive by comparison.

It heats faster with the induction heater due to the lower thermal mass of titanium, with the Helix machined to be the lightest of the tips. That means the tip allows more efficient use of energy, for more heat cycles per battery charge.
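
The energy math behind that is just Q = m·c·ΔT; the tip masses below are illustrative guesses, not measured values:

```python
# Energy needed to bring a titanium tip up to temperature scales
# with its mass: Q = m * c * dT. Lighter tip -> less energy per cycle.
C_TITANIUM = 520    # specific heat of titanium, J/(kg*K)
DELTA_T = 200 - 25  # room temp up to ~200 C

def heat_energy_j(mass_g: float) -> float:
    return (mass_g / 1000) * C_TITANIUM * DELTA_T

# Hypothetical 2 g Helix vs a hypothetical 5 g heavier tip:
print(round(heat_energy_j(2.0)), "J vs", round(heat_energy_j(5.0)), "J")
```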

I like the helical twisty swirlies. The color was purple, not reddish orange, so I was surprised and thought they sent the wrong thing! It seems any amount of finger oil turns it from light purple to sunset tan-orange. I think that's bitching; I love color-changing stuff, and purple is my favorite color, so I was happy regardless.

The holes didn't have noticeable burrs, which is what I was worried about. It was machined to within my expectations. I ran it through an ultrasonic alcohol bath anyway before use, as is good practice.

18
submitted 1 week ago* (last edited 1 week ago) by Smokeydope@lemmy.world to c/metaldetecting@lemmy.world
 

This is one of my favorite finds. I believe it's a gold-plated half of some kind of locket or pendant.

Don't get discouraged because you aren't pulling precious metals and valuable loot out of the ground most of the time. The chance is always there. Keep at it!

 

It seems Mistral finally released their own version of Small 3.1 2503 with a CoT reasoning pattern baked in. Before this, the best CoT finetune of Small was DeepHermes, with DeepSeek's R1 distill patterns. According to the technical report, Mistral baked their own reasoning patterns into this one, so it's not just another DeepSeek distill finetune.

HuggingFace

Blog

Magistral technical report

 

Setting up a personal site on local hardware has been on my bucket list for a long time. I finally bit the bullet and got a basic website running with Apache on an Ubuntu-based Linux distro. I bought a domain name, linked it up to my IP, got SSL via Let's Encrypt for HTTPS, and added header rules until Security Headers and Mozilla Observatory gave it a perfect score.

Am I basically in the clear? What more do I need to do to protect my site and local network? I'm so scared of hackers and shit; I do not want to be an easy target.

I would like to make a page about the hardware it's running on, since I intend to have it run entirely off solar power like solar.lowtechmagazine, and I wanted to share the technical specifics. But I heard somewhere that revealing the internal state of your server is a bad idea since it can make exploits easier to find. Am I being stupid for wanting to share details like the computer model and the software running it?
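
One thing that helped me reason about those header scanners: they're mostly checking for the presence of a handful of response headers. A toy local version of that check (the shortlist is my own, not the scanners' exact rule set):

```python
# Which commonly recommended security headers are missing from a
# response? Header names are matched case-insensitively.
RECOMMENDED = (
    "strict-transport-security",
    "content-security-policy",
    "x-content-type-options",
    "x-frame-options",
    "referrer-policy",
)

def missing_security_headers(headers: dict) -> list:
    present = {name.lower() for name in headers}
    return [h for h in RECOMMENDED if h not in present]

# Example: only HSTS set -> the other four come back as missing.
print(missing_security_headers({"Strict-Transport-Security": "max-age=63072000"}))
```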

 


 

Hello. Our community, c/localllama, has always been and continues to be a safe haven for those who wish to learn about the creation and local usage of 'artificial intelligence' machine learning models, to enrich their daily lives and provide a fun hobby to dabble in. We come together to apply this new computational technology in ways that protect our privacy, and to build on a collective effort to better understand how it can help humanity as an open-source technology stack.

Unfortunately, we have been receiving an uptick in negative interactions from those outside our community recently. This is largely due to the current political tensions caused by our association with the popular and powerful tech companies who pioneered modern machine learning models for business and profit, as well as with unsavory techbro individuals who care more about money than ethics. Those users of models continue to create animosity toward the entire field of machine learning, and everyone associated with it, through their illegal theft of private data to train base models and very real threats to disrupt the economy by destroying jobs through automation.

There are legitimate criticisms to be had: the cost of creating models, how the art they produce is devoid of the soulful touch of human creativity, and how corporations are attempting to disrupt lives for profit instead of enriching them.

I did not want to be heavy-handed with censorship/mod actions prior to this post, because I believe that echo chambers are bad and genuine understanding requires discussion between multiple conflicting perspectives.

However, a lot of the negative comments we receive lately aren't made in good faith, with valid criticisms of the corporations or technologies grounded in an intimate understanding of them. No, instead it's base-level mudslinging by people with emotionally charged vendettas making nasty comments of no substance. Common examples are comparing models to NFTs, namecalling our community members as blind zealots for thinking models could ever be used to help people, and spreading misinformation with cherry-picked, unreliable sources to manipulatively exaggerate environmental impact and resource consumption.

While I am against echo chambers, I am also against our community being harassed and dragged down by bad actors who just don't understand what we do or how this works. You shouldn't have to be subjected to the same brain-rot antagonism with every post made here.

So I'm updating the guidelines by adding some rules I intend to enforce. I'm still debating whether or not to retroactively remove infringing comments from previous posts, but rest assured any new posts and comments will be moderated based on the following guidelines.

RULES:

Rule: No harassment or personal character attacks against community members. I.e. no namecalling, no generalizing the entire groups of people that make up our community, no baseless personal insults.

Reason: More or less self-explanatory; personal character attacks and childish mudslinging against community members are toxic.

Rule: No comparing artificial intelligence/machine learning models to cryptocurrency. I.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Reason: This is a piss-poor whataboutism argument. It claims something that is blatantly untrue while attempting to discredit the entire field by stapling the animosity everyone has toward crypto/NFTs onto ML. Models already do more than cryptocurrency ever has. Models can generate text, pictures, and audio. Models can view/read/hear text, pictures, and audio. Models may simulate aspects of cognitive thought patterns to attempt to speculate or reason through a given problem. Once trained, they can be copied and locally hosted for many thousands of years, which factors into the initial-energy-cost versus power-consumed-over-time equations.
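
That last point can be sketched as a simple amortization: a one-time training cost spread over every inference ever served. All numbers below are hypothetical placeholders, purely to show the shape of the equation:

```python
# Per-query energy = (one-time training energy / total queries served)
#                    + per-inference energy. All numbers hypothetical.
def energy_per_query_kwh(train_kwh: float, queries: float, infer_kwh: float) -> float:
    return train_kwh / queries + infer_kwh

# Hypothetical 1 GWh training run, 0.001 kWh per local inference:
for q in (1e6, 1e9, 1e12):
    kwh = energy_per_query_kwh(1e6, q, 0.001)
    print(f"{q:.0e} queries -> {kwh:.6f} kWh/query")
```

The more the model gets used, the closer the per-query cost falls toward the pure inference cost, which is the opposite of a blockchain's ongoing per-transaction overhead.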

Rule: No comparing artificial intelligence/machine learning to simple text-prediction algorithms. I.e. no statements such as "LLMs are basically just simple text predictors like what your phone keyboard's autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."

Reason: There are grains of truth to the reductionist statement that LLMs rely on mathematical statistics and probability for their outputs. The same can be said of humans, the statistical patterns in our own language, and how our neurons come together to predict the next word in the sentence we type. It's the intricate complexity of the process, and the way information is processed, that makes all the difference. ML models build on an entire college course's worth of advanced mathematics and STEM concepts to create hyperdimensional matrices that plot the relationships of information, with intricate hidden layers made of perceptrons connecting billions of parameters into vast abstraction mappings. There were also major innovations and discoveries made in the 2000s, which we didn't have in the early days of computing, that made modern model training possible. All of that is a little more complicated than what your phone's autocorrect does, and the people who make the lazy reductionist comparison just don't care about the nuances.
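
For anyone curious about the mechanical difference: keyboard autocorrect is roughly a frequency-table lookup, while a transformer mixes the entire context before predicting. A toy single-head attention step (minimal numpy sketch, not any production model's code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: every token weighs every
    other token before contributing to the prediction."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))  # 5 tokens, 8-dim embeddings
out = attention(x, x, x)         # self-attention over the sequence
print(out.shape)                 # each position now blends all five
```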

Rule: No implying that models are devoid of purpose or of potential for enriching people's lives.

Reason: Models are tools with great potential for helping people, through the creation of accessibility software for the disabled and by enabling doctors to better heal the sick through advanced medical diagnostic techniques. The perceived harm models are capable of causing, such as job displacement, is rooted in our flawed late-stage-capitalist society's pressure for increased profit margins at the expense of everyone and everything.

If you have any proposals for rule additions or wording changes, I will hear you out in the comments. Thank you for choosing to browse and contribute to this space.

 

WOAH

 

So it's been almost 10 years since I've swapped computer parts, and I am nervous about this. I've never done any homelab-type thing involving big powerful parts, just dealt with average mid-range consumer-class parts in standard desktop cases.

I do computational work now and want to convert a desktop PC into a headless server with a beefy GPU. I bit the bullet and ordered a used Tesla P100 16GB. Based on what I'm reading, a new PSU may be in order as well, if nothing else. I haven't actually read the labels yet, but online info on the desktop model indicates it's probably around a 450 W PSU.

The P100's power draw is rated at 250 W maximum. The card I'm using now draws 185 W maximum. I'm reading that 600 W would be better for just-in-case overhead. I plan to get this 700 W one, which I hope is enough overhead to cover an extra GPU if I want to take advantage of Nvidia CUDA with the 1070 Ti in my other desktop.

How much does the rest of the system use on average, with a Ryzen 5 2600 six-core in an AM4 motherboard and about 16 GB of DDR4 RAM?
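
My own rough tally while waiting on answers, using the rated numbers above; the ~150 W system figure is an assumption, not something I've measured:

```python
# Sanity check: P100 rated max plus a guessed system draw,
# compared against the planned 700 W PSU.
P100_W = 250    # rated max, from the spec sheet
SYSTEM_W = 150  # assumption: Ryzen 5 2600 + board + RAM + drives
total = P100_W + SYSTEM_W
print(total, "W est. ->", round(700 / total, 2), "x PSU headroom")
```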

When I read up on powering the P100, though, I stumbled across a Reddit post from someone confused about how to connect it to a regular consumer Corsair PSU. Apparently the P100 uses a CPU power connector instead of a PCIe one? But you can't use the regular CPU power output from the PSU. According to the post, people buy adapter cables for these cards that take two GPU cables in and put one CPU cable out.

Can you please help me with a sanity check, and to understand what I've gotten myself into? I don't exactly understand what I'm supposed to do with those adapter cables. Do modern PSUs come with multiple GPU power outputs on the interface these days, so I need to run two parallel lines into that adapter?

Thank you all for your help on the last post; I'm deeply grateful for all the input I've gotten here. I'll do my best not to spam-post my tech concerns, but this one has me really worried.

view more: next ›