KingRandomGuy

joined 2 years ago
[–] KingRandomGuy@lemmy.world 2 points 2 days ago

Yep, since this is using Gaussian Splatting you'll need multiple camera views and an initial point cloud. You get both for free from video via COLMAP.
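
For anyone wanting to try it, here's roughly what that looks like end to end. A minimal sketch, assuming ffmpeg and COLMAP are on your PATH; the paths and the 2 fps sampling rate are placeholders, not recommendations:

```python
import subprocess
from pathlib import Path

video = "capture.mp4"              # placeholder input video
work = Path("colmap_workspace")
images = work / "images"
images.mkdir(parents=True, exist_ok=True)

# 1. Sample frames from the video (denser scenes may need a higher fps).
subprocess.run(["ffmpeg", "-i", video, "-vf", "fps=2",
                str(images / "frame_%04d.jpg")], check=True)

# 2. COLMAP's standard SfM pipeline: features -> matches -> sparse model.
db = work / "database.db"
sparse = work / "sparse"
sparse.mkdir(exist_ok=True)
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(db),
                "--image_path", str(images)], check=True)
# The sequential matcher suits video, since consecutive frames overlap.
subprocess.run(["colmap", "sequential_matcher",
                "--database_path", str(db)], check=True)
subprocess.run(["colmap", "mapper",
                "--database_path", str(db),
                "--image_path", str(images),
                "--output_path", str(sparse)], check=True)

# sparse/0 now holds camera poses and the initial point cloud, which is
# the input format most Gaussian Splatting trainers expect.
```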

[–] KingRandomGuy@lemmy.world 2 points 1 week ago

Yeah, in typical Google fashion they used to have two deep learning teams: Google Brain and DeepMind (since merged into Google DeepMind). Google Brain was Google's in-house team, responsible for inventing the transformer. DeepMind focused more on RL agents than Google Brain did, hence results like AlphaZero and AlphaFold.

[–] KingRandomGuy@lemmy.world 2 points 1 week ago

The general framework for evolutionary methods/genetic algorithms is indeed old, but it's extremely broad. What matters is how you actually mutate the algorithm being run given feedback. In this case, they're using the same framework as genetic algorithms (iteratively building up solutions by repeatedly modifying an existing attempt after receiving feedback), but they use an LLM for two things:

  1. Overall better sampling (the LLM has better heuristics for figuring out what to fix compared to handwritten techniques), meaning higher efficiency at finding a working solution.

  2. "Open set" mutations: you don't need to pre-define what changes can be made to the solution. The LLM can generate arbitrary mutations instead. In particular, AlphaEvolve can modify entire codebases as mutations, whereas prior work only modified single functions.

The "Related Work" section (Section 5) of their whitepaper is probably what you're looking for; see here.
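
If you want the intuition in code form, here's a minimal sketch of such a loop. Everything in it (the function names, tournament selection, the population cap) is my own illustration of the framework, not AlphaEvolve's actual implementation:

```python
import random

def llm_propose_mutation(candidate: str, feedback: str) -> str:
    """Hypothetical stand-in for an LLM call: given the current program and
    evaluator feedback, return a modified program. In AlphaEvolve this step
    can rewrite arbitrary parts of a codebase, not just one function."""
    raise NotImplementedError

def evaluate(candidate: str) -> tuple[float, str]:
    """Problem-specific scorer: runs the candidate and returns
    (fitness, textual feedback such as failing tests or timings)."""
    raise NotImplementedError

def evolve(seed: str, generations: int = 100, population_size: int = 8):
    fitness, feedback = evaluate(seed)
    population = [(fitness, seed, feedback)]
    for _ in range(generations):
        # Pick a parent, biased toward high fitness (tournament selection).
        parent = max(random.sample(population, min(3, len(population))))
        _, candidate, feedback = parent
        # The mutation operator is just an LLM call conditioned on feedback,
        # so the set of possible edits is open-ended rather than pre-defined.
        child = llm_propose_mutation(candidate, feedback)
        child_fitness, child_feedback = evaluate(child)
        population.append((child_fitness, child, child_feedback))
        # Keep only the fittest candidates for the next generation.
        population = sorted(population, reverse=True)[:population_size]
    return max(population)[1]  # best program found
```

Point 1 above shows up as the quality of `llm_propose_mutation`'s edits; point 2 is the fact that the mutation is whatever code the LLM writes, rather than a fixed set of hand-coded edit operations.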

[–] KingRandomGuy@lemmy.world 2 points 1 month ago (1 child)

Unfortunately, proprietary professional software suites are still usually better than their FOSS counterparts: for instance, Altium Designer vs. KiCad for ECAD, and SolidWorks vs. FreeCAD for MCAD. That's not to say the open source tools are bad; I use them myself all the time. But the proprietary tools are usually more robust (for instance, it's fairly easy to break models in FreeCAD if you aren't careful) and have better workflows for creating really complex designs.

I'll also add that Lightroom is still better than Darktable and RawTherapee for me. Both of the open source options are good, but Lightroom has better denoising in my experience, and it's quicker to support new cameras and lenses.

With time I'm sure the open source solutions will improve and catch up to the proprietary ones. KiCad and FreeCAD are already good enough for my needs, but that might not be true if I were working on very complex projects.

[–] KingRandomGuy@lemmy.world 2 points 1 month ago (1 child)

Cute cat! Nevermore and Bentobox are two super popular filter options.

Since you're running an E3 V2, first make sure you've replaced the hotend with an all-metal design. The stock hotend has the PTFE tube routed all the way into the hotend, which is fine for low-temperature materials like PLA, but can result in off-gassing at the higher temperatures used by ASA and some variants of PETG. The PTFE particles are almost certainly not good to breathe in over the long term, and can even be deadly to certain animals, such as birds, in small quantities.

[–] KingRandomGuy@lemmy.world 1 point 1 month ago

In my experience doing a bit more than 10% can be helpful in the event of underextrusion, plus I've seen it add a bit more rigidity. But you're right that there are diminishing returns until you start maxing out the infill.

4 perimeters at 0.6 mm or 6 at 0.4 mm (about 2.4 mm of wall either way) should be fine.

[–] KingRandomGuy@lemmy.world 3 points 1 month ago (3 children)

Yeah, I agree. In the photo I didn't see an enclosure so I said PETG is fine for this application. With an enclosure you'd really want to use ABS/ASA, though PETG could work in a pinch.

I also agree that an enclosure (combined with a filter) is a good idea. I think people tend to undersell the potential dangers from 3D printing, especially for people with animals in the home.

[–] KingRandomGuy@lemmy.world 2 points 1 month ago

Thanks for the respectful discussion! I work in ML (not LLMs, but computer vision), so of course I'm biased. But I think it's understandable to dislike ML/AI stuff considering that there are unfortunately many unsavory practices taking place (potential copyright infringement, very high power consumption, etc.).

[–] KingRandomGuy@lemmy.world 2 points 1 month ago

All good, it's still something to keep in mind (especially if OP thinks about enclosing their printer in the future). Thanks for your comment!

[–] KingRandomGuy@lemmy.world 3 points 1 month ago (5 children)

IMO heat generated from stress won't be a big deal, especially considering that people frequently build machines out of PETG (Prusa's i3 variants, custom CoreXYs like Vorons and E3NG). The bigger problem is creep, which is why you shouldn't use PLA for this part.

[–] KingRandomGuy@lemmy.world 4 points 1 month ago* (last edited 1 month ago) (6 children)

PETG will almost certainly be fine. Just use lots of walls (6 walls, maybe 30% infill). PETG's heat resistance is more than good enough for a non-enclosed printer. Prusa has used PETG for their printer parts for a very long time without issues.

Heat isn't the issue to worry about IMO. The bigger issue is creep/cold flowing, which is permanent deformation that results even from relatively light, sustained loads. PLA has very poor creep resistance unless annealed, but PETG is quite a bit better. ABS/ASA would be even better, but they're much more of a headache to print.

[–] KingRandomGuy@lemmy.world 1 point 1 month ago

It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem that it hasn’t previously seen

This also isn't an accurate characterization IMO. LLMs and ML algorithms in general can generalize to unseen problems, even if they aren't perfect at this; for instance, you'll find that LLMs can produce commands to control robot locomotion, even on robot types they haven't seen before.

"Reasoning" here is based on chains of thought, where the model generates intermediate steps that then help it produce more accurate results. You can fairly argue that this isn't reasoning, but it's not like it's traversing a fixed knowledge graph or something.

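To illustrate the chain-of-thought point, here's a minimal sketch; `generate` is a hypothetical stand-in for whatever LLM completion call you use:

```python
def generate(prompt: str) -> str:
    """Hypothetical LLM completion call."""
    raise NotImplementedError

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Direct prompting: the model must jump straight to the answer.
direct_answer = generate(question)

# Chain-of-thought prompting: the model first generates intermediate steps,
# and the final answer is conditioned on those steps. That conditioning is
# the whole trick -- no external knowledge graph is involved.
cot_answer = generate(question + "\nLet's think step by step.")
```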

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=163) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 420x30s lights, 40 darks, 100 flats, 100 biases, 100 dark-flats over two nights
  • Prepared data and stacked in SiriLic
  • Background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++ in SiriLic
  • Adjusted curves, enhanced saturation of the nebula and recombined with star mask in GIMP, desaturated and denoised background

This is my first time doing a multi-night image, and my first time using SiriLic to configure a Siril script. Any tips there would be helpful. Suggestions for improvement or any other form of constructive criticism are welcome!
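
For what it's worth, here's a minimal sketch of the kind of calibration/stacking script SiriLic generates, driven from Python via siril-cli. The folder names, version line, and darks-only calibration (flats and biases omitted for brevity) are all assumptions modeled on Siril's stock OSC preprocessing scripts, not the exact script used here:

```python
import subprocess
from pathlib import Path

# Hypothetical session layout: ./darks and ./lights hold the raw frames.
session = Path("session")

script = """requires 1.2.0
# Build the master dark
cd darks
convert dark -out=../process
cd ../process
stack dark rej 3 3 -nonorm
cd ..
# Calibrate, debayer, register, and stack the lights
cd lights
convert light -out=../process
cd ../process
calibrate light -dark=dark_stacked -cc=dark -cfa -equalize_cfa -debayer
register pp_light
stack r_pp_light rej 3 3 -norm=addscale -output_norm -out=../result
"""

(session / "preprocess.ssf").write_text(script)
# -d sets Siril's working directory, -s runs the script headlessly.
subprocess.run(["siril-cli", "-d", str(session), "-s",
                str(session / "preprocess.ssf")], check=True)
```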

33 points · submitted 2 years ago* (last edited 2 years ago) by KingRandomGuy@lemmy.world to c/astrophotography@lemmy.world

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=153) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 360x30s lights, 30 darks, 30 flats, 30 biases
  • Stacked in Siril, background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++
  • Enhanced saturation of the galaxy and recombined with star mask in GIMP, desaturated and denoised background

Suggestions for improvement or any other form of constructive criticism are welcome!
