KingRandomGuy

joined 2 years ago
[–] KingRandomGuy@lemmy.world 2 points 2 weeks ago

Yeah, you can certainly get it to reproduce some pieces (or fragments) of work exactly, but definitely not everything. Even a frontier LLM's weights are far too small to fully memorize most of its training data.
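
A rough back-of-envelope comparison makes the point (the numbers below are made up but representative, not any specific model's):

```python
# Hypothetical frontier-scale model: ~70B parameters at 2 bytes each.
params = 70e9
weights_gb = params * 2 / 1e9   # ~140 GB of weights

# Hypothetical training set: ~15 trillion tokens, ~4 bytes of text per token.
tokens = 15e12
data_gb = tokens * 4 / 1e9      # ~60,000 GB of raw text

print(f"weights: {weights_gb:.0f} GB vs training text: {data_gb / 1000:.0f} TB")
# -> weights: 140 GB vs training text: 60 TB, so lossless memorization is out.
```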

[–] KingRandomGuy@lemmy.world 2 points 3 weeks ago (1 children)

Most "50 MP" cameras are actually quad Bayer sensors (effectively worse resolution) and are usually binned 2x2 down to roughly 12 MP.
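
For the curious, here's a quick numpy sketch of that 4-to-1 reduction (the resolution and pixel values are made up; real sensors bin the same-color 2x2 quads on-chip before demosaicing, but the pixel-count math is the same):

```python
import numpy as np

# Fake "50 MP" quad Bayer mosaic: 8192 x 6144 = ~50.3 MP of 12-bit values.
raw = np.random.randint(0, 4096, size=(8192, 6144), dtype=np.uint16)

# 2x2 average binning: each output pixel is the mean of one 2x2 block,
# so the ~50 MP mosaic collapses to a ~12.6 MP image.
h, w = raw.shape
binned = raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(raw.shape, "->", binned.shape)  # (8192, 6144) -> (4096, 3072)
```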

The lens on your phone likely isn't sharp enough to resolve 50 MP of detail on such a small sensor anyway, so the megapixel number ends up being more of a marketing gimmick than anything.

[–] KingRandomGuy@lemmy.world 11 points 3 weeks ago

I agree with your thoughts. I hate what Bambu has done to the industry in terms of starting a patent arms race and encouraging other companies to reject open source, but I do love how they've pushed innovation and made 3D printing easier for people just looking for a tool.

I hope the DIY printers like Voron, Ratrig, VzBot, and E3NG can continue the spirit of the RepRap movement.

[–] KingRandomGuy@lemmy.world 3 points 4 weeks ago

I work in an area adjacent to autonomous vehicles, and the primary reason has to do with data availability and the stability of the terrain. In the woods you're naturally going to have worse coverage of typical behaviors, simply because the set of possible observations is much wider ("anomalies" are more common). The terrain being less maintained also makes planning and perception much more critical. So in some sense, cities are the ideal setting.

Some companies are specifically targeting off-road AVs, but as you can guess, the primary use cases are military.

[–] KingRandomGuy@lemmy.world 7 points 1 month ago

Some apps only require 'basic' Play Integrity verification but now also check whether they were installed via the Play Store, and refuse to run if they were installed from an alternative source.

This has been a problem for GrapheneOS, since some apps filter themselves out of Play Store search results if you don't pass strong Play Integrity, despite the fact that they don't actually require it. Luckily, Graphene now has a bypass for this.

[–] KingRandomGuy@lemmy.world 3 points 1 month ago (2 children)

OBS can use NVENC, though IIRC it needs to be built with support enabled, which may not be the case for every distro's packages.

[–] KingRandomGuy@lemmy.world 2 points 1 month ago

Yep, since this is using Gaussian Splatting you'll need multiple camera views and an initial point cloud. You get both for free from video via COLMAP.
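
If it helps, here's roughly what that pipeline looks like with OpenCV and the pycolmap bindings (file names and the frame stride are made up, and error handling is omitted):

```python
import pathlib

import cv2        # pip install opencv-python
import pycolmap   # pip install pycolmap

video, workdir = "scene.mp4", pathlib.Path("colmap_out")  # hypothetical paths
images = workdir / "images"
images.mkdir(parents=True, exist_ok=True)

# 1. Sample frames from the video to get a multi-view image set.
cap = cv2.VideoCapture(video)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 15 == 0:  # keep every 15th frame for enough parallax
        cv2.imwrite(str(images / f"{idx:06d}.jpg"), frame)
    idx += 1
cap.release()

# 2. Run COLMAP's SfM pipeline: this recovers the camera poses and the
#    sparse point cloud that Gaussian Splatting uses for initialization.
db = workdir / "database.db"
pycolmap.extract_features(db, images)
pycolmap.match_exhaustive(db)
maps = pycolmap.incremental_mapping(db, images, workdir)
maps[0].write(workdir)  # poses + sparse points, ready for a splatting trainer
```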

[–] KingRandomGuy@lemmy.world 2 points 1 month ago

Yeah, in typical Google fashion they used to have two deep learning teams: Google Brain and DeepMind. Google Brain was Google's in-house team, responsible for inventing the transformer, while DeepMind focused more on RL agents, hence results like AlphaZero and AlphaFold.

[–] KingRandomGuy@lemmy.world 2 points 1 month ago

The general framework of evolutionary methods/genetic algorithms is indeed old, but it's extremely broad. What matters is how you actually mutate the candidate solution given feedback. In this case, they use the same framework as genetic algorithms (iteratively building up solutions by repeatedly modifying an existing attempt after receiving feedback), but with an LLM doing two things:

  1. Overall better sampling (the LLM has better heuristics for figuring out what to fix compared to handwritten techniques), meaning higher efficiency at finding a working solution.

  2. "Open set" mutations: you don't need to pre-define what changes can be made to the solution. The LLM can generate arbitrary mutations instead. In particular, AlphaEvolve can modify entire codebases as mutations, whereas prior work only modified single functions.

The "Related Work" section (section 5) of their whitepaper is probably what you're looking for, see here.
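
To make that concrete, here's a minimal sketch of the loop in Python. `evaluate` and `llm_mutate` are hypothetical stand-ins (a toy objective and a random edit) for the real benchmark harness and LLM call; the actual AlphaEvolve system adds a program database, prompt construction, and parallel evaluation on top of this.

```python
import random

def evaluate(program: str) -> float:
    """Fitness of a candidate. Toy stand-in: real systems compile and run
    the program against a benchmark and score the result."""
    return -abs(len(program) - 40)

def llm_mutate(program: str, feedback: float) -> str:
    """Hypothetical LLM hook: 'rewrite this program, its score was X'.
    Stubbed with a random insert/delete so this file actually runs."""
    i = random.randrange(len(program) + 1)
    if random.random() < 0.5:
        return program[:i] + random.choice("abcdefgh ") + program[i:]
    return program[:i] + program[i + 1:]

def evolve(seed: str, generations: int = 200, children: int = 4) -> str:
    best, best_score = seed, evaluate(seed)
    for _ in range(generations):
        # Sample several proposed mutations, keep the fittest survivor.
        for child in (llm_mutate(best, best_score) for _ in range(children)):
            score = evaluate(child)
            if score > best_score:
                best, best_score = child, score
    return best

print(evolve("print('hello world')"))
```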

[–] KingRandomGuy@lemmy.world 2 points 2 months ago (1 children)

Unfortunately, proprietary professional software suites are still usually better than their FOSS counterparts: Altium Designer vs KiCad for ECAD, for instance, or SolidWorks vs FreeCAD. That's not to say the open source tools are bad; I use them myself all the time. But the proprietary tools are usually more robust (for instance, it's fairly easy to break models in FreeCAD if you aren't careful) and have better workflows for creating really complex designs.

I'll also add that Lightroom is still better than Darktable and RawTherapee for me. Both of the open source options are good, but Lightroom has better denoising in my experience, and it's also quicker to support new cameras and lenses.

With time I'm sure the open source options will improve and catch up to the proprietary ones. KiCad and FreeCAD are already good enough for my needs, though that might not be true if I were working on very complex projects.

[–] KingRandomGuy@lemmy.world 2 points 3 months ago (1 children)

Cute cat! Nevermore and Bentobox are two super popular ones.

Since you're running an E3 V2, first make sure you've replaced the hotend with an all-metal design. The stock hotend has the PTFE tube routed all the way into the hotend, which is fine for low-temperature materials like PLA but can result in off-gassing at the higher temperatures used by ASA and some variants of PETG. The PTFE particles are almost certainly not good to breathe in over the long term, and can even be deadly in small quantities to certain animals, such as birds.

[–] KingRandomGuy@lemmy.world 1 points 3 months ago

In my experience, going a bit above 10% can be helpful in the event of underextrusion, and I've seen it add a bit more rigidity. But you're right that there are diminishing returns until you start maxing out the infill.

4 perimeters at 0.6 mm or 6 at 0.4 mm should be fine; both give the same 2.4 mm wall.

 

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=163) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 420x30s lights, 40 darks, 100 flats, 100 biases, 100 dark-flats over two nights
  • Prepared data and stacked with a SiriLic-configured Siril script
  • Background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++ in Siril
  • Adjusted curves, enhanced saturation of the nebula and recombined with star mask in GIMP, desaturated and denoised background

This is my first time doing a multi-night image, and my first time using SiriLic to configure a Siril script. Any tips there would be helpful. Suggestions for improvement or any other form of constructive criticism are welcome!

33
submitted 2 years ago* (last edited 2 years ago) by KingRandomGuy@lemmy.world to c/astrophotography@lemmy.world
 

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=153) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 360x30s lights, 30 darks, 30 flats, 30 biases
  • Stacked in Siril, background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++
  • Enhanced saturation of the galaxy and recombined with star mask in GIMP, desaturated and denoised background

Suggestions for improvement or any other form of constructive criticism welcome!
