yuu

joined 2 years ago
[–] yuu@group.lt 5 points 2 years ago* (last edited 2 years ago)

oh this is one of my wallpapers

I made a 1920x1080 version of it by horizontally tiling 3 duplicates, like this (I got the freely licensed version from Wikimedia Commons, under https://creativecommons.org/licenses/by-sa/4.0/deed.en)

Observable_Universe_Logarithmic_Map_%28horizontal_layout_english_annotations%29.x1080-tiled.png

[–] yuu@group.lt 11 points 2 years ago

just use a community-led or nonprofit-foundation-led distro: NixOS (better than Silverblue/Kinoite in every aspect they try to sell), Arch, or Debian.

For professional usage, you generally go with Ubuntu or some RHEL derivative.

 

For the first time, scientists have detected a carbon molecule known as methyl cation (CH3+) in space. This molecule is significant because it promotes the synthesis of more complex carbon-based compounds.

Orion Nebula's Orion Bar

 

Originally posted on https://emacs.ch/@yantar92/110571114222626270

Please help collect statistics to optimize Emacs GC defaults

Many of us know that the Emacs defaults for garbage collection are rather ancient and often cause significant slowdowns. However, it is hard to know which alternative defaults would be better.

Emacs devs need help from users to obtain real-world data about Emacs garbage collection. See the discussion in https://yhetil.org/emacs-devel/87v8j6t3i9.fsf@localhost/

Please install https://elpa.gnu.org/packages/emacs-gc-stats.html and send the generated statistics via email to emacs-gc-stats@gnu.org after several weeks.

 

Early galaxies' stars allowed light to travel freely by heating and ionizing intergalactic gas, clearing vast regions around them.

Cave divers equipped with brilliant headlamps often explore cavities in rock less than a mile beneath our feet. It’s easy to be wholly unaware of these cave systems – even if you sit in a meadow above them – because the rock between you and the spelunkers prevents light from their headlamps from disturbing the idyllic afternoon.

Apply this vision to the conditions in the early universe, but switch from a focus on rock to gas. Only a few hundred million years after the big bang, the cosmos was brimming with opaque hydrogen gas that trapped light at some wavelengths from stars and galaxies. Over the first billion years, the gas became fully transparent – allowing the light to travel freely. Researchers have long sought definitive evidence to explain this flip.

New data from the James Webb Space Telescope recently pinpointed the answer using a set of galaxies that existed when the universe was only 900 million years old. Stars in these galaxies emitted enough light to ionize and heat the gas around them, forming huge, transparent “bubbles.” Eventually, those bubbles met and merged, leading to today’s clear and expansive views.

More: https://eiger-jwst.github.io/index.html

[–] yuu@group.lt 1 points 2 years ago* (last edited 2 years ago)

it’s simply too difficult for average person to login and apply to every single instance that they’re interested in

Maybe there is some misunderstanding, but why would one want to apply to multiple instances at the same time? Isn't applying and being active on one good instance enough?

[–] yuu@group.lt 3 points 2 years ago

Totally supportive. It is great to have a Wayland implementation in Rust (and to see Rust's increasing adoption by the FOSS community); more specifically Smithay, which not only System76 is building upon, but also community projects, for example this WM: https://github.com/MagmaWM/MagmaWM

[–] yuu@group.lt 1 points 2 years ago* (last edited 2 years ago)

Well, darwin users, just like linux users, should also work on making packages available for their platform, as Nix is still in its adoption phase. There are many already. IIRC, I, who never use macOS, put some effort into making 1 or 2 packages (likely more) build on darwin.
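As a sketch of what that involves (a hypothetical nixpkgs-style derivation; the name hello-tool, the URL, and the hash placeholder are all made up), declaring darwin in meta.platforms, after verifying the build, is often most of the work needed:

    # hypothetical package; pname, url, and hash are placeholders
    { lib, stdenv, fetchurl }:

    stdenv.mkDerivation rec {
      pname = "hello-tool";
      version = "1.0";

      src = fetchurl {
        url = "https://example.org/hello-tool-${version}.tar.gz";
        hash = lib.fakeHash; # replace with the real hash when packaging
      };

      meta = with lib; {
        description = "Hypothetical example package";
        # adding darwin here (once the build is confirmed to work there)
        # is what makes the package available to macOS users via Nix
        platforms = platforms.linux ++ platforms.darwin;
      };
    }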

[–] yuu@group.lt 1 points 2 years ago* (last edited 2 years ago)

as Reddit, which is now going to IPO. It happened with Twitter->Mastodon; it can happen with Reddit->Lemmy as well.

We saw it coming haha

[–] yuu@group.lt 6 points 2 years ago* (last edited 2 years ago)

I can keep Firefox bleeding edge without having to worry that the package manager is also going to update the base system, giving me a broken next boot if I run rolling releases.

On Nix[OS], one can use multiple base Nixpkgs versions for specific packages. What I have is, e.g., two flake inputs: nixpkgs and nixpkgs-update. The first provides most packages, including the base system, which I do not want to update regularly, while the second is for packages that I want to update more regularly, like the web browser (for security reasons, etc).

e.g.
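A minimal flake.nix along these lines (a sketch, not the actual config: the two input names match the setup described above, while the branches, the hostname, and firefox as the fast-moving package are illustrative assumptions):

    # flake.nix -- minimal sketch of the two-input setup
    {
      inputs = {
        # base system: pinned to a stable branch, updated rarely
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";
        # fresher package set for things like the web browser
        nixpkgs-update.url = "github:NixOS/nixpkgs/nixos-unstable";
      };

      outputs = { self, nixpkgs, nixpkgs-update }: {
        nixosConfigurations.mymachine = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            # ... the rest of the base system config, built from `nixpkgs`
            ({ pkgs, ... }:
              let
                pkgs-update = import nixpkgs-update { system = "x86_64-linux"; };
              in {
                # only the browser is taken from the fresher input
                environment.systemPackages = [ pkgs-update.firefox ];
              })
          ];
        };
      };
    }

With this split, `nix flake lock --update-input nixpkgs-update` refreshes only the fast-moving input, leaving the pinned base system untouched.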

[–] yuu@group.lt 11 points 2 years ago* (last edited 2 years ago) (2 children)

When I was packaging Flatpaks, the greatest downside was

No built in package manager

There is a repo with shared dependencies, but it covers very few. So you need to package all the dependencies yourself... So I personally am not interested in packaging for Flatpak other than on very rare occasions... Nix and Guix are definitely better solutions (except for the isolation aspect, which they do not provide as a built-in feature; you need to set it up manually), and one can use them on many distros; Nix even on macOS!

[–] yuu@group.lt 10 points 2 years ago (2 children)

Some of them will detect if you are using virtualization. For example, http://safeexambrowser.org/ by ETH Zurich.

Ironically enough, it is free software: https://github.com/SafeExamBrowser

 

The nature of an ultra-faint galaxy in the cosmic Dark Ages seen with JWST https://arxiv.org/abs/2210.15639

[–] yuu@group.lt 8 points 2 years ago* (last edited 2 years ago)

their work essentially go in the trash

They probably learned a lot in the process; that is the most important thing for them, after all. But relying on an API is risky, so always go with HTML scraping. The frontends are super useful for finding information that is already there without accessing the actual website. Always use Lemmy here for everything else.

 

I suppose it only makes sense to raise awareness of the benefits of the freely licensed software and services of the fediverse over the dangerous and unethical proprietary services in existence, such as Reddit, which is now going to IPO. It happened with Twitter->Mastodon; it can happen with Reddit->Lemmy as well.

I suppose as well that the users most likely to be open to trying it would be free software and free culture users. Besides that, it needs an effort on content creation, and content creators, to make it an attractive place.

What are your thoughts? What have the efforts been so far? What are the challenges? Is it really so hard to make people migrate?

 

This project aims to provide nightly builds of all official Rust mdBooks in EPUB format. It was born out of the difficulty I encountered, when starting my Rust apprenticeship, in finding recent ebook versions of the official documentation.

If you encounter any issues, have any suggestions, or would like to improve this site and/or its content, please go to https://github.com/dieterplex/rust-ebookshelf/ and file an issue or create a pull request.

 

cross-posted from !softwareengineering@group.lt: https://group.lt/post/46385

Adopting DevOps practices is nowadays a recurring task in the industry. DevOps is a set of practices intended to reduce the friction between software development (Dev) and IT operations (Ops), resulting in higher quality software and a shorter development lifecycle. Even though many resources talk about DevOps practices, they are often inconsistent with each other about which practices are best. Furthermore, they lack the detail and structure that beginners to the DevOps field need to understand them quickly.

In order to tackle this issue, this paper proposes four foundational DevOps patterns: Version Control Everything, Continuous Integration, Deployment Automation, and Monitoring. The patterns are detailed and structured enough to be easily reused by practitioners, yet flexible enough to accommodate the different needs and quirks that might arise from their actual usage context. Furthermore, the patterns are tuned to the DevOps principle of Continuous Improvement by containing metrics, so that practitioners can improve their pattern implementations.


The article identifies, but does not describe in detail, 2 other patterns in addition to the four above (so 6 in total):

  • Cloud Infrastructure, which includes cloud computing, scaling, infrastructure as code, ...
  • Pipeline, "important for implementing Deployment Automation and Continuous Integration, and segregating it from the others allows us to make the solutions of these patterns easier to use, namely in contexts where a pipeline does not need to be present."

Overview of the pattern candidates and their relation

The paper is interesting for the structure it uses to describe the patterns:

  • Name: An evocative name for the pattern.
  • Context: Contains the context for the pattern providing a background for the problem.
  • Problem: A question representing the problem that the pattern intends to solve.
  • Forces: A list of forces that the solution must balance out.
  • Solution: A detailed description of the solution for our pattern’s problem.
  • Consequences: The implications, advantages and trade-offs caused by using the pattern.
  • Related Patterns: Patterns which are connected somehow to the one being described.
  • Metrics: A set of metrics to measure the effectiveness of the pattern’s solution implementation.
 


cross-posted from: https://group.lt/post/46053

A group of astronomers poring over data from the James Webb Space Telescope (JWST) has glimpsed light from ionized helium in a distant galaxy, which could indicate the presence of the universe’s very first generation of stars.

These long-sought, inaptly named “Population III” stars would have been ginormous balls of hydrogen and helium sculpted from the universe’s primordial gas. Theorists started imagining these first fireballs in the 1970s, hypothesizing that, after short lifetimes, they exploded as supernovas, forging heavier elements and spewing them into the cosmos. That star stuff later gave rise to Population II stars more abundant in heavy elements, then even richer Population I stars like our sun, as well as planets, asteroids, comets and eventually life itself.

About 400,000 years after the Big Bang, electrons, protons and neutrons settled down enough to combine into hydrogen and helium atoms. As the temperature kept dropping, dark matter gradually clumped up, pulling the atoms with it. Inside the clumps, hydrogen and helium were squashed by gravity, condensing into enormous balls of gas until, once the balls were dense enough, nuclear fusion suddenly ignited in their centers. The first stars were born.

Walter Baade divided stars in our galaxy into types I and II in 1944. The former includes our sun and other metal-rich stars; the latter contains older stars made of lighter elements. The idea of Population III stars entered the literature decades later... Their heat or explosions could have reionized the universe

A color-composite NIRCam image of the RXJ2129 galaxy cluster.

More information:

 

cross-posted from: https://group.lt/post/44860

Developers across government and industry should commit to using memory safe languages for new products and tools, and identify the most critical libraries and packages to shift to memory safe languages, according to a study from Consumer Reports.

The US nonprofit, which is known for testing consumer products, asked what steps can be taken to help usher in "memory safe" languages, like Rust, over options such as C and C++. Consumer Reports said it wanted to address "industry-wide threats that cannot be solved through user behavior or even consumer choice" and it identified "memory unsafety" as one such issue. 

The report, Future of Memory Safety, looks at a range of issues, including challenges in building memory safe language adoption within universities, levels of distrust of memory safe languages, introducing memory safe languages to code bases written in other languages, and incentives and public accountability.

More information:

 

cross-posted from c/softwareengineering@group.lt: https://group.lt/post/44632

This kind of scaling issue is new to Codeberg (a nonprofit free software project), but not to the world. All projects on earth likely went through this at a certain point or will experience it in the future.

When people like me talk about scaling... It's about increasing computing power, distributed storage, replicated databases and so on. There are all kinds of technology available to solve scaling issues. So why, damn, is Codeberg still having performance issues from time to time?

...we face the "worst" kind of scaling issue, in my perception: the kind you don't see coming (e.g. because the software gets slower day by day, or because you watch the storage pool fill up). Instead, it appears out of the blue.

The hardest scaling issue is: scaling human power.

Configuration, Investigation, Maintenance, User Support, Communication – all require some effort, and it's not easy to automate. In many cases, automation would consume even more human resources to set up than we have.

There are no paid night shifts, not even payment at all. Still, people have become used to the always-available guarantees, and demand the same from us: Occasional slowness in the evening of the CET timezone? Unbearable!

I do understand the demand. We definitely aim for a better service than we sometimes provide. However, sometimes, the frustration of angry social-media-guys carries me away...

There are two primary blockers that prevent scaling human resources. The first one is trust. Because we can't yet afford to hire employees who work on tasks for a defined amount of time, work naturally has to be distributed over many volunteers with limited time commitment... The second problem is in part technical. Unlike major players, which have nearly unlimited resources available to meet high demand, scaling Codeberg's systems...

TLDR: Codeberg has sustainability issues with scaling because it is a nonprofit with very limited resources, mainly human resources, in the face of high demand. Unpaid volunteers do all the work. So it needs more people working as volunteers, and it needs more money.

[–] yuu@group.lt 1 points 2 years ago* (last edited 2 years ago)

Yes. Merge both or redirect one to the other. It seems atemu is not active here, but I think I have seen someone by that name in some official Nix channel, like Matrix, Discourse, or the nixpkgs repository. You could ask the Lemmy admin as well.
