loveknight

joined 6 months ago
[–] loveknight@programming.dev 4 points 4 months ago

"(...) Dr. Stallman notes that he cannot comment much about technical aspects of Rust, but he remains concerned (for a year already) about the trademark aspects. He is still receiving no clarification or assurances on the matter. Previously he suggested forking it and calling it something like "crust" (in a talk or a session he did with several Brazilian hackers). " (via)

[–] loveknight@programming.dev 2 points 4 months ago* (last edited 4 months ago)

How will we stave off ecosystem takeover if not by taking its early signs seriously? At the start of every case of "Stallman Was Right" was a lot of presumption that, in the eyes of many, did not amount to a solid conclusion.

[–] loveknight@programming.dev 26 points 4 months ago

They really want people to stop talking about the Epstein files, huh?

[–] loveknight@programming.dev 8 points 4 months ago* (last edited 4 months ago)

Agree on everything. (As for the off-putting statements about 'Rust people': since the article was published on March 19, I wonder if much of it, revolving around what the author saw as indications of authoritarianism, came from heavy disquiet over authoritarianism recently gaining hold of the White House. I'd even consider it likely that people who post on Techrights have an above-average sensitivity to this kind of thing. It could be that the author has since arrived at a more differentiated and just view. Of note, since the time of his writing, the Rust project has remedied things he criticized about their website.)

[–] loveknight@programming.dev 18 points 4 months ago

We've been warned. (And unsurprisingly, Roy Schestowitz is being bombarded by Microsofters with a chain of SLAPP suits.)

 

cross-posted from: https://programming.dev/post/35495679

Earlier post version: image/text.

From another article referenced there:

The maintainers of the Ubuntu Linux distribution are now rewriting GNU Coreutils in Rust. Instead of using the GPLv3 license, which is designed to make sure that the freedoms and rights of the user of the program are preserved and always respected over everything else, the new version is going to be released using the very permissible or "permissive" (non-reciprocal) MIT license, which allows creating proprietary closed-source forks of the program.

There will surely be small incompatibilities - either intentional or accidental - between the Rust rewrite of coreutils and the GNU/C version. If the Rust version becomes popular - and it probably will, if Ubuntu starts using it - the Rust people will start pushing their own versions of higher level programs that are only compatible with the Rust version of coreutils. They will most probably also spam commits to already existing programs making them incompatible with the GNU/C version of coreutils. That way either everyone will be forced into using the MIT-licensed Rust version of coreutils, or the Linux userland becomes even more broken than it already is because now we have again two incompatible sets of runtime functions that conflict with one another. Either way, both outcomes benefit the corporations that produce proprietary software.

(Source – which does contain some more-than-problematic language outside of these passages, compare the valid objections raised by others in the cross-posts.)

Compare also how leaders of Canonical/Ubuntu have ties to Microsoft, and how the Canonical employee who leads the push to rewrite coreutils as non-GPL-licensed Rust software has spent years working for the British Army, where he "Architected and built multiple high-end bespoke Electronic Surveillance capabilities", by his own proud admission.

 


[–] loveknight@programming.dev 20 points 4 months ago* (last edited 4 months ago) (1 children)

It isn't a question of "How long are they supposed to support it for"; it's a matter of "Don't artificially break things".

As to Linux distro EOLs, they're bad examples for several reasons:

    1. Linux distros are being provided to us for free – never look a gift horse in the mouth.
    2. Linux distro EOLs are generally a very different beast than a Windows EOL: they change your user experience and may break some beloved software, but they generally don't make core hardware components unusable, let alone entire computers.
    3. When the Linux kernel does discontinue support for some very old hardware, we still have the source code of the last version available and are free to build some continuation. When your Windows updates end, you're left with nothing. And that's not just a theoretical option (which, however, is important enough in itself!): only in the case of 35-year-old hardware is it unlikely that people would actually do that work (on the kernel and all the relevant higher-level software). If – by contrast – the Linux kernel team were, for no good reason, to stop supporting hardware that's a mere 10 years old, you betcha there would be people starting work to fill the void (starting with current kernel devs who don't agree with that decision). Why? Because that's what the Linux community is doing right now and has been doing for decades – keeping up support for hardware way older than 10 years.
    4. Linux developers are credible when they say that a decision to drop support for some old thing is made because continuation would be too much work. Sure, for Windows 10, too, economic unfeasibility of further maintenance might have been the reason for discontinuing it. However, over the course of years and decades, Microsoft has given us countless well-documented reasons to suspect that their decision here is not because they have, to their own displeasure, concluded that the burden of continued support has become too heavy, but because they've spotted some new way to make money and/or reinforce their market dominance in various segments, to which people's ability to stick with their current systems is an impediment. Since supporting people without a TPM 2.0 on their computers is extremely unlikely to require much additional effort on Microsoft's side, this is all the more likely to be the case, and that's what the plaintiff's claim is.
[–] loveknight@programming.dev 1 points 4 months ago (1 children)

"The only requirement is that you share your progress and log your hours." So participants are free to choose how they log their hours?

 

It's currently in its third edition, published November 2024.

ISBN-10: 0-13-817218-8

ISBN-13: 978-0-13-817218-3

I discovered it (in its second edition) in my local library just yesterday. Even what little I've read so far has significantly improved my understanding, e.g. about decorators.

The testimonials for second edition are really something:

“I have been recommending this book enthusiastically since the first edition appeared in 2015. This new edition, updated and expanded for Python 3, is a treasure trove of practical Python programming wisdom that can benefit programmers of all experience levels.”

—Wes McKinney, Creator of Python Pandas project, Director of Ursa Labs

“If you’re coming from another language, this is your definitive guide to taking full advantage of the unique features Python has to offer. I’ve been working with Python for nearly twenty years and I still learned a bunch of useful tricks, especially around newer features introduced by Python 3. Effective Python is crammed with actionable advice, and really helps define what our community means when they talk about Pythonic code.”

—Simon Willison, Co-creator of Django

“Now that Python 3 has finally become the standard version of Python, it’s already gone through eight minor releases and a lot of new features have been added throughout. Brett Slatkin returns with a second edition of Effective Python with a huge new list of Python idioms and straightforward recommendations, catching up with everything that’s introduced in version 3 all the way through 3.8 that we’ll all want to use as we finally leave Python 2 behind. Early sections lay out an enormous list of tips regarding new Python 3 syntaxes and concepts like string and byte objects, f-strings, assignment expressions (and their special nickname you might not know), and catch-all unpacking of tuples. Later sections take on bigger subjects, all of which are packed with things I either didn’t know or which I’m always trying to teach to others, including ‘Metaclasses and Attributes’ (good advice includes ‘Prefer Class Decorators over Metaclasses’ and also introduces a new magic method ‘__init_subclass__()’ I wasn’t familiar with), ‘Concurrency’ (favorite advice: ‘Use Threads for Blocking I/O, but not Parallelism,’ but it also covers asyncio and coroutines correctly) and ‘Robustness and Performance’ (advice given: ‘Profile before Optimizing’). It’s a joy to go through each section as everything I read is terrific best practice information smartly stated, and I’m considering quoting from this book in the future as it has such great advice all throughout. This is the definite winner for the ‘if you only read one Python book this year...’ contest.”

—Mike Bayer, Creator of SQLAlchemy

More testimonials are available under the link above.

Book website: https://effectivepython.com/ (Don't buy from Amazon, of course.)

If you're like me and prefer printed books, look up the ISBN at euro-book (which also offers portals for Brazil, Mexico, and the USA) to find any affordable used copies.

 

This is a little tutorial that I found in my search to learn how to use getopt (mind: not getopts, which is a completely different thing). I want to share it here because I find it refreshingly to the point. Just the main code block already tells almost the whole story:

#!/bin/bash
# Set some default values:
ALPHA=unset
BETA=unset
CHARLIE=unset
DELTA=unset

usage()
{
  echo "Usage: alphabet [ -a | --alpha ] [ -b | --beta ]
                        [ -c | --charlie CHARLIE ]
                        [ -d | --delta   DELTA   ] filename(s)"
  exit 2
}

PARSED_ARGUMENTS=$(getopt -a -n alphabet -o abc:d: --long alpha,beta,charlie:,delta: -- "$@")
VALID_ARGUMENTS=$?
if [ "$VALID_ARGUMENTS" != "0" ]; then
  usage
fi

echo "PARSED_ARGUMENTS is $PARSED_ARGUMENTS"
eval set -- "$PARSED_ARGUMENTS"
while :
do
  case "$1" in
    -a | --alpha)   ALPHA=1      ; shift   ;;
    -b | --beta)    BETA=1       ; shift   ;;
    -c | --charlie) CHARLIE="$2" ; shift 2 ;;
    -d | --delta)   DELTA="$2"   ; shift 2 ;;
    # -- means the end of the arguments; drop this, and break out of the while loop
    --) shift; break ;;
    # If invalid options were passed, then getopt should have reported an error,
    # which we checked as VALID_ARGUMENTS when getopt was called...
    *) echo "Unexpected option: $1 - this should not happen."
       usage ;;
  esac
done

echo "ALPHA   : $ALPHA"
echo "BETA    : $BETA"
echo "CHARLIE : $CHARLIE"
echo "DELTA   : $DELTA"
echo "Parameters remaining are: $@"

Note: the tutorial's original code inadvertently mixes beta and bravo (its getopt line reads --long alpha,bravo,charlie:,delta: while the case statement handles --beta); that's corrected above.
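If it helps to see getopt(1) in isolation: this small sketch (assuming the util-linux getopt, as the tutorial does; the option set mirrors the tutorial's, and the sample arguments are made up) shows how a mixed command line is normalized before the while loop consumes it.

```shell
#!/bin/bash
# Normalize a sample command line with util-linux getopt,
# using the same option set as the tutorial script.
parsed=$(getopt -n demo -o abc:d: --long alpha,beta,charlie:,delta: -- --alpha -c foo file1)

# getopt emits a quoted, reordered argument list; load it into $1, $2, ...
eval set -- "$parsed"

echo "first:  $1"   # --alpha
echo "second: $2"   # -c
echo "third:  $3"   # foo
echo "rest:   $4 $5"  # -- file1
```

The key point is that getopt only rewrites the argument list; the eval set -- step is what makes the normalized list available as positional parameters for the case loop.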

[–] loveknight@programming.dev 1 points 5 months ago* (last edited 5 months ago) (1 children)

Perfect, thanks for the explanation. Indeed, I found the same solution via StackOverflow at about the same time.

 

Edit: I figured it out: the solution is to use the process substitution operator:

join <(echo "$var1") <(echo "$var2")

See also the comment by @notabot@piefed.social below, this StackOverflow comment, and the GNU documentation.
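For completeness, a runnable toy version of that one-liner (the variable contents here are made up):

```shell
#!/bin/bash
# Join the contents of two shell variables on their first field,
# without any temporary files, via process substitution.
var1='1 apple
2 banana'
var2='1 red
2 yellow'

join <(printf '%s\n' "$var1") <(printf '%s\n' "$var2")
# prints:
# 1 apple red
# 2 banana yellow
```

Note that process substitution is a bash/zsh feature, not plain POSIX sh.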


It's comparatively straightforward to use the content of one variable in join by using - to tell it to use standard input for that file:

echo "$variable" | join - anotherfile

However, is there a way to serve both input 'files' from variables, avoiding temporary files on the disk?

It seems like the easiest way would be to create and mount a temporary partition via tmpfs,

mount -t tmpfs -o size=50m tmpfs /mountpoint

and just create the temporary files in there, cleaning everything up afterwards.

So far I've also attempted here-documents, but apparently this too can only provide standard input, so that the other input still has to be served from a file.

Maybe one can also try doing it via named pipes (mkfifo), but I fear this could introduce lots of potential for errors.
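For what it's worth, here is a minimal sketch of the FIFO route (directory and sample data are made up). It does work, but the writers have to be backgrounded, since opening a FIFO for writing blocks until a reader opens the other end:

```shell
#!/bin/bash
# Feed join from two named pipes instead of regular files.
tmpdir=$(mktemp -d)
mkfifo "$tmpdir/a" "$tmpdir/b"

# Writers must run in the background: each printf blocks on opening
# its FIFO until join opens that FIFO for reading.
printf '1 apple\n2 banana\n' > "$tmpdir/a" &
printf '1 red\n2 yellow\n'   > "$tmpdir/b" &

join "$tmpdir/a" "$tmpdir/b"

wait
rm -r "$tmpdir"
```

Compared to process substitution, this is clearly more ceremony (setup, teardown, backgrounding), which is the error-prone part.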

[–] loveknight@programming.dev 1 points 5 months ago

Ah that's good to know about zsh.

Sorry regarding the second code block; it does indeed work as intended, and quite elegantly.

[–] loveknight@programming.dev 1 points 5 months ago* (last edited 5 months ago) (2 children)

For the first code snippet to run correctly, $list would need to be put in double quotes: echo "$list" | ... , because otherwise echo will conflate the various lines into a single line.

The for loop approach is indeed quite readable. ~~To make it solve the original task (which here means that it should also assign a number just smaller than $threshold to $tail, if $threshold is not itself contained in $list), one will have to do something in the spirit of what @Ephera@lemmy.ml and I describe in these comments.~~
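To see the quoting difference in isolation (variable name and contents made up):

```shell
#!/bin/bash
list='140
141
145'

# Unquoted: word splitting turns the newlines into single spaces.
echo $list        # prints: 140 141 145

# Quoted: the newlines survive.
echo "$list"      # prints the three values on separate lines
```

This matters for any line-oriented filter downstream of the pipe, which is why the double quotes are not optional here.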

 

Let me show you what I mean by giving an example:

# Assume we have this list of increasing numbers
140
141
145
180
190
...

# If we pick 150 as threshold, the output should consist of 145 and all larger values:
145
180
190
...

# In the edge case where 150 itself is in the list, the output should start with 150:
150
180
190
...

I guess one can always hack something together in awk with one or two track-keeper variables and a bit of control logic. However, is there a nicer way to do this, by using some nifty combination of simpler filters or awk functionalities?

One way would be to search for the line number n of the first entry larger than the threshold, and then print all lines starting with n-1. What command would be best suited for that?

Still, I'm also wondering: Can awk or any other standard tool do something like "for deciding whether to print this line, look at the next line"?

(In my use case, the list is short, so performance is not an issue, but maybe let's pretend that it were. Also, in my use case, all entries are unique, but feel free to work without this assumption.)
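One possible shape of the "track-keeper variable" approach mentioned above, for sorted input with unique entries (the variable names prev/found are my own):

```shell
#!/bin/bash
# Print every value >= threshold, preceded by the last value below it
# (unless the threshold itself occurs in the list).
printf '%s\n' 140 141 145 180 190 | awk -v t=150 '
  !found && $1 >= t {                       # first line at or past the threshold
    if ($1 > t && prev != "") print prev    # back up one line if t itself is absent
    found = 1
  }
  found { print; next }
  { prev = $0 }'
# prints: 145, 180, 190 (one per line)
```

In effect this is the "look at the next line" logic run in reverse: instead of peeking ahead, awk remembers the previous line and emits it retroactively once the threshold is crossed.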

[–] loveknight@programming.dev 1 points 5 months ago

Thanks, that's good to know, I'll see how well I can adapt my workflow to this. (The reason for using Konsole tabs so far is the easy switching via Alt+[number], but I suppose using Helix's integrated multi-document system should offer other advantages (e.g. regarding registers) that could outweigh this by far.)

 

When I'm editing with helix, I often have multiple instances of it running, one for each file, in different terminals (more precisely: in different tabs of my terminal emulator, Konsole). Currently, the title of these tabs reads just "Directoryname: helix". It would be really helpful if the titles included the current filename, so that I could see which tab has which file opened. Is there a way to do this?

 

ISBN: 9780596005955

This book is extremely readable and gives a very good introduction to the various standard Unix shell commands (grep, sed, awk, tr, sort, to name but a few) and how to tie them together to do useful things. It's very suitable if you have some experience with the command line at the level of individual commands but now want to see how to construct more interesting pipelines and scripts. It includes an introduction to regular expressions. The fact that the book is already 20 years old certainly means that some explanations and approaches are outdated, but since shell programming is at its core about text processing, almost all contents of the book are still highly relevant today.

 

Things I would like every young web engineer to learn:

  • anything you can do in CSS + HTML, you should do in CSS + HTML
  • framework du jour is not a platform, it's a high-interest loan against your future capacity. The platform is the platform
  • understanding the memory hierarchy always matters
  • client-side isn't easier than the server, and "generalists" usually suck at client-side. Mind the (packet) gap
  • managers who are not technical are not useful
  • put users first, always

Second-order things to learn:

  • the way browsers work isn't static, but it also isn't changing that fast. Learn as much as you can and update every few years; particularly about networking and the rendering loop.
  • JS is the slowest way to do anything on the web. Never let it become the way you do everything.
  • a11y isn't nice-to-have, it's the job
  • shipping fast almost never matters as much as quality, & there are simple heuristics you can use to understand the difference
[–] loveknight@programming.dev 2 points 6 months ago

New to this instance, but for me too it is comparatively sluggish since I started using it yesterday.
