chkno

joined 2 years ago
[–] chkno@lemmy.ml 4 points 2 years ago

I ran Gentoo for ~15 years and then switched to NixOS ~3 years ago. The last straw was Gentoo bug 676264, where I submitted version-bump & build-fix patches to fix security issues and was ignored for three months.

In Gentoo, glsa-check only tells you about security vulnerabilities after there's a portage update that would resolve them. I.e., for those three months, all Gentoo users had a ghostscript with widely-known vulnerabilities, and glsa-check was silent about it. I'm not cherry-picking this example: it was one of my first attempts to be proactive about security updates, and I found that the process is not fit for purpose. Most fixed vulnerabilities don't even get GLSA advisories, since advisories have to be created manually. A while back, I had made a 'gentle update' script that just updated the packages glsa-check complained about. It turns out that's not very useful.

Contrast this with vulnix, a Nix/NixOS tool that fetches the vulnerability database directly from nvd.nist.gov (with polite local caching) and checks locally installed software against it. You don't need the Nix project to do anything for this to Just Work; it's always comprehensive. I made a NixOS upgrade script that uses vulnix to show me a diff of security issues as it does a channel update. Example output:

commit ...
Author: <me>
Date:   Sat Jun 17 2023

    New pins for security fixes

    -9.8    CVE-2023-34152  imagemagick
    -7.8    CVE-2023-34153  imagemagick
    -7.5    CVE-2023-32067  c-ares
    -7.5    CVE-2023-28319  curl
    -7.5    CVE-2023-2650   openssl
    -7.5    CVE-2023-2617   opencv
    -7.5    CVE-2023-0464   openssl
    -6.5    CVE-2023-31147  c-ares
    -6.5    CVE-2023-31124  c-ares
    -6.5    CVE-2023-1972   binutils
    -6.4    CVE-2023-31130  c-ares
    -5.9    CVE-2023-32570  dav1d
    -5.9    CVE-2023-28321  curl
    -5.9    CVE-2023-28320  curl
    -5.9    CVE-2023-1255   openssl
    -5.5    CVE-2023-34151  imagemagick
    -5.5    CVE-2023-32324  cups
    -5.3    CVE-2023-0466   openssl
    -5.3    CVE-2023-0465   openssl
    -3.7    CVE-2023-28322  curl

diff --git a/channels b/channels
--- a/channels
+++ b/channels
@@ -8,23 +8,23 @@
 [nixos]
 git_repo = https://github.com/NixOS/nixpkgs.git
 git_ref = release-23.05
-git_revision = 3a70dd92993182f8e514700ccf5b1ae9fc8a3b8d
-release_name = nixos-23.05.419.3a70dd92993
-tarball_url = https://releases.nixos.org/nixos/23.05/nixos-23.05.419.3a70dd92993/nixexprs.tar.xz
-tarball_sha256 = 1e3a214cb6b0a221b3fc0f0315bc5fcc981e69fec9cd5d8a9db847c2fae27907
+git_revision = c7ff1b9b95620ce8728c0d7bd501c458e6da9e04
+release_name = nixos-23.05.1092.c7ff1b9b956
+tarball_url = https://releases.nixos.org/nixos/23.05/nixos-23.05.1092.c7ff1b9b956/nixexprs.tar.xz
+tarball_sha256 = 8b32a316eb08c567aa93b6b0e1622b1cc29504bc068e5b1c3af8a9b81dafcd12
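
The core of the script is just: scan with vulnix, update and rebuild, scan again, and diff the two CVE sets. A minimal sketch of that idea (not the actual script; it assumes vulnix's --system scan and a plain nix-channel / nixos-rebuild workflow):

#!/usr/bin/env python3
# Minimal sketch: compare the CVEs vulnix reports for the running system
# before and after a channel update + rebuild.
import re
import subprocess


def system_cves() -> set[str]:
    # vulnix exits non-zero when it finds vulnerabilities, so no check=True here.
    out = subprocess.run(['vulnix', '--system'],
                         capture_output=True, text=True).stdout
    return set(re.findall(r'CVE-\d{4}-\d+', out))


before = system_cves()
subprocess.run(['nix-channel', '--update'], check=True)
subprocess.run(['nixos-rebuild', 'switch'], check=True)
after = system_cves()

for cve in sorted(before - after):
    print('-', cve)   # no longer reported: fixed by the update
for cve in sorted(after - before):
    print('+', cve)   # newly reported after the update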

[–] chkno@lemmy.ml 1 points 2 years ago* (last edited 2 years ago)

Pocket Ref & pocket US Constitution.

[–] chkno@lemmy.ml 1 points 2 years ago* (last edited 2 years ago) (1 children)

The benefit of using something fancier than rsync is that you get a point-in-time recovery capability.

For example, if you switch the enclosures weekly, rsync gives you two recovery options: restore to yesterday's state (from the enclosure not in the safe) and restore to a state from 2-7 days ago (from the one in the safe, depending on when it went into the safe).

Daily incremental backups with a fancy tool like dar let you restore to any previous state. Instead of two options, you have hundreds of options, one for each day. This is useful when you mess up something in the archive (eg: accidentally delete or overwrite it) and don't notice right away: it appeared, was ok for a while, then it was bad/gone, and that bad/gone state was backed up. It's nice to be able to jump back in time to the brief it-was-ok state & pluck the content back out.
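
Getting back to a particular day is mechanical: extract the full archive, then replay each incremental up to that date, in order. A rough sketch using dar's -x / -R / -w options (the paths, the naming scheme, and the chosen date are made up):

#!/usr/bin/env python3
# Rough sketch of a point-in-time restore with dar: extract the full
# archive, then each incremental up to the chosen date, in order.
# The paths and the <something>-YYYY-MM-DD naming scheme are made up.
import pathlib
import subprocess

BACKUPS = pathlib.Path('/mnt/enclosure/backups')   # hypothetical location
TARGET = '/restore'                                 # restore destination
UP_TO = '2023-07-08'                                # the day to go back to

# dar slices are named <base>.<n>.dar; collect unique base names, by date.
bases = sorted({p.name.rsplit('.', 2)[0] for p in BACKUPS.glob('*.dar')},
               key=lambda b: b[-10:])               # trailing YYYY-MM-DD

for base in bases:
    if base[-10:] > UP_TO:
        break
    # -x extract, -R restore root, -w overwrite without prompting
    subprocess.run(['dar', '-x', str(BACKUPS / base), '-R', TARGET, '-w'],
                   check=True)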

If you have other protections against accidental overwrite (like you only back up git repos that already capture the full history, and you git fsck them regularly) — then the fancier tools don't provide much benefit.

I just assumed that you'd want this capability because many folks do and it's fairly easy to get with modern tools, but if rsync is working for you, no need to change.

[–] chkno@lemmy.ml 1 points 2 years ago* (last edited 2 years ago) (3 children)

Sounds fine?

Yes: Treat the two enclosures independently and symmetrically, such that you can fully restore from either one (the only difference would be that the one in the safe is slightly stale) and the ongoing upkeep is just:

  1. Think: "Oh, it's been a while since I did a swap" (or use a calendar or something)
  2. Unplug the drive at the computer.
  3. Carry it to the safe.
  4. Open the safe.
  5. Take the drive in the safe out.
  6. Put the other drive in the safe.
  7. Close the safe.
  8. Carry the other drive to the computer.
  9. Plug it in.
  10. (Maybe: authenticate for the drive encryption if you use normal full-disk encryption & don't cache the credential)

If I assume a normal incremental backup setup, both enclosures would have a full backup and a pile of incremental backups. For example, if swapped every three days:

Enclosure A        Enclosure B
-----------------  ---------------
a-full-2023-07-01
a-incr-2023-07-02
a-incr-2023-07-03
                   b-full-2023-07-04
                   b-incr-2023-07-05
                   b-incr-2023-07-06
a-incr-2023-07-07
a-incr-2023-07-08
a-incr-2023-07-09
                   b-incr-2023-07-10
                   b-incr-2023-07-11
                   b-incr-2023-07-12
a-incr-2023-07-13
....

The thing taking the backups need not even detect or care which enclosure is plugged in -- it just uses the last incremental on that enclosure to determine what's changed & needs to be included in the next incremental.

Nothing need care about the number or identity of enclosures: You could add a third if, for example, you found an offsite location you trust. Or when one of them eventually fails, you'd just start using a new one & everything would Just Work. Or, if you want to discard history (eg: to get back the storage space used by deleted files), you could just wipe one of them & let it automatically make a new full backup.

Are you asking for help with software? This could be as simple as dar and a shell script.
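
For example, the daily job only needs roughly this logic (a sketch in Python rather than shell; the mount point, source path, and naming scheme are made up):

#!/usr/bin/env python3
# Sketch of the daily backup job: back up onto whichever enclosure is
# mounted, taking a full backup if the enclosure is empty and an
# incremental against its newest archive otherwise.
import datetime
import pathlib
import subprocess

BACKUPS = pathlib.Path('/mnt/enclosure/backups')   # hypothetical mount point
SOURCE = '/home'                                    # hypothetical data to protect
today = datetime.date.today().isoformat()

slices = list(BACKUPS.glob('*.dar'))
if slices:
    # The newest archive on *this* enclosure becomes the reference (-A),
    # so swapping enclosures needs no special handling.
    newest = max(slices, key=lambda p: p.stat().st_mtime)
    reference = newest.name.rsplit('.', 2)[0]       # strip ".<slice>.dar"
    cmd = ['dar', '-c', str(BACKUPS / f'incr-{today}'), '-R', SOURCE,
           '-A', str(BACKUPS / reference)]
else:
    cmd = ['dar', '-c', str(BACKUPS / f'full-{today}'), '-R', SOURCE]

subprocess.run(cmd, check=True)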

My personal preference is to tell the enclosure to not try any fancy RAID stuff & just present all the drives directly to the host, and then let the host do the RAID stuff (with lvm or zfs or whatever), but I understand opinions differ. I like knowing I can always use any other enclosure or just plug the drives in directly if/when the enclosure dies.

I notice you didn't mention encryption, maybe because that's obvious these days? There's an interesting choice here, though: You can do normal full-disk encryption, or you could encrypt the archives individually.

Dar actually has an interesting feature here I haven't seen in any other backup tool: If you keep a small --aux file with the metadata needed for determining what will need to go in the next incremental, dar can encrypt the backup archives asymmetrically to a GPG key. This allows you to separate the capability of writing backups and the capability of reading backups. This is neat, but mostly unimportant because the backup is mostly just a copy of what's on the host. It comes into play only when accessing historical files that have been deleted on the host but are still recoverable from point-in-time restore from the incremental archives -- this becomes possible only with the private key, which is not used or needed by any of the backup automation, and so is not kept on the host.

(You could also, of course, do both full-disk encryption and per-archive encryption if you want the neat separate-credential for deleted files trick and also don't want to leak metadata about when backups happen and how large the incremental archives are / how much changed.)

(If you don't full-disk-encrypt the enclosure & rely only on the per-archive encryption, you'd want to keep the small --aux files on the host, not on the enclosure. The automation would need to keep one --aux file per enclosure, & for this narrow case, it would need to identify the enclosures to make sure it uses that enclosure's --aux file when generating the incremental archive.)
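
To make that last arrangement concrete, here's a rough sketch under made-up paths and naming: each run writes dar's on-fly isolated catalogue (-@) to a host-side directory named after the enclosure and uses the newest such catalogue as the -A reference next time, so the archives on the enclosure never need to be read back. The encryption options themselves are omitted; the point is just where the bookkeeping lives.

#!/usr/bin/env python3
# Sketch: keep the incremental bookkeeping (isolated catalogues) on the
# host, one directory per enclosure, so per-archive-encrypted backups on
# the enclosure never have to be read back by the automation.
# Paths, names, and the enclosure-identification scheme are all made up.
import datetime
import pathlib
import subprocess

enclosure = 'enclosure-a'                            # however you identify it
aux_dir = pathlib.Path('/var/lib/backup/aux') / enclosure   # on the host
backups = pathlib.Path('/mnt/enclosure/backups')
today = datetime.date.today().isoformat()
aux_dir.mkdir(parents=True, exist_ok=True)

cmd = ['dar', '-R', '/home',
       '-c', str(backups / f'arch-{today}'),
       '-@', str(aux_dir / f'cat-{today}')]          # isolated catalogue -> host
cats = sorted(aux_dir.glob('cat-*.dar'))             # previous catalogues here
if cats:
    # Incremental against the newest catalogue for this enclosure.
    cmd += ['-A', str(aux_dir / cats[-1].name.rsplit('.', 2)[0])]
# With no -A (first run for this enclosure), dar makes a full backup.
subprocess.run(cmd, check=True)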

[–] chkno@lemmy.ml 1 points 2 years ago* (last edited 2 years ago) (1 children)

If you can get state boundary image data coincident with height map data (such as by taking two screenshots on the USGS website, one with the heightmap data opaque and one with it translucent, without panning or zooming in between), you could use a normal image editor (eg: GIMP) to mask the height map data so that it's zero (black) outside the state boundary and at least slightly gray inside the state boundary. With OpenSCAD's surface, this would give you a rectangle that's flat outside the state and at some minimum height inside the state. You could then use a single difference or intersection to cut across the model by height, trimming off the flat rectangular base.

(I.e., trimming the image seems much easier than trimming an STL, & would totally work.)

[–] chkno@lemmy.ml 1 points 2 years ago (1 children)

As a bed scraper, I use a putty knife that I've sharpened on one side (chisel grind, #4).

Before printing too-sticky materials (like TPU on my PEI bed), I put down a layer of glue stick. This is sticky enough for successful prints, but easily removed at the end of the print.

[–] chkno@lemmy.ml 1 points 2 years ago

Have the thing that uses obj take it as a normal constructor argument, and have a convenience wrapper that supplies it:

from contextlib import contextmanager


@contextmanager
def context():
    # Stand-in for whatever context manager actually produces the object.
    x = ['hi']
    yield x
    x[0] = 'there'   # cleanup runs when the with-block exits


class ObjUser:
    # Takes the object as a plain constructor argument.
    def __init__(self, obj):
        self.obj = obj

    def use_obj(self):
        print(self.obj)


@contextmanager
def MakeObjUser():
    # Convenience wrapper: enters the context and supplies the object.
    with context() as obj:
        yield ObjUser(obj)


with MakeObjUser() as y:
    y.use_obj()
[–] chkno@lemmy.ml 1 points 2 years ago* (last edited 2 years ago)
  1. Paper tokens: Produce 100 billion authentication tokens (could be passwords, could be private keys of signed certificates), print them on thick paper, fold them up, publicly stir them in giant vats at their central manufacturing location before distributing them to show that no record is being kept of where each token is being geographically routed to, and then have them freely available in giant buckets at any establishment that already does age-checks for any other reason (bars, grocery stores that sell alcohol or tobacco, etc.). The customer does the usual age-verification ritual, then reaches into the bucket and themselves randomly selects any reasonable number of paper tokens to take with them. It should be obvious to all parties that no record is being kept of which human took which token.

  2. Require these tokens to be used for something besides mature-content access. Maybe for filing your taxes, opening bank accounts, voting, or online alcohol / tobacco purchases. This way, people requesting these tokens do not divulge that they are mature-content consumers.
