kevincox

[–] kevincox@lemmy.ml 11 points 6 months ago

I switched to Immich recently and am very happy.

  1. Immich's face detection is much better and very rarely fails, especially for non-white faces. But even for white faces PhotoPrism regularly needed me to review the unmatched faces. I also needed to really turn up the "what is a face" threshold because otherwise it would miss a ton of clear faces. (Then it only missed some, but also had tons of false positives.) Immich, on the other hand, just works.
  2. Immich's UI is much nicer overall, with lots of small affordances. For example, the "view in timeline" menu item is worth the switch on its own. Also, good riddance to PhotoPrism's persistent and buggy selection. Someone must have worked really hard on implementing it, but it was just a bad idea.
  3. Immich has an app with uploading, and it lets you view local and uploaded photos in one interface, which is a huge UX win. I couldn't find a good Android app for uploading to PhotoPrism. You could set up import delays and such, but you would still regularly get partially uploaded files imported and have to clean them up manually.
  4. Immich's search by content is much better. For example searching for "cat with red and yellow ball" was useless on PhotoPrism, but I found tons of the results I was looking for on Immich.

The bad:

  1. There is currently terrible jank in the Immich app which makes videos unusable and everything else painful. Apparently this is due to an album sync process running on the main thread. They are working on it. I can't fathom how a few hundred albums causes this much lag, but 🤷 There is also even worse lag on the location view page, but at least that is just one page.
  2. The Immich app has far fewer features than the website. But the website works very well on mobile, so even just using the website (and the app for uploading) is better than PhotoPrism here. The fundamentals are good; it just needs more work.
  3. I liked PhotoPrism's advanced filters. They were very limited but at least they were there.
  4. Not being able to sort search results by date is a huge usability issue. I often know roughly when the photo I want to find was taken and being able to order by date would be hugely helpful.
  5. You have to eagerly transcode all videos; there is no way to clean up old transcodes and re-transcode on the fly. To be fair, the PhotoPrism story also wasn't great, because you had to wait for the full video to be transcoded before playback started, leading to a huge delay for videos longer than a few seconds, but at least I could save a few hundred gigs of disk space.

Honestly a lot of stuff in PhotoPrism feels like one developer has a weird workflow and they optimized it for that. Most of them are counter to what I actually want to do (like automatic title and description generation, or the review stuff, or auto quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case way better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)

[–] kevincox@lemmy.ml 18 points 6 months ago

Most Intel GPUs are great at transcoding: reliable, widely supported, and quite a bit of transcoding power for very little electrical power.

I think the main thing I would check is which formats are supported. If the other GPU supports newer formats like AV1, it may be worth it (if you want to store your videos in these more efficient formats, or if you have clients that can consume them and will appreciate the reduced bandwidth).

But overall I would say if you aren't having any problems no need to bother. The onboard graphics are simple and efficient.

[–] kevincox@lemmy.ml 2 points 6 months ago

Yes. As this is a workstation, memory use is highly variable: >95% of the time I would probably barely notice if I only had 32GiB, but at other times the extra capacity is a huge performance win. Sometimes I am compiling lots of stuff and 32 compilers running plus ample disk cache is very important; other times I am processing lots of data or running a few VMs.

It is a bit of a luxury. I think if I were on a tighter budget I would have gone for 64GiB. However, the price difference wasn't that much, and at least a handful of times I have been quite happy to have that capacity available. Worst case, everything just sits in disk cache after a warm-up, which is a small performance win on every small task.

[–] kevincox@lemmy.ml 2 points 6 months ago (2 children)

I have enough disk space.

Plus my /tmp is a ramdisk and sometimes I compile large things in there (Firefox) so it is nice to let it be flushed out to disk if there are more important uses for that RAM than holding a file that most likely won't be read again.

[–] kevincox@lemmy.ml 2 points 6 months ago

There are three parts to the whole push system.

  1. A push protocol. You get a URL and post a message to it. That message is E2EE and gets delivered to the application.
  2. A way to acquire that URL.
  3. A way to respond to those notifications.

My point is that 1 is the core, and it is already available across devices, including over Google's push notification system, and making custom push servers is very easy. It would make sense to keep that interface but provide alternatives for 2 and 3. That way browsers can use the JS API for 2 and 3, while other apps can use a different API. The push server and the app server can remain identical across browsers, apps and anything else. This provides compatibility with the currently reigning system, offers tiny shims for people who don't want to self-host, and still keeps the option to fully self-host as desired.
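To make part 1 concrete: delivering a message is nothing more than an HTTP POST to the endpoint URL the app handed you. A rough sketch (the endpoint URL and the $VAPID_JWT / $VAPID_PUB values are placeholders, and the body has to be encrypted against the subscription's keys per RFC 8291 before sending):

# Sketch of a WebPush delivery: POST an already-encrypted payload to the endpoint.
curl -X POST "https://push.example.net/wpush/some-opaque-endpoint-id" \
  -H "TTL: 3600" \
  -H "Content-Encoding: aes128gcm" \
  -H "Authorization: vapid t=$VAPID_JWT, k=$VAPID_PUB" \
  --data-binary @encrypted-message.bin

Any server that accepts that request and forwards the payload to the device can act as the push server, which is why keeping this interface makes self-hosting easy.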

[–] kevincox@lemmy.ml 3 points 6 months ago (4 children)
% free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi        15Gi        90Gi       523Mi        22Gi       110Gi
Swap:           63Gi          0B        63Gi

I'll use it eventually. Just gotta let the disk cache warm up.

[–] kevincox@lemmy.ml 1 points 6 months ago

> I don’t want the end executable to have to bundle these files and re-parse them each time it gets run.

No matter how you persist data you will need to re-parse it. The question is really just if the new format is more efficient to read than the old format. Some formats such as FlatBuffers and Cap'n Proto are designed to have very efficient loading processes.

(Well technically you could persist the process image to disk, but this tends to be much larger than serialized data would be and has issues such as defeating ASLR. This is very rarely done.)

Lots of people are talking about Pickle, but it isn't particularly fast. That being said, with Python you can't expect much to start with.

[–] kevincox@lemmy.ml 1 points 6 months ago

Must be because Factorio released 2.0 and the Space Age DLC recently.

[–] kevincox@lemmy.ml 2 points 6 months ago (2 children)

IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports E2EE (and in the browser requires it, so support is universal).

UnifiedPush would be better as a framework for WebPush providers plus a client API, while using the same protocol and backends as WebPush (how to get a WebPush endpoint is currently defined as a JS API in browsers, so that part would need to be adapted).

[–] kevincox@lemmy.ml 16 points 6 months ago (1 children)

Why are these TypeScript + JSX rather than just SVGs? It seems that the paths are defined as SVG but they are using some JavaScript framework to define the animations rather than just using SVG or CSS animations.

[–] kevincox@lemmy.ml 8 points 6 months ago (1 children)

Why WASM? It seems to me that the attack surface of WASM is negligible compared to JavaScript (and IIUC disabling JavaScript will also disable WASM).

Blocking third-party frames is definitely a good way to reduce your attack surface though. Ad embeds are often used to distribute exploits.

[–] kevincox@lemmy.ml 1 points 7 months ago

I paid for GPM for quite a while. I then started working at Google and beta tested YouTube Music from very early on and gave lots of feedback about how it sucked. When they shut down GPM I cancelled my YouTube Premium membership and installed an ad blocker. Not just YTM but so many things about YouTube were getting worse and worse and I couldn't find it in myself to keep paying for a service that kept removing features.

 

Is there any service that will speak LDAP but just respond with the local UNIX users?

Right now I have good management for local UNIX users, but every service wants to do its own auth. This means it is a pain: remembering different passwords, configuring passwords when setting up a new service, and so on.

I noticed that a lot of services support LDAP auth, but I don't want to make my UNIX user accounts depend on LDAP for simplicity. So I was wondering if there was some sort of shim that will talk the LDAP protocol but just do authentication against the regular user database (PAM).
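(For concreteness, what these services do over LDAP usually boils down to finding the user's DN and then performing a simple bind as that user with the supplied password, something like the sketch below; the shim would only need to accept or reject that bind by checking the password against PAM. The host and DN here are made up.)

# Sketch: the kind of simple bind a service performs to authenticate a user.
ldapwhoami -x -H ldap://localhost:389 \
  -D "uid=alice,ou=people,dc=home,dc=lan" -W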

The closest I have seen is the services.openldap.declarativeContents NixOS option which I can probably use by transforming my regular UNIX settings into an LDAP config at build time, but I was wondering if there was anything simpler.

(Related note: I really wish that services would let you specify the user via HTTP header, then I could just manage auth at the reverse-proxy without worrying about bugs in the service)

 
 

I'm reconsidering my terminal emulator and was curious what everyone was using.

 

cross-posted from: https://beehaw.org/post/551377

Recently my kernel started to panic every time I woke my monitors from sleep. This seemed to be a regression: it worked one day, then I received a kernel upgrade from upstream, and the next time I used my machine it would crash when I came back to it.

After being annoyed for a bit, I realized this was a great time to learn how to bisect the kernel with git, find the problem, and either report it upstream or patch it out of my kernel! I thought this would be useful to someone else in the future, so here we are.

Step #1: Clone the Kernel; I grabbed Linus' tree from https://github.com/torvalds/linux with git clone git@github.com:torvalds/linux.git

Step #2: Start a bisect.

If you're not familiar with a bisect, it's a process by which you tell git, "this commit was fine", and "this commit was broken", and it will help you test the commits in-between to find the one that introduced the problem.

You start this by running git bisect start, and then you provide a tag or commit ID for the good and the bad kernel with git bisect good ... and git bisect bad ....

I knew my issue didn't occur on the 5.15 kernel series, but did start with my NixOS upgrade to 6.1. But I didn't know precisely where, so I aimed a little broader... I figured an extra test or two would be better than missing the problem. 😬

git bisect start
git bisect good v5.15
git bisect bad master 

Step #3: Replace your kernel with that version

In an ideal world I would have been able to test this in a VM, but it was a graphics problem with my video card and connected monitors, so I went straight to testing on my desktop to make sure the reproduction was easy and accurate.

Testing a mid-release kernel with NixOS is pretty easy! All you have to do is override your kernel package, and NixOS will handle building it for you... here's an example from my bisect:

boot.kernelPackages = pkgs.linuxPackagesFor (pkgs.linux_6_2.override { # (#4) make sure this matches the major version of the kernel as well
  argsOverride = rec {
    src = pkgs.fetchFromGitHub {
      owner = "torvalds";
      repo = "linux";
      # (#1) -> put the bisect revision here
      rev = "7484a5bc153e81a1740c06ce037fd55b7638335c";
      # (#2) -> clear the sha; run a build, get the sha, populate the sha
      sha256 = "sha256-nr7CbJO6kQiJHJIh7vypDjmUJ5LA9v9VDz6ayzBh7nI=";
    };
    dontStrip = true;
    # (#3) `head Makefile` from the kernel and put the right version numbers here
    version = "6.2.0";
    modDirVersion = "6.2.0-rc2";
    # (#5) `nixos-rebuild boot`, reboot, test.
  };
});

Getting this defined requires a couple of intermediate steps:

Step #3.1 -- put the revision that git bisect asked me to test in (#1)
Step #3.2 -- clear out sha256
Step #3.3 -- run a nixos-rebuild boot
Step #3.4 -- grab the sha256 and put it into the sha256 field (#2)
Step #3.5 -- make sure the major version matches at (#3) and (#4)

Then run nixos-rebuild boot.

Step #4: Test!

Reboot into the new kernel, and test whatever is broken. For me the test protocol was simple: xset dpms force off to blank my screens, wait 30 seconds, and then wake them. If the kernel panicked, that commit was a fail.
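Each round of testing was roughly this (a sketch; I sometimes just woke the screens by hand instead of running the last command):

# Blank the screens, wait, then wake them. If the machine survives, mark the commit good.
xset dpms force off
sleep 30
xset dpms force on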

Step #5: Repeat the bisect

Go into the linux source tree and run git bisect good or git bisect bad depending on whether the test succeeded. Return to step #3.

Step #6: Revert it!

For my case, I eventually found a single commit that introduced the problem, and I was able to revert it from my local kernel. This involves leaving a kernel patch in my NixOS config like this:

  boot.kernelPatches = [
    { patch = ./revert-bb2ff6c27b.patch; name = "revert-bb2ff6c27b"; }
  ];

This probably isn't the greatest long-term solution, but it gets my desktop stable and I'm happy with that for now.
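For reference, the revert patch itself can be generated straight from the bisected tree, roughly like this (with bb2ff6c27b being the offending commit the bisect landed on):

# Diff the bad commit against its parent to get a reverse (revert) patch.
git diff bb2ff6c27b bb2ff6c27b^ > revert-bb2ff6c27b.patch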

Profit!
