diz

joined 2 years ago
[–] diz@awful.systems 4 points 2 months ago

I wonder if the weird tags are even strictly necessary, or if a sufficiently strongly worded and repetitive message would suffice.

[–] diz@awful.systems 4 points 2 months ago* (last edited 2 months ago) (2 children)

Embryo selection may just be the eugenicist's equivalent of greenwashing.

Eugenicists doing IVF is kind of funny, since it is a procedure that circumvents natural selection quite a bit, especially for the guys. It's what, something like a billion to one for the sperm?

If they're doing IVF while being into eugenics, they need someone to tell them that they aren't "worsening the species", and embryo selection provides just that.

edit: The worst part would be if people who don't need IVF start doing IVF with embryo selection, expecting some sort of benefit for the offspring. With the American tendency to sell people unnecessary treatments and procedures, I can totally see that happening.

[–] diz@awful.systems 6 points 2 months ago* (last edited 2 months ago)

I think I have a real example: non-hierarchical (or, at least, less hierarchical) arrangements. Anarchy is equated with chaos.

We ascribe a hierarchy to anything in nature: ants and other hymenoptera, and termites, have supposed "queens", parent wolves are "alphas", and so on. Fictional ant-like aliens have brain bugs, or cerebrates, or the like. Even the fucking zombies infected with a variant of the rabies virus get alphas somehow.

Every effort has gone into twisting every view of reality and every fiction to align with the ideology.

[–] diz@awful.systems 9 points 2 months ago* (last edited 2 months ago)

I think it's a mixture of it being cosplay and these folks being extreme believers in capitalism, in its inevitability and the impossibility of any alternative. They are all successful grifters, and they didn't get there through scheming and clever deception; they got there through sincere beliefs that aligned with the party line.

They don't believe that anything can actually be done about this progression towards doom, just as much as they don't properly believe in the doom.

[–] diz@awful.systems 1 points 2 months ago

So it got them so upset presumably because they thought it mocked the basilisk incident, I guess with Roko as Laurentius and Yudkowsky as the other guy?

[–] diz@awful.systems 4 points 2 months ago

Isn’t it part of the lawsuit that one of the developers literally said that downloading torrents on a corporate machine feels wrong?

That they routinely use the BitTorrent protocol for data only makes it more willful, since they know how it works, while your average Joe may not understand that he is distributing anything.

[–] diz@awful.systems 3 points 2 months ago* (last edited 2 months ago)

Film photography is my hobby, and I think there isn't anything that would prevent you from exposing a displayed image onto a piece of film, except for the cost.

Glass plates it is, then. Good luck matching the resolution.

In all seriousness though, I think your normal setup would be detectable even on normal 35mm film due to 1: insufficient resolution (even at 4K, probably even at 8K), and 2: insufficient dynamic range. There would probably also be some effects of spectral response mismatch: reds that are cut off by the film's spectral response would be converted into film-visible reds by a display.

Detection of forgery may require use of a microscope and maybe some statistical techniques. Even if the pixels are smaller than film grains, pixels are on a regular grid and film grains are not.

Edit: trained eyeballing may also work fine if you are familiar with the look of that specific film.
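Purely as an illustration of that "regular grid vs. irregular grain" point (my own sketch, not something from the comment): a re-photographed display should leave periodic pixel structure that shows up as sharp peaks in the 2-D Fourier spectrum of a high-resolution scan, while film grain alone is aperiodic. The file name and the peak threshold below are arbitrary assumptions.

```python
# Sketch: look for periodic (pixel-grid) components in a film scan.
# Assumes a grayscale scan saved as "scan.png" (placeholder name).
import numpy as np
from PIL import Image

scan = np.asarray(Image.open("scan.png").convert("L"), dtype=float)

# 2-D FFT of the zero-mean image; shift so low frequencies sit in the centre.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(scan - scan.mean())))

# Grain gives a smooth, roughly isotropic spectrum; a pixel grid adds narrow
# spikes at the grid frequency and its harmonics. The 50x-median threshold
# is a made-up cutoff for the sake of the example.
peaks = (spectrum > 50 * np.median(spectrum)).sum()
print(f"suspiciously periodic components: {peaks}")
```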

[–] diz@awful.systems 5 points 2 months ago* (last edited 2 months ago)

Hmm, maybe that was premature - chatgpt has history on by default now, so maybe that's where it got the idea that it was a classic puzzle?

With history off, it still sounds like it has the problem in the training dataset, but it is much more bizarre:

https://markdownpastebin.com/?id=68b58bd1c4154789a493df964b3618f1

Could also be randomness.

Select snippet:

Example 1: N = 2 boats

Both ferrymen row their two boats across (time = D/v = 1/3 h). One ferryman (say A) swims back alone to the west bank (time = D/u = 1 h). That same ferryman (A) now rows the second boat back across (time = 1/3 h). Meanwhile, the other ferryman (B) has just been waiting on the east bank—but now both are on the east side, and both boats are there.

Total time

$$ T_2 \;=\; \frac{1}{3} \;+\; 1 \;+\; \frac{1}{3} \;=\; \frac{5}{3}\ \mathrm{hours} \approx 1\,\mathrm{h}\,40\,\mathrm{min}. $$

I have to say, with history off it sounds like an even more ambitious moron. I think their history thing may be sort of freezing bot behavior in time, because the bot sees a lot of its own past outputs, and in the past it was a lot less into shitting LaTeX all over the place when doing a puzzle.

[–] diz@awful.systems 10 points 2 months ago (1 children)

Now we need to make a logic puzzle involving two people and one cup. Perhaps they are trying to share a drink equitably. Each time, they drink one third of the cup's remaining volume.
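For what it's worth, a quick back-of-the-envelope check (my own arithmetic, not part of the puzzle as stated): if they strictly alternate, the cup never empties and the split isn't equitable at all. Each drink leaves 2/3 of what was there, so the first drinker's total is a geometric series:

$$ A \;=\; \frac{1}{3}\sum_{k=0}^{\infty}\left(\frac{4}{9}\right)^{k} \;=\; \frac{1/3}{1-4/9} \;=\; \frac{3}{5}, \qquad B \;=\; 1 - A \;=\; \frac{2}{5}. $$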

[–] diz@awful.systems 15 points 2 months ago* (last edited 2 months ago) (3 children)

Yeah that's the version of the problem that chatgpt itself produced, with no towing etc.

I just find it funny that they would train on some sneer problem like this, to the point of making their chatbot look even more stupid. A "300 billion dollar" business, reacting to being made fun of by a very small number of people.

[–] diz@awful.systems 9 points 2 months ago* (last edited 2 months ago)

Oh wow, it is precisely the problem I "predicted" before: there are surprisingly few production-grade implementations to plagiarize from.

Even for seemingly simple stuff. You might think parsing floating point numbers from strings would have a gazillion examples. But it is quite tricky to do correctly (a correct implementation allows you to convert a floating point number to a string with enough digits, and back, and always obtain precisely the same number you started with). So even for such an omnipresent example, which has probably been implemented well over 10,000 times by various students, if you start pestering your bot with requests to make it better, and have the bot write the tests and pass them, you could end up plagiarizing something identifiable.
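To make that round-trip requirement concrete, here is a minimal sketch (mine, not from the comment) of the property a correct parser/formatter pair has to satisfy. It leans on Python's built-in float and repr, which already produce shortest round-tripping output; a from-scratch parser would have to pass the same check.

```python
# Round-trip property: parse(format(x)) must return exactly the same bits.
import random
import struct

for _ in range(10_000):
    # Draw random 64-bit patterns, reinterpret as doubles, skip NaN/inf.
    bits = random.getrandbits(64)
    (x,) = struct.unpack("<d", struct.pack("<Q", bits))
    if x != x or x in (float("inf"), float("-inf")):
        continue
    # repr() emits enough digits; float() must recover the identical value.
    assert float(repr(x)) == x, f"round trip failed for {x!r}"
print("all sampled floats round-tripped exactly")
```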

edit: and even suppose there were 2, or 3, or 5 exFAT implementations. They would be too different to "blur" together. The deniable plagiarism that they are trying to sell - "it learns the answer in general from many implementations, then writes original code" - is bullshit.

[–] diz@awful.systems 1 points 3 months ago

I'm kind of dubious that it's effective in any term whatsoever, unless the term is "nothing works but we got a lot of it".
