this post was submitted on 05 Jul 2024

cross-posted from: https://midwest.social/post/14150726

But just as Glaze's userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks disabling Glaze's protections—including attack methods exposed in June by online security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze's protections could be "easily bypassed, leaving artists vulnerable to style mimicry."

top 5 comments
[–] Riffraffintheroom@hexbear.net 18 points 10 months ago (1 children)

Google DeepMind research scientist Nicholas Carlini, claimed that Glaze's protections could be "easily bypassed, leaving artists vulnerable to style mimicry."

Remember when tech bros tried to appear cool and benevolent and different from the mean old business tycoons of the past? They never were, but it’s pretty wild how quickly they’ve decided to become just nakedly evil.

[–] Assian_Candor@hexbear.net 6 points 10 months ago

Capitalists gonna capitalist

[–] KobaCumTribute@hexbear.net 16 points 10 months ago* (last edited 10 months ago)

The big issue with all these data-poisoning attempts is that they work by adding noise: visible, watermark-like perturbations meant to associate training keywords with destructive noise. But the models they target are effectively extremely aggressive de-noising algorithms. In practice, the result has been either to improve models trained on datasets containing some poisoned images (because, for whatever reason, feeding more noise into the inscrutable anti-noise black box makes it work better), or to be completely wiped out by a single low-strength de-noise pass that cleans the poisoned images.

Like literally within hours of the poisoning models being made public, preliminary hobbyist testing found that they didn't really do what they claimed: they leave highly visible, distracting watermarks all over the image, they don't disrupt training as much as claimed (possibly not at all), and they can be trivially countered as well.
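The "single low de-noise pass" counter the comment describes can be illustrated with a toy sketch. This is not the actual bypass from the Zurich paper; it is a hypothetical pure-Python example assuming the adversarial perturbation is high-frequency noise, which even a mild 3x3 box blur suppresses:

```python
# Toy sketch (hypothetical data): a weak de-noise pass shrinking a
# high-frequency "poison" spike, as described in the comment above.

def box_blur(img):
    """One 3x3 box-blur pass over a grayscale grid; edges are clamped."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n  # average of the in-bounds neighborhood
    return out

# A flat 5x5 image with a single-pixel adversarial spike:
poisoned = [[100.0] * 5 for _ in range(5)]
poisoned[2][2] += 90.0  # the high-frequency perturbation

cleaned = box_blur(poisoned)
# One mild pass cuts the spike's deviation from 90 down to 10:
print(cleaned[2][2] - 100.0)  # → 10.0
```

Real bypasses reported against these tools are more sophisticated (e.g., upscaling or regenerating the image), but the underlying point is the same: perturbations that live in high frequencies are fragile under any smoothing transform.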

[–] DragonBallZinn@hexbear.net 13 points 10 months ago* (last edited 10 months ago)

Nothing like porky lecturing us on respecting property rights while shutting down 30-year-old ROMs, all while thinking the IP of poor people should be handed over to them free of charge.

Plus, don't they have anything better to automate? Are you that bereft of ideas that automating away a hobby is your TOP PRIORITY!?

[–] peppersky@hexbear.net 5 points 10 months ago

The trick is to only draw extremely vulgar and obscene images that'd have to be filtered out of any dataset a company could possibly sell