this post was submitted on 01 Nov 2023
680 points (97.5% liked)

Technology


Highlights: The White House issued draft rules today that would require federal agencies to evaluate and constantly monitor algorithms used in health care, law enforcement, and housing for potential discrimination or other harmful effects on human rights.

Once in effect, the rules could force changes in US government activity dependent on AI, such as the FBI’s use of face recognition technology, which has been criticized for not taking steps called for by Congress to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.

[–] Cris_Color@lemmy.world 109 points 2 years ago (2 children)

I mean, that broadly seems like a good thing. Execution is important, but on paper this seems like the kind of forward-thinking policy we need.

[–] pandacoder@lemmy.world 4 points 2 years ago

Quite frankly it didn't put enough restrictions on the various "national security" agencies, and so while it may help to stem the tide of irresponsible usage by many of the lesser-impact agencies, it doesn't do the same for the agencies that we know will be the worst offenders (and have been the worst offenders).

[–] KeraKali@lemmy.world 100 points 2 years ago* (last edited 2 years ago) (6 children)

“If the benefits do not meaningfully outweigh the risks, agencies should not use the AI,” the memo says. But the draft memo carves out an exemption for models that deal with national security and allows agencies to effectively issue themselves waivers if ending use of an AI model “would create an unacceptable impediment to critical agency operations.”

This tells me that nothing is going to change if people can just say their algorithms would make them too inefficient. Great sentiment, but this loophole will make it useless.

[–] paris@lemmy.blahaj.zone 42 points 2 years ago (3 children)

This seems to me like an exception that would realistically only apply to the CIA, NSA, and sometimes the FBI. I doubt the Department of Housing and Urban Development will get a pass. Overall seems like a good change in a good direction.

[–] mememuseum@lemmy.world 30 points 2 years ago (8 children)

The CIA and NSA are exactly who we don't want using it though.

[–] Blackmist@feddit.uk 12 points 2 years ago

They're exactly who will carry on using it, even if there weren't any exemptions.

[–] kautau@lemmy.world 12 points 2 years ago (1 children)

Agreed, but it’s at least a step forward, setting a precedent for AI in government use. I would love a perfect world where all bills passed are “all-or-nothing” legislation, but realistically this is a good start, and then citizens should demand tighter oversight of national security agencies as the next issue to tackle.

[–] pandacoder@lemmy.world 5 points 2 years ago

"next issue to tackle"

It's been the next issue to tackle since at least October 26th, 2001. They have no accountability. Adding these carve-outs is just making it harder to get accountability.

[–] postmateDumbass@lemmy.world 5 points 2 years ago

Like either of those agencies will let us know what they are doing in the first place.

At a certain level, there are no rules when they never have to tell what they are doing.

[–] Dark_Arc@social.packetloss.gg 5 points 2 years ago

Well that and customs/border patrol

[–] Fedizen@lemmy.world 4 points 2 years ago

Given the "success" of Israel's hi-tech border fence, it seems like bureaucracies think tech will work better than actually, you know, resolving/preventing geopolitical problems with diplomacy and intelligence.

I worry these kinds of tech solutions become a predictable crutch. Assuming there is some kind of real necessity to these spy programs (debatable), it seems like reliance on data tech can become a weakness as soon as those intending harm understand how it works.

[–] Redrum714@lemm.ee 1 points 2 years ago* (last edited 2 years ago)

Well, they already are, lol. It makes their jobs much easier, so I wouldn't be surprised if they have better libraries than the public services.

[–] angstylittlecatboy@reddthat.com 1 points 2 years ago

I'd rather they not either, but don't underestimate the harm bad management of other organizations can do and has done.

[–] intensely_human@lemm.ee -1 points 2 years ago

The fact that the CIA and NSA will have the AI is the most effective argument for why we should have the AI.

It’s the basic idea of the second amendment all over again:

  • It would be great if nobody had guns
  • But the government isn’t going to stop having guns
  • And only one side having guns is way worse than everyone having guns
  • So everyone gets to have guns

The exact same applies in this situation with AI:

  • It would be great if nobody had AI
  • But the government isn’t going to stop having AI
  • And only one side having AI is way worse than everyone having AI
  • So everyone gets to have AI
[–] postmateDumbass@lemmy.world 7 points 2 years ago (1 children)

Algorithms that gerrymander voting district boundaries might be an early battleground.

[–] tacosplease@lemmy.world 2 points 2 years ago

The early battleground of 2010 when they started using RedMap.

[–] postmateDumbass@lemmy.world 16 points 2 years ago

Folksy narrator: "Turns out, the U.S. government cannot operate without racism."

[–] masquenox@lemmy.world 8 points 2 years ago

Great sentiment but

It's not a "great sentiment" - it's essentially just more of the same liberal "let's pretend we care by doing something completely ineffective" posturing and little else.

[–] intensely_human@lemm.ee 7 points 2 years ago

Democrats are so fucking naive. They actually think that a system of permission slips is sufficient to protect us from the singularity.

OpenAI’s original mission, before they forgot it, was the only workable method: distribute the AI far and wide to establish a multipolar ecosystem.

[–] intensely_human@lemm.ee 20 points 2 years ago (3 children)

I swear to god there has to be an entire chapter in Gödel, Escher, Bach about how this is literally impossible.

[–] Sparlock@lemmy.world 4 points 2 years ago

Wow it's been years since I read GEB..
I should revisit it. Thanks!

[–] Asifall@lemmy.world 3 points 2 years ago

You’re never going to be able to formally prove anything as nebulous as “harm” full stop, so this isn’t a very convincing argument imo.

[–] postmateDumbass@lemmy.world 1 points 2 years ago

Achilles: /facepalm

[–] endlessmeddler@lemm.ee 9 points 2 years ago

Is it already too late for us? Does anyone truly believe that will be enough to protect us?

[–] thejodie@programming.dev 6 points 2 years ago

Sent to my state representative. Thanks!
