technocrit

joined 1 year ago
[–] technocrit@lemmy.dbzer0.com -1 points 1 day ago (2 children)

Pretty sure "AI" didn't exist in the 60s/70s either.

[–] technocrit@lemmy.dbzer0.com 3 points 1 day ago (1 children)

Historically "AI" still doesn't exist.

[–] technocrit@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago)

They're trying to compare "AI" to fire. If you don't see the point, I can't blame you.

[–] technocrit@lemmy.dbzer0.com 18 points 2 days ago* (last edited 2 days ago)

Cops most likely commit more sexual assault than they prosecute.

Less than 1% of rapes lead to felony convictions.

archive: https://archive.is/8UEja

 

cross-posted from: https://lemm.ee/post/64452424

This is the first known instance of an American police department relying on live facial recognition cameras at scale, and it marks a radical and dangerous escalation of the power to surveil people as we go about our daily lives.

According to The Washington Post, since 2023 the city has relied on face recognition-enabled surveillance cameras through the “Project NOLA” private camera network. These cameras scan every face that passes by and send real-time alerts directly to officers’ phones when they detect a purported match to someone on a secretive, privately maintained watchlist.
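The Post describes the pipeline only at a high level, but the basic shape of such a live-alert loop is easy to sketch. Everything below, the embeddings, the threshold, the watchlist format, is invented for illustration and is not Project NOLA's actual system, which is not public:

```python
import numpy as np

# Hypothetical sketch of a live face-match alert loop. All values and
# names here are placeholders, not details from the Post's reporting.

MATCH_THRESHOLD = 0.6  # similarity above which an "alert" fires

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def scan_frame(faces, watchlist):
    """Compare each detected face embedding against every watchlist entry."""
    alerts = []
    for face in faces:
        for name, ref in watchlist.items():
            if cosine(face, ref) >= MATCH_THRESHOLD:
                # In the reported system, a hit like this pinged officers'
                # phones in real time, with no analyst review first.
                alerts.append(name)
    return alerts

rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
faces = [rng.normal(size=128) for _ in range(3)]
print(scan_frame(faces, watchlist))  # random vectors rarely clear 0.6
```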

 

... I have one of those wearable devices that monitors my heart rate, sleep quality, activity level, and calories burned. Mine is called an Oura ring, and at the end of the day, it told me what I already knew: I had been “unusually stressed.” When this happens, the device asks you to log the source of your stress. I scrolled through the wide array of options—diarrhea, difficulty concentrating, erectile dysfunction, emergency contraceptives. I could not find “financial issues,” or anything remotely related to money, listed.

According to a poll from the American Psychiatric Association, financial issues are the No. 1 cause of anxiety for Americans: 58 percent say they are very or somewhat anxious about money. How, I wondered, was it possible that this had not occurred to a single engineer at Oura?

For all of the racial, gender, and sexual reckonings that America has undergone over the past decade, we have yet to confront the persistent blindness and stigma around class. When people struggle to understand the backlash against elite universities, or the Democrats’ loss of working-class voters, or the fact that more and more Americans are turning away from mainstream media, this is why...

Archive: https://archive.is/r42Ba

 

cross-posted from: https://lemm.ee/post/64450059

In 2012, Palantir quietly embedded itself into the daily operations of the New Orleans Police Department. There were no public announcements. No contracts made available to the city council. Instead, the surveillance company partnered with a local nonprofit to sidestep oversight, gaining access to years of arrest records, licenses, addresses, and phone numbers, all to build a shadowy predictive policing program.

Palantir’s software mapped webs of human relationships, assigned residents algorithmic “risk scores,” and helped police generate “target lists,” all without public knowledge. “We very much like to not be publicly known,” a Palantir engineer wrote in an internal email later obtained by The Verge.
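The reporting doesn't describe Palantir's actual model, but the general idea of network-based risk scoring, where a person's score is inflated by the scores of their contacts, fits in a toy sketch. Every name, number, and the propagation rule below are hypothetical, not Palantir's algorithm:

```python
# Toy guilt-by-association scoring: a person's "risk" is their base score
# plus a fraction of their contacts' scores, propagated a few rounds.

contacts = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
    "carol": ["alice", "dave"],
    "dave": ["carol"],
}
base_risk = {"alice": 0.1, "bob": 0.8, "carol": 0.1, "dave": 0.1}

def propagate(base, graph, weight=0.5, rounds=3):
    score = dict(base)
    for _ in range(rounds):
        score = {
            person: base[person]
            + weight * sum(score[c] for c in graph[person]) / len(graph[person])
            for person in graph
        }
    return score

# dave's score roughly doubles without him doing anything, purely because
# carol knows alice, who knows bob.
print(sorted(propagate(base_risk, contacts).items(), key=lambda kv: -kv[1]))
```

Even in a toy this small, the scores drift toward whoever happens to be a few hops from a high-risk node, which is the core objection to secret, network-derived target lists.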

After years spent quietly powering surveillance systems for police departments and federal agencies, the company has rebranded itself as a frontier AI firm, selling machine learning platforms designed for military dominance and geopolitical control.

"AI is not a toy. It is a weapon,” said CEO Alex Karp. “It will be used to kill people.”

[–] technocrit@lemmy.dbzer0.com 8 points 2 days ago* (last edited 2 days ago)

Academia grinds you down first, but "employment" is what kills.

 

Many people in California prisons and jails work for less than $1 an hour. Lawmakers are advancing two bills that could lead to wage increases for some of them.

 

France will build a new high-security prison in its overseas territory of French Guiana to house drug traffickers and radical Islamists, the country's justice minister announced during a visit to the territory.

Gérald Darmanin told Le Journal du Dimanche (JDD) newspaper that the prison would target organised crime "at all levels" of the drug supply chain.

The €400m (£337m) facility, which could open as early as 2028, will be built in an isolated location deep in the Amazon jungle in the northwestern region of Saint-Laurent-du-Maroni.

The plan was announced after a series of violent incidents linked to criminal gangs, which saw prisons and staff targeted across France in recent months.

The prison will hold up to 500 people, with a separate wing designed to house the most dangerous criminals.

In an interview with JDD, the minister said the new prison would be governed by an "extremely strict carceral regime" designed to "incapacitate the most dangerous drug traffickers".

Darmanin said the facility would be used to detain people "at the beginning of the drug trail", as well as serving as a "lasting means of removing the heads of the drug trafficking networks" in mainland France.

 

New Orleans police have reportedly spent years scanning live feeds of city streets and secretly using facial recognition to identify suspects in real time—in seeming defiance of a city ordinance designed to prevent false arrests and protect citizens' civil rights.

A Washington Post investigation uncovered the dodgy practice, which relied on a private network of more than 200 cameras to automatically ping cops' phones when a possible match for a suspect was detected. Court records and public data suggest that these cameras "played a role in dozens of arrests," the Post found, but most uses were never disclosed in police reports.

That seems like a problem, the Post reported, since a 2022 city council ordinance required much more oversight for the tech. Rather than instantly detaining supposed suspects the second they pop up on live feeds, cops were only supposed to use the tech to find "specific suspects in their investigations of violent crimes," the Post reported. And in those limited cases, the cops were supposed to send images to a "fusion center," where at least two examiners "trained in identifying faces" using AI software had to agree on alleged matches before cops approached suspects.

Instead, the Post found that "none" of the arrests "were included in the department’s mandatory reports to the city council." And at least four people arrested were charged with nonviolent crimes. Some cops apparently found the city council process too sluggish and chose to ignore it to get the most out of their access to the tech, the Post found.

Now, New Orleans police have paused the program amid backlash over what Nathan Freed Wessler, the deputy director of the American Civil Liberties Union (ACLU) Speech, Privacy, and Technology Project, suggested might be the sketchiest use of facial recognition yet in the US. He told the Post this is "the first known widespread effort by police in a major US city to use AI to identify people in live camera feeds for the purpose of making immediate arrests."

 

On May 12, California Governor Gavin Newsom, a Democrat, demanded that cities throughout the state adopt anti-camping ordinances that would effectively ban public homelessness by requiring unhoused individuals to relocate every 72 hours.

While presented as a humanitarian effort to reduce homelessness, the new policy victimizes California’s growing unhoused population—approximately 187,000 people—by tying funding in Proposition 1 to local laws banning sleeping or camping on public land.

In his announcement, Newsom pushed local governments to adopt the draconian ordinances “without delay.”

 

"The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

 

cross-posted from: https://programming.dev/post/30550928

Fake photographs have been around as long as photographs have been around. A widely circulated picture of Abraham Lincoln taken during the presidential campaign of 1860 was subtly altered by the photographer, Mathew Brady, to make the candidate appear more attractive. Brady enlarged Lincoln’s shirt collar, for instance, to hide his bony neck and bulging Adam’s apple.

In a photographic portrait made to memorialize the president after his assassination, the artist Thomas Hicks transposed Lincoln’s head onto a more muscular man’s body to make the fallen president look heroic. (The body Hicks chose, perversely enough, was that of the proslavery zealot John C. Calhoun.)

By the close of the nineteenth century, photographic negatives were routinely doctored in darkrooms, through such techniques as double exposure, splicing, and scraping and inking. Subtly altering a person’s features to obscure or exaggerate ethnic traits was particularly popular, for cosmetic and propagandistic purposes alike.

But the old fakes were time-consuming to create and required specialized expertise. The new AI-generated “deepfakes” are different. By automating their production, tools like Midjourney and OpenAI’s DALL-E make the images easy to generate—you need only enter a text prompt. They democratize counterfeiting. Even more worrisome than the efficiency of their production is the fact that the fakes conjured up by artificial intelligence lack any referents in the real world. There’s no trail behind them that leads back to a camera recording an image of something that actually exists. There’s no original that was doctored. The fakes come out of nowhere. They furnish no evidence.

Many fear that deepfakes, so convincing and so hard to trace, make it even more likely that people will be taken in by lies and propaganda on social media. A series of computer-generated videos featuring a strikingly realistic but entirely fabricated Tom Cruise fooled millions of unsuspecting viewers when it appeared on TikTok in 2021. The Cruise clips were funny. That wasn’t the case with the fake, sexually explicit images of celebrities that began flooding social media in 2024. In January, X was so overrun by pornographic, AI-generated pictures of Taylor Swift that it had to temporarily block users from searching the singer’s name.

 

In a statement published Thursday, the company acknowledged public and internal concerns about whether Microsoft Azure and AI products had been used “to target civilians or cause harm” in Gaza.

The statement follows months of pressure from Microsoft employees and rights groups demanding transparency over the company’s relationship with the Israeli military.

The company said it does not have visibility into how customers use Microsoft products on private servers or devices, and that cloud operations for IMOD are supported “through contracts with cloud providers other than Microsoft.”

 

YouTube on Wednesday announced a new tool that will allow advertisers to use Google’s Gemini AI model to target ads to viewers when they are most engaged with a video.

The artificial intelligence feature, called “Peak Points,” identifies times when videos receive elevated levels of viewer attention and packages ads to be placed after those moments.
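Google hasn't published how Peak Points works. The underlying idea, though, finding peaks in an engagement time series and slotting ads just after them, fits in a few lines. The attention data and the simple local-maximum rule below are invented for illustration:

```python
# Hypothetical sketch of "find engagement peaks, place ads after them".
# Per-minute attention values and the threshold are made up; this is not
# Google's actual Gemini-based method.

attention = [0.2, 0.3, 0.8, 0.5, 0.4, 0.9, 0.6, 0.3]  # per-minute engagement

def ad_slots(series, threshold=0.7):
    """Return minute indices just after local maxima above threshold."""
    slots = []
    for i in range(1, len(series) - 1):
        is_peak = series[i] > series[i - 1] and series[i] > series[i + 1]
        if is_peak and series[i] >= threshold:
            slots.append(i + 1)  # place the ad right after the peak
    return slots

print(ad_slots(attention))  # -> [3, 6]
```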

 

... Toner-Rodgers told a story of a thousand materials science researchers, at an unnamed company, who used a machine learning system to generate possible new materials. With the AI, they found 44% more new materials, patent filings went up 39%, and new product prototypes went up 17%. Incredible! Though he did say the scientists felt alienated from their work.

The paper exploded. Economists loved it! Toner-Rodgers submitted the paper to the Quarterly Journal of Economics.

More importantly, the paper told the AI promoters what they wanted to hear — the bosses don’t care about the disgruntled workers, but they do really want more output...

Robert Palgrave, a professor of inorganic chemistry and materials science at UCL, goes through a pile of things about the paper that struck him as not quite right, both in December 2024 and just recently:

The original paper was too good to be true in many ways. How could a 2nd-year PhD student get pervasive access to extremely sensitive data from what must have been a multi-billion-dollar company?

How could such a study have been set up years before this student started his PhD work, in just the right way to deliver results to him?

What company really has 1000 scientists all trying to discover new materials all day every day? It didn’t really make sense as a concept.

But there were technical points too. AI, using atomistic methods like DFT, cannot predict most of the types of materials that were supposedly studied. It can only really work for simple crystalline materials. Glasses, biomaterials? No chance...
