Technology

90 readers
201 users here now

Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles have to be recent, no older than 2 weeks (14 days).
  3. No videos.

To encourage more original sources and keep this space as commercial-free as possible, the following websites are Blacklisted:

Encouraged:

founded 1 week ago
MODERATORS
51

Backlog getting you down? Drowning in technical debt? Delegate issues to Copilot so you can focus on the creative, complex, and high-impact work that matters most. Copilot coding agent makes this possible.

Simply assign an issue (or multiple issues) to Copilot just as you would another developer. You can do this from github.com, GitHub Mobile, or the GitHub CLI. Copilot works in the background, using its own secure cloud-based development environment powered by GitHub Actions. Copilot explores the repository, makes changes, and even validates its work with your tests and linter before it pushes.
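For anyone curious what that assignment looks like outside the web UI, here is a minimal sketch using GitHub's REST "add assignees" endpoint from Python. The owner, repository, and issue number are placeholders, and treating the literal login "Copilot" as an assignable actor through this endpoint is an assumption; the article only names github.com, GitHub Mobile, and the GitHub CLI as supported paths.

    # Minimal sketch: delegating an issue to Copilot via GitHub's REST API.
    # The "add assignees" endpoint is a standard GitHub API call; whether the
    # literal login "Copilot" is accepted there is an assumption for illustration.
    import os
    import requests

    OWNER = "my-org"      # hypothetical repository owner
    REPO = "my-repo"      # hypothetical repository name
    ISSUE_NUMBER = 123    # hypothetical issue to delegate

    resp = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{ISSUE_NUMBER}/assignees",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"assignees": ["Copilot"]},
        timeout=30,
    )
    resp.raise_for_status()
    print("Assignees now:", [a["login"] for a in resp.json()["assignees"]])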

52
26
submitted 4 days ago* (last edited 4 days ago) by Pro@programming.dev to c/Technology@programming.dev

An advisor to al-Qaida. One of the founders of Hezbollah. The head of an Iraqi militia group known for attacks on U.S. troops. And a top official with the Houthi rebels who recently lashed out at the “criminal Trump.”

These are among the U.S.-sanctioned terrorists who appear to have paid, premium accounts on Elon Musk's X, a new Tech Transparency Project investigation has found, raising questions about the platform's dealings with individuals who have been deemed a threat to U.S. national security.

Regulations enforced by the Treasury Department’s Office of Foreign Assets Control (OFAC) prohibit U.S. companies from engaging in transactions with sanctioned individuals or entities unless they are licensed or otherwise authorized by the government. X’s policies explicitly state that its premium services are off limits to users subject to OFAC sanctions.
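As an aside, this is roughly what such screening implies in practice: checking account holders against the Treasury's published Specially Designated Nationals (SDN) list. The sketch below is illustrative only; it assumes the list has been downloaded locally as sdn.csv with the entity name in the second column, whereas real compliance systems use fuzzy matching, aliases, and human review rather than exact string comparison.

    # Illustrative OFAC-style name screening, not a real compliance system.
    # Assumes a locally downloaded "sdn.csv" with the entity name in column 2.
    import csv

    def load_sdn_names(path: str = "sdn.csv") -> set[str]:
        names = set()
        with open(path, newline="", encoding="latin-1") as f:
            for row in csv.reader(f):
                if len(row) > 1:
                    names.add(row[1].strip().upper())  # assumed name column
        return names

    def is_sanctioned(account_name: str, sdn_names: set[str]) -> bool:
        # Exact match only; real screening also handles aliases and spelling variants.
        return account_name.strip().upper() in sdn_names

    sdn = load_sdn_names()
    print(is_sanctioned("Example Person", sdn))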

But TTP found premium blue checkmark accounts for multiple terrorists and others under OFAC-enforced sanctions. Some of these accounts even had an “ID verified” badge, meaning that X confirmed their identity after they submitted a government-issued ID and a selfie to the company. Several made use of revenue-generating features offered by X, including a button for tips.

The findings add to questions, first raised by TTP in February 2024, about X’s adherence to sanctions designed to protect U.S. national interests, even as the company maintains it has a “robust and secure” approach to its monetization features. X, formerly known as Twitter, once handed out blue checkmarks to notable figures for free. But after taking over the company, Musk turned the blue checkmark into a paid product and required users to purchase a premium subscription to obtain one.

X’s ongoing dealings with U.S.-sanctioned terrorists on its platform are all the more striking given that Musk, who has been leading the Trump administration’s so-called Department of Government Efficiency (DOGE), chastised the Treasury Department in February for lacking “basic controls” to track payments and ensure they don’t end up going to terrorist organizations and other wrongful recipients. Speaking at a televised Oval Office appearance with Trump, Musk said such controls are “in place in any company.”

X did not provide a comment on the findings when contacted by TTP.

X says it uses three companies for ID verification. Two of the companies, Au10tix and Stripe, declined to comment. A third company, Persona, did not respond to a request for comment.

53

The Texas Legislature has already passed a bill requiring age verification to download apps and is seriously considering another to ban children from social media.

54

This is called a superinfection—a file or system that has been infected several times. It typically occurs on systems that do not have antivirus software. It also fits that Cameron received a warning about Floxif. Systems that have been neglected in terms of basic security often become hosts to multiple types of self-replicating malware.

The virus infection also explains why a total of 39 files in the downloads section of Procolored were infected. SnipVex likely replicated itself on a developer’s system or the build servers.

It made a bit of money for the threat actor along the way. A blockchain explorer shows that the threat actor’s BTC address has received a total of 9.30857859 BTC—equivalent to approximately $100,000 (about €90,000) today.

55

Imagine if doctors could precisely print miniature capsules capable of delivering cells needed for tissue repair exactly where they are needed inside a beating heart. A team of scientists led by Caltech has taken a significant step toward that ultimate goal, having developed a method for 3D printing polymers at specific locations deep within living animals. The technique relies on sound for localization and has already been used to print polymer capsules for selective drug delivery as well as glue-like polymers to seal internal wounds.

56
57

Website.

The latest addition to neal.fun is a road trip simulator using Google Street View and a custom overlay. Viewers vote every ten seconds to choose a direction. As expected with anything decided by an internet vote, it is total anarchy. The car drives in circles, heads down dead ends, and has at least once barreled down a bike path.
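A rough sketch of that voting mechanic: every ten seconds, tally the incoming votes and steer in the winning direction. How the real site collects votes, names its options, and breaks ties is not described, so those parts are stand-ins.

    # Toy sketch of the vote-every-ten-seconds loop described above.
    import time
    from collections import Counter

    def collect_votes() -> list[str]:
        # Placeholder: on the real site votes come from viewers; here we fake a round.
        return ["left", "straight", "straight", "right"]

    while True:
        tally = Counter(collect_votes())
        direction, count = tally.most_common(1)[0]  # winning direction this round
        print(f"Driving {direction} ({count} votes)")
        time.sleep(10)  # wait for the next round of voting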

Members of the very chill Discord server dedicated to the road trip, embedded on the site, are in a constant battle to unify the collective, possibly drunk, drivers. The "pathists" are trying to go straight to Canada, while the "detourists" are just looking for cool stuff. Right now, the insane car is taking a detour en route to Bar Harbor, Maine, to make a quick stop at Hadley Beach and possibly drive into the ocean.

Viewers also vote on the embedded FM radio station. The current station is WBOR, the radio station of Bowdoin College in Brunswick, Maine, which is likely enjoying its highest listener numbers ever. Don't forget to honk the horn and play with the little tree air freshener. Onward to Canada!

Source

This is republished here under Boing Boing terms.

58

It’s nearly impossible to use the internet without being asked about cookies. A typical pop-up will offer to either “accept all” or “reject all”. Sometimes, there may be a third option, or a link to further tweak your preferences.

These pop-ups and banners are distracting, and your first reaction is likely to get them out of the way as soon as possible – perhaps by hitting that “accept all” button.

But what are cookies, exactly? Why are we constantly asked about them, and what happens when we accept or reject them? As you will see, each choice comes with implications for your online privacy.
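For context on what is actually being accepted or rejected: a cookie is a small named value the server asks the browser to store (via a Set-Cookie header) and send back on later requests. The sketch below builds one with Python's standard http.cookies module; the cookie name, value, and lifetime are made-up examples.

    # What "accepting" a cookie boils down to: the server sends a Set-Cookie
    # header and the browser stores it and echoes it back on later requests.
    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["tracking_id"] = "abc123"                         # made-up name and value
    cookie["tracking_id"]["max-age"] = 60 * 60 * 24 * 365    # persists for a year
    cookie["tracking_id"]["samesite"] = "Lax"

    # Header the server would send, e.g.:
    # tracking_id=abc123; Max-Age=31536000; SameSite=Lax
    print(cookie["tracking_id"].OutputString())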

59

Meta has announced it will use EU personal data from Instagram and Facebook users to train its new AI systems from 27 May onwards. Instead of asking consumers for opt-in consent, Meta relies on an alleged 'legitimate interest' to just suck up all user data. The new EU Collective Redress Directive allows Qualified Entities such as noyb to issue EU-wide injunctions. As a first step, noyb has now sent a formal settlement proposal in the form of a so-called Cease and Desist letter to Meta. Other consumer groups are also taking action. If injunctions are filed and won, Meta may also be liable for damages to consumers, which could be brought in a separate EU class action. Damages could reach billions. In summary, Meta may face massive legal risks – just because it relies on an "opt-out" instead of an "opt-in" system for AI training.

60

Imagine wearing a T-shirt that measures your breathing or gloves that translate your hand movements into commands for your computer. Researchers at ETH Zurich, led by Daniel Ahmed, Professor of Acoustic Robotics for Life Sciences and Healthcare, have laid the foundations for just such smart textiles. Unlike many previous developments in this area, which usually rely on electronics, the ETH researchers’ approach uses acoustic waves passed through glass fibres. This makes the measurements more precise and the textiles lighter, more breathable and easier to wash. “They are also inexpensive because we use readily available materials, and the power consumption is very low,” says Ahmed.

61
62

Chen and a team of UW researchers have designed a headphone system that translates several speakers at once, while preserving the direction and qualities of people’s voices. The team built the system, called Spatial Speech Translation, with off-the-shelf noise-cancelling headphones fitted with microphones. The team’s algorithms separate out the different speakers in a space and follow them as they move, translate their speech and play it back with a 2-4 second delay.
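In outline, the described pipeline is: separate the speakers, track their directions, translate each stream, and render the translated voice back from the same direction after a short delay. The skeleton below only mirrors that structure; every function in it is a hypothetical stand-in, not the UW team's code.

    # Structural sketch of the pipeline described above; all functions are
    # hypothetical stand-ins, not the team's actual implementation.
    import time
    from dataclasses import dataclass

    @dataclass
    class SpeakerStream:
        direction_deg: float   # estimated direction of arrival, kept for playback
        audio_chunk: bytes     # raw audio attributed to this speaker

    def separate_speakers(mic_frame: bytes) -> list[SpeakerStream]:
        # Placeholder for source separation plus tracking of moving speakers.
        return []

    def translate(audio_chunk: bytes) -> bytes:
        # Placeholder for speech-to-speech translation that preserves voice quality.
        return audio_chunk

    def play_spatialized(audio_chunk: bytes, direction_deg: float) -> None:
        # Placeholder for rendering the translated voice from its original direction.
        pass

    DELAY_S = 3.0  # the article cites a 2-4 second end-to-end delay

    def process_frame(mic_frame: bytes) -> None:
        for stream in separate_speakers(mic_frame):
            translated = translate(stream.audio_chunk)
            time.sleep(DELAY_S)  # stand-in for buffering/processing latency
            play_spatialized(translated, stream.direction_deg)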

63

In an era defined by polarized views on everything from public health to politics, a new Tulane University study offers insight into why people may struggle to change their minds—especially when they turn to the internet for answers.

64

Fake photographs have been around as long as photographs have been around. A widely circulated picture of Abraham Lincoln taken during the presidential campaign of 1860 was subtly altered by the photographer, Mathew Brady, to make the candidate appear more attractive. Brady enlarged Lincoln’s shirt collar, for instance, to hide his bony neck and bulging Adam’s apple.

In a photographic portrait made to memorialize the president after his assassination, the artist Thomas Hicks transposed Lincoln’s head onto a more muscular man’s body to make the fallen president look heroic. (The body Hicks chose, perversely enough, was that of the proslavery zealot John C. Calhoun.)

By the close of the nineteenth century, photographic negatives were routinely doctored in darkrooms, through such techniques as double exposure, splicing, and scraping and inking. Subtly altering a person’s features to obscure or exaggerate ethnic traits was particularly popular, for cosmetic and propagandistic purposes alike.

But the old fakes were time-consuming to create and required specialized expertise. The new AI-generated “deepfakes” are different. By automating their production, tools like Midjourney and OpenAI’s DALL-E make the images easy to generate—you need only enter a text prompt. They democratize counterfeiting. Even more worrisome than the efficiency of their production is the fact that the fakes conjured up by artificial intelligence lack any referents in the real world. There’s no trail behind them that leads back to a camera recording an image of something that actually exists. There’s no original that was doctored. The fakes come out of nowhere. They furnish no evidence.

Many fear that deepfakes, so convincing and so hard to trace, make it even more likely that people will be taken in by lies and propaganda on social media. A series of computer-generated videos featuring a strikingly realistic but entirely fabricated Tom Cruise fooled millions of unsuspecting viewers when it appeared on TikTok in 2021. The Cruise clips were funny. That wasn’t the case with the fake, sexually explicit images of celebrities that began flooding social media in 2024. In January, X was so overrun by pornographic, AI-generated pictures of Taylor Swift that it had to temporarily block users from searching the singer’s name.

65
66

House Republicans moved to cut off artificial intelligence regulation by the states before it can take root, advancing legislation in Congress that, in California, would make it unlawful to enforce more than 20 laws passed by the Legislature and signed into law last year.

The moratorium, bundled into a sweeping budget reconciliation bill this week, also threatens 30 bills the California Legislature is currently considering to regulate artificial intelligence, including one that would require reporting when an insurance company uses AI to deny health care and another that would require the makers of AI to evaluate how the tech performs before it’s used to decide on jobs, health care, or housing.

The California Privacy Protection Agency sent a letter to Congress Monday that says the moratorium “could rob millions of Americans of rights they already enjoy” and threatens critical privacy protections approved by California voters in 2020, such as the right to opt out of business use of automated decisionmaking technology and transparency about how their personal information is used.

If passed, the law would stop legislative efforts in the works nationwide. Lawmakers from 45 states are considering or have considered nearly 600 draft bills to regulate artificial intelligence this year, according to the Transparency Coalition, a group that tracks AI policy efforts by state lawmakers and supports legislation to regulate the technology. California has passed more bills since 2016 to regulate AI than any other U.S. state, according to Stanford’s 2025 AI Index report.

67
2
submitted 4 days ago* (last edited 4 days ago) by Pro@programming.dev to c/Technology@programming.dev

To start, the team built an alphabet of characters using four different monomers, or molecular building blocks with different electrochemical properties. Each character was composed of different combinations of the four monomers, which yielded a total of 256 possible characters. To test the method, they used the molecular alphabet to synthesize a chain-like polymer representing an 11-character password (‘Dh&@dR%P0W¢’), which they subsequently decoded using a method based on the molecules’ electrochemical properties.

The team’s decoding method takes advantage of the fact that certain chain-like polymers can be broken down by removing one building block at a time from the end of the chain. Since the monomers were designed to have unique electrochemical properties, this step-by-step degradation results in electrical signals that can be used to decipher the sequential identity of the monomers within the polymer.

“The voltage gives you one piece of information—the identity of the monomer currently being degraded—and so we scan through different voltages and watch this movie of the molecule being broken down, which tells us which monomer is being degraded at which point in time,” says Pasupathy. “Once we pinpoint which monomers are where, we can piece that together to get the identities of the characters in our encoded alphabet.”
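The arithmetic behind the 256-character alphabet is worth spelling out: with 4 monomers per position and 4 positions per character there are 4^4 = 256 possible sequences, i.e. one byte written in base 4. Below is a minimal sketch of that encoding and of the monomer-by-monomer readout; the monomer labels and the exact character-to-sequence mapping are illustrative assumptions, not the paper's scheme.

    # Sketch of the alphabet arithmetic: 4 monomers per position, 4 positions
    # per character, so 4**4 = 256 characters (one byte in base 4). The labels
    # "A"-"D" and this exact mapping are illustrative, not the paper's scheme.
    MONOMERS = "ABCD"

    def encode_char(ch: str) -> str:
        value = ord(ch) % 256
        digits = []
        for _ in range(4):
            digits.append(MONOMERS[value % 4])
            value //= 4
        return "".join(reversed(digits))  # most significant monomer first

    def decode_sequence(seq: str) -> str:
        value = 0
        for monomer in seq:  # read monomer by monomer, as the degradation does
            value = value * 4 + MONOMERS.index(monomer)
        return chr(value)

    password = "Dh&@dR%P0W¢"
    encoded = [encode_char(c) for c in password]
    assert "".join(decode_sequence(s) for s in encoded) == password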

One downside of the method is that each molecular message can only be read once, since decoding the polymers involves degrading them. The decoding process also takes time—around 2.5 hours for the 11-character password—but the team is working on methods to speed up the process.

68
69
70

Meta’s reliance on fossil fuel to power data centers flies in the face of the company’s net-zero pledges and risks higher costs for families

71
72

Ever since ChatGPT was released to the public in November 2022, people have been using it to generate text, from emails to blog posts to bad poetry, much of which they post online. Since that release, the companies that build the large language models (LLMs) on which such chatbots are based—such as OpenAI’s GPT-3.5, the technology underlying ChatGPT—have also continued to put out newer versions of their models, training them with new text data, some of which they scraped off the Web. That means, inevitably, that some of the training data used to create LLMs did not come from humans, but from the LLMs themselves.

That has led computer scientists to worry about a phenomenon they call model collapse. Basically, model collapse happens when the training data no longer matches real-world data, leading the new LLM to produce gibberish, in a 21st-century version of the classic computer aphorism “garbage in, garbage out.”

LLMs work by learning the statistical distribution of so-called tokens—words or parts of words—within a language by examining billions of sentences garnered from sources including book databases, Wikipedia, and the Common Crawl dataset, a collection of material gathered from the Internet. An LLM, for instance, will figure out how often the word “president” is associated with the word “Obama” versus “Trump” versus “Hair Club for Men.” Then, when prompted by a request, it will produce words that it reasons have the highest probability of meeting that request and of following from previous words. The results bear a credible resemblance to human-written text.
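A toy version of that idea: count how often each token follows another in a small corpus, then continue a prompt by sampling in proportion to those counts. Real LLMs use neural networks over sub-word tokens rather than raw bigram counts, and the tiny corpus here is a made-up stand-in.

    # Toy illustration of learning a token distribution: count which word
    # follows which, then continue a prompt by sampling from those counts.
    import random
    from collections import Counter, defaultdict

    corpus = "the president said the president will speak to the press".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1   # count how often `nxt` follows `prev`

    def next_token(prev: str) -> str | None:
        counts = following[prev]
        if not counts:
            return None              # no known continuation
        tokens, weights = zip(*counts.items())
        return random.choices(tokens, weights=weights)[0]  # sample proportional to frequency

    word = "the"
    for _ in range(5):
        print(word, end=" ")
        word = next_token(word)
        if word is None:
            break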

Model collapse is basically a statistical problem, said Sanmi Koyejo, an assistant professor of computer science at Stanford University. When machine-generated text replaces human-generated text, the distribution of tokens no longer matches the natural distribution produced by humans. As a result, the training data for a new round of modeling does not match the real world, and the new model’s output gets worse. “The thing we’re worried about is that the distribution of your data that you end up with, if you’re trying to fit your model, ends up really far from the actual distribution that generated the data,” he said.

The problem arises because whatever text the LLM generates would be, at most, a subsample of the sentences on which it was trained. “Because you generate a finite sample, you have some probability of not sampling them,” said Yarin Gal, an associate professor of machine learning at Oxford University. “Once you don’t sample, then they disappear. They will never appear again. So every time you generate data, you basically start forgetting more and more of the tail events and therefore that leads to the concentration of the higher probability events.” Gal and his colleagues published a study in Nature in July that showed indiscriminate use of what they called ‘recursively generated data’ caused the models to fail.
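Gal's resampling argument is easy to reproduce in miniature: start from a skewed "true" token distribution, repeatedly draw a finite sample from the current model and refit only on that sample, and watch the tail vanish. The vocabulary size, tail probabilities, and sample size below are arbitrary choices for illustration.

    # Small simulation of the resampling effect described above: each new
    # "model" is fit only on a finite sample drawn from the previous one,
    # so rare (tail) tokens disappear and probability mass concentrates.
    import random
    from collections import Counter

    vocab = list(range(100))
    true_dist = {tok: (0.5 if tok == 0 else 0.5 / 99) for tok in vocab}  # one common token, long tail

    dist = dict(true_dist)
    SAMPLE_SIZE = 500

    for generation in range(10):
        tokens, probs = zip(*dist.items())
        sample = random.choices(tokens, weights=probs, k=SAMPLE_SIZE)
        counts = Counter(sample)
        dist = {tok: counts[tok] / SAMPLE_SIZE for tok in vocab}  # refit on synthetic data only
        surviving = sum(1 for tok in vocab if dist[tok] > 0)
        print(f"generation {generation + 1}: {surviving} of {len(vocab)} tokens still have mass")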

The problem is not limited to LLMs. Any generative model that is iteratively trained can suffer the same fate if it starts ingesting machine-produced data, Gal said. That includes image-generating diffusion models such as DALL-E. The issue can also affect variational autoencoders, which create new data samples by producing variations of their original data. It can apply to Gaussian mixture models, a form of unsupervised machine learning that sorts subpopulations of data into clusters; they are used to analyze customer preferences, predict stock prices, and analyze gene expression.

Collapse is not a danger for models that incorporate synthetic data but only do so once, such as neural networks used to identify cancer in medical images, where synthetic data was used to augment rare or expensive real data. “The main distinction is that model collapse happens when you have multiple steps, where each step depends on the output from the previous step,” Gal said.

The theory that replacing training data with synthetic data will quickly lead to the demise of LLMs is sound, Koyejo said. In practice, however, not all human data gets replaced immediately. Instead, when the generated text is scraped from the Internet, it gets mixed in with human text. “You create synthetic data, you add that to real data, so you now have more data, which is real data plus synthetic data,” he said. What is actually happening, he said, is not data replacement, but data accumulation. That slows the degradation of the dataset.

Simply accumulating data may stop model collapse but can cause other problems if done without thought, said Yunzhen Feng, a Ph.D. student at the Center for Data Science at New York University. As a rule, the performance of neural networks improves as their size increases. Naively mixing real and synthetic data together, however, can slow that improvement. “You can still obtain similar performance, but you need much more data. That means you’re using much more compute and much more money to achieve that,” he said.

One challenge is that there is no easy way to tell whether text found on the Internet is synthetic or human-generated. Though there have been attempts to automatically identify text from LLMs, none have been entirely successful. Research into this problem is ongoing, Gal said.

73

Devagiri admitted to working with others in 2020 and 2021 to cause DoorDash to pay for deliveries that never occurred. At the time, Devagiri was a delivery driver for DoorDash orders. Under the scheme, Devagiri used customer accounts to place high value orders and then, using an employee’s credentials to gain access to DoorDash software, manually reassigned DoorDash orders to driver accounts that he and others controlled. Devagiri then caused the fraudulent driver accounts to report that the orders had been delivered, when they had not, and manipulated DoorDash’s computer systems to prompt DoorDash to pay the fraudulent driver accounts for the non-existent deliveries. Devagiri would then use DoorDash software to change the orders from “delivered” status to “in process” status and manually reassign the orders to driver accounts he and others controlled, beginning the process again. This procedure usually took less than five minutes, and was repeated hundreds of times for many of the orders.

The scheme resulted in fraudulent payments exceeding $2.5 million.

74
75