nfultz

joined 2 years ago
[–] nfultz@awful.systems 3 points 1 day ago

rokos basi-list

[–] nfultz@awful.systems 9 points 1 day ago (2 children)

From a new white paper Financing the AI boom: from cash flows to debt, h/t The Syllabus Hidden Gem of the Week

The long-term viability of the AI investment surge depends on meeting the high expectations embedded in those investments, with a disconnect between debt pricing and equity valuations. Failure to meet expectations could result in sharp corrections in both equity and debt markets. As shown in Graph 3.C, the loan spreads charged on private credit loans to AI firms are close to those charged to non-AI firms. If loan spreads reflect the risk of the underlying investment, this pattern suggests that lenders judge AI-related loans to be as risky as the average loan to any private credit borrower. This stands in stark contrast to the high equity valuations of AI companies, which imply outsized future returns. This schism suggests that either lenders may be underestimating the risks of AI investments (just as their exposures are growing significantly) or equity markets may be overestimating the future cash flows AI could generate.

¿Por qué no los dos? But maybe the lenders are expecting a bailout... or just gullible...

That said, to put the macroeconomic consequences into perspective, the rise in AI-related investment is not particularly large by historical standards (Graph 4.A). For example, at around 1% of US GDP, it is similar in size to the US shale boom of the mid-2010s and half as large as the rise in IT investment during the dot-com boom of the 1990s. The commercial property and mining investment booms experienced in Japan and Australia during the 1980s and 2010s, respectively, were over five times as large relative to GDP.

Interesting point, if AI is basically a rounding error for GDP... But I also remember the layoffs in 2000-01 and 2014-15; they weren't evenly distributed and a lot of people got left behind, even if those downturns weren't as bad as '08.

[–] nfultz@awful.systems 8 points 3 days ago

https://www.linkedin.com/posts/coquinn_generativeai-gartner-ibm-activity-7415515266849124352-W2n5

I’ve finally cracked how Gartner’s “Features” axis works.

It’s not latency.

It’s not context windows.

It’s definitely not “can this thing form a coherent thought.”

It’s Enterprise Friction™.

By that metric, Gartner has ranked IBM—a company whose flagship product is currently “billable hours in a trench coat”—ahead of Anthropic, the people who actually build the models IBM is desperately trying to resell with a logo swap.

Ranking IBM over Anthropic in 2025 is like ranking a library card catalog over Google Search because the library has better governance, stronger controls, and more shelves you can lock.

Anthropic is building the frontier.

IBM is building a PowerPoint about the frontier that requires a three-year commit, seven steering committees, and a ceremonial blood sacrifice to Red Hat.

Gartner analysts: blink twice if the blue suits are in the room with you.

[–] nfultz@awful.systems 4 points 4 days ago

nice find there:

A progressive campaign, "The Great Slate", was successful in raising funds for candidates in part by asking for contributions from tech workers in return for not posting similar quotes by Raymond. Matasano Security employee and Great Slate fundraiser Thomas Ptacek said, "I've been torturing Twitter with lurid Eric S. Raymond quotes for years. Every time I do, 20 people beg me to stop." It is estimated that, as of March 2018, over $30,000 has been raised in this way.[32]

Oh I saw that name before - https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/

[–] nfultz@awful.systems 4 points 5 days ago* (last edited 5 days ago)

Since someone linked to Bergstrom above, I wanted to mention his Marschak Colloquium talk from last year - https://www.youtube.com/watch?v=nxn40xiK9g0 - basically the idea is we are all "information foragers", but the "information environment" has shifted radically around us all in a really short amount of time. In "information abundance" the right strategy is to visit many different sites instead of just a few, if the model / analogy works about as well for people as it does for anteaters. If the vibes are off, move on to the next tab; it will broaden your worldview too.

[–] nfultz@awful.systems 11 points 5 days ago

PSA - https://consumer.drop.privacy.ca.gov/ - CA residents can now request data deletion to many adtech data brokers.

[–] nfultz@awful.systems 5 points 1 week ago

When I took my mom last week, they were blasting We Built This City.

[–] nfultz@awful.systems 6 points 1 week ago

https://securityaffairs.com/186460/ai/french-authorities-investigate-ai-undressing-deepfakes-on-x.html

French lawmakers Arthur Delaporte and Eric Bothorel alerted prosecutors on January 2 after thousands of non-consensual sexually explicit deepfakes were generated by Grok and shared on X. The Paris prosecutor’s office said the reports were added to an existing investigation into X, noting the offense carries penalties of up to two years in prison and a €60,000 fine.

two years of prison for whom exactly?

[–] nfultz@awful.systems 9 points 1 week ago

From the new Yann LeCun interview https://www.ft.com/content/e3c4c2f6-4ea7-4adf-b945-e58495f836c2

Meta made headlines for trying to poach elite researchers from competitors with offers of $100mn sign-on bonuses. “The future will say whether that was a good idea or not,” LeCun says, deadpan.

LeCun calls Wang, who was hired to lead the organisation, “young” and “inexperienced”.

“He learns fast, he knows what he doesn’t know . . . There’s no experience with research or how you practise research, how you do it. Or what would be attractive or repulsive to a researcher.”

Wang also became LeCun’s manager. I ask LeCun how he felt about this shift in hierarchy. He initially brushes it off, saying he’s used to working with young people. “The average age of a Facebook engineer at the time was 27. I was twice the age of the average engineer.”

But those 27-year-olds weren’t telling him what to do, I point out.

“Alex [Wang] isn’t telling me what to do either,” he says. “You don’t tell a researcher what to do. You certainly don’t tell a researcher like me what to do.”

OR, maybe nobody /has/ to tell a researcher what to do, especially one like him, if they've already internalized the ideology of their masters.

[–] nfultz@awful.systems 14 points 1 week ago (4 children)

Anti-A.I.-relationship sub r/cogsuckers may be permanently locked down by its mods after users criticize mod-led change of the subreddit to a somewhat pro-A.I. sub (self.SubredditDrama)

The mods were heavily downvoted and critiqued for pulling the rug out from under the community, as well as for simultaneously modding pro-A.I.-relationship subs. One mod admitted:

"(I do mod on r/aipartners, which is not a pro-sub. Anyone who posts there should expect debate, pushback, or criticism on what you post, as that is allowed, but it doesn’t allow personal attacks or blanket comments, which applies to both pro and anti AI members. Calling people delusional wouldn’t be allowed in the same way saying that ‘all men are X’ or whatever wouldn’t. It’s focused more on a sociological issues, and we try to keep it from devolving into attacks.)"

A user, heavily upvoted, replied:

You’re a fucking mod on ai partners? Are you fucking kidding me?

It goes on and on like this: As of now, the post has amassed 343 comments. Mostly it's angry subscribers of the sub, while a few users from pro-A.I. subreddits keep praising the mods. Most of the users agree that brigading has to stop, but don't understand why that means a sub called COGSUCKERS should suddenly be neutral toward, or accepting of, LLM relationships. Bear in mind that the subreddit r/aipartners, which one of the mods also mods, does not allow calling such relationships "delusional". The most upvoted comment in this shitstorm:

"idk, some pro schmuck decided we were hating too hard 💀 i miss the days shitposting about the egg" https://www.reddit.com/r/cogsuckers/comments/1pxgyod/comment/nwb159k/

[–] nfultz@awful.systems 11 points 1 week ago (1 children)

Internet Comment Etiquette with Erik just got off YT probation / timeout, from when YouTube's moderation AI flagged a decade-old video for having Russian parkour.

He celebrated by posting the below under a pipebomb video.

Hey, this is my son. Stop making fun of his school project. At least he worked hard on it. unlike all you little fucks using AI to write essays about books you don't know how to read. So you can go use AI to get ahead in the workforce until your AI manager fires you for sexually harassing the AI secretary. And then your AI health insurance gets cut off so you die sick and alone in the arms of your AI fuck butler who then immediately cremates you and compresses your ashes into bricks to build more AI data centers. The only way anyone will ever know you existed will be the dozens of AI Studio Ghibli photos you've made of yourself in a vain attempt to be included. But all you've accomplished is making the price of my RAM go up for a year. You know, just because something is inevitable doesn't mean it can't be molded by insults and mockery. And if you depend on AI and its current state for things like moderation, well then fuck you. Also, hey, nice pipe bomb, bro.

 

Another response to Ptacek.

 

I found this seminar for spring quarter, does anyone have some suggested / related readings? Especially deep cuts or articles from the first AI winter.
