gerikson

joined 2 years ago
[–] gerikson@awful.systems 14 points 7 hours ago (1 children)

Except the people selling expensive PCs.

[–] gerikson@awful.systems 6 points 15 hours ago* (last edited 14 hours ago) (1 children)

Good news everyone, there are two bonkers pieces about the stars and the galaxy on LW right now!

Here's a dude very worried that comets impacting the sun could cause it to flare and scorch the earth. Nothing but circumstantial evidence, and GenAI-researched to boot. It appeared on the EA forum as part of their "half-baked ideas" amnesty.

https://www.lesswrong.com/posts/9gAksZ25wbvfS8FAT/a-new-global-risk-large-comet-s-impact-on-sun-could-cause

The only thing I'd note about this is that even if the comet strikes along the plane of the ecliptic (not an unreasonable assumption), the planet would still have to be in exactly the right place for this assumed plume of energy to do any damage. And if it hits the Sahara or the Pacific, NBD presumably.

(Edit: turns out the above is just the abstract; the full piece is here:

https://docs.google.com/document/d/1OHgc7Q4git6OfDNTE_TDf9fFNgrEEnCUfnPMIwbK3vg/edit?usp=sharing)
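For a sense of scale on that "right place" point, here's a back-of-envelope sketch (my numbers only; the 10-degree plume width is a pure assumption, nothing from the post):

```python
import math

AU = 1.496e11       # metres, Earth-Sun distance
R_EARTH = 6.371e6   # metres, Earth radius

# Fraction of its orbital circumference Earth covers at any instant
earth_slice = (2 * R_EARTH) / (2 * math.pi * AU)
print(f"Earth covers ~{earth_slice:.1e} of its orbit")   # ~1.4e-5

# Hypothetical plume fanning out over 10 degrees of heliographic longitude
plume_fraction = 10 / 360
print(f"Chance Earth happens to sit inside it: ~{plume_fraction:.0%}")
```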

Then there's this person looking really far ahead into how to get energy from the universe

https://www.lesswrong.com/posts/YC4L5jxHnKmCDSF9W/some-astral-energy-extraction-methods

Tying galaxies together: Anchor big rope to galaxies as they get pulled apart by dark matter. Build up elastic potential energy which can be harvested. Issue: inefficient. [...] Not clear (to me) how you anchor rope to the galaxies.

Neutrino capture: Lots of neutrinos running around, especially if you use Hawking radiation to capture mass energy of black holes. So you might want to make use of them. But neutrinos are very weakly interacting, so you need dense matter to absorb their energy/convert them to something else. Incredibly dense. To stop one neutrino with lead you need 1 lightyear of matter, with a white dwarf you need an astronomical unit, and for a neutron star (10^17 kg/m^3 density, 10 km radius) you need 340 meters of matter. So neutrino capture is feasible,

(my emphasis)
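Quick sanity check on those stopping lengths: they should scale roughly as 1/density, so density times length should come out similar for each material. A rough sketch using the quoted figures (the white-dwarf density is my assumption, since the post doesn't give one):

```python
# Consistency check: column density (density * stopping length) should be
# roughly the same for each material. Order-of-magnitude only.
LIGHT_YEAR = 9.461e15   # m
AU = 1.496e11           # m

materials = {
    # name: (density in kg/m^3, quoted stopping length in m)
    "lead":         (1.134e4, 1 * LIGHT_YEAR),
    "white dwarf":  (1e9,     1 * AU),   # ~1e9 kg/m^3 is a typical WD density (my assumption)
    "neutron star": (1e17,    340),
}

for name, (rho, length) in materials.items():
    print(f"{name:>12}: column density ~ {rho * length:.1e} kg/m^2")
# The products agree to within a factor of a few, which is about as much
# as you can ask of a back-of-envelope like the quoted one.
```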

Black Hole Bombs: Another interesting way of extracting energy from black holes are superradiant instabilities, i.e. making the black hole into a bomb. You use light to extract angular momentum from the black hole, kinda like the Penrose process, and get energy out. With a bunch of mirrors, you can keep reflecting the light back in and repeat the process. This can produce huge amounts of energy quickly, on the order of gamma ray bursts for stellar mass black holes. Or if you want it to be quicker, you can get 1% of the black hole's mass energy out in 13 seconds. How to collect this is unclear.

(again, my emphasis)
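For scale, a quick arithmetic check on that "1% in 13 seconds" claim, assuming a 1-solar-mass hole (my assumption; the post just says "stellar mass"):

```python
# Rough scale of the quoted claim: 1% of a black hole's rest energy over 13 s.
M_SUN = 1.989e30     # kg
C = 2.998e8          # m/s
L_SUN = 3.828e26     # W, solar luminosity

energy = 0.01 * M_SUN * C**2          # ~1.8e45 J
power = energy / 13                   # ~1.4e44 W

print(f"Energy released: ~{energy:.1e} J")
print(f"Average power:   ~{power:.1e} W (~{power / L_SUN:.0e} suns)")
```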

Same author has a recent post titled "Don't Mock Yourself". Glad to see they've taken this advice to heart and outsourced the mocking.

[–] gerikson@awful.systems 8 points 1 day ago (2 children)

"Why is LessWrong awesome and it's because we're prepared to take racism seriously isn't it"

https://www.lesswrong.com/posts/HZrqTkTCgnFhEgxvQ/what-is-lesswrong-good-for

The focus on Covid is weird, seeing that AFAIK basically everyone who knew anything about pandemics was sounding the alarm at the same time that (some) rats and techbros were trying to corner the market in protective gear.

[–] gerikson@awful.systems 2 points 3 days ago

Shoulda used Grok

[–] gerikson@awful.systems 4 points 4 days ago* (last edited 4 days ago)

“I’d already been ChatGPT-ed into bed at least once. I didn’t want it to happen again.”

According to a 2024 YouGov poll, for instance, around half of Americans aged 18-34 reported having been, like Holly, in a situationship (a term it defines as “a romantic connection that exists in a gray area, neither strictly platonic nor officially a committed relationship”).

“Over the course of a week, I realised I was relying on it quite a lot,” she says. “And I was like, you know what, that’s fine – why not outsource my love life to ChatGPT?”

She describes being on the receiving end of the kinds of techniques that Jamil uses – being drilled with questions, “like you’re answering an HR questionnaire”, then off the back of those answers “having conversations where it feels as if the other person has a tap on my phone because everything they say is so perfectly suited to me”.

[–] gerikson@awful.systems 5 points 4 days ago

Credit where credit is due, I found this via HN: https://news.ycombinator.com/item?id=45552565

Also per https://futurism.com/future-society/ai-data-centers-finances, author is Harris “Kuppy” Kupperman, founder of the hedge fund in question.

[–] gerikson@awful.systems 10 points 5 days ago (2 children)

An investor runs the numbers on AI capex and is not impressed

(n.b. I have no idea who this guy is or his track record (or even if he's a dude), but I think the numbers check out and the parallels to railroads in the 19th century are interesting too)

Global Crossing Is Reborn…

Now, I think AI grows. I think the use-cases grow. I think the revenue grows. I think they eventually charge more for products that I didn’t even know could exist. However, $480 billion is a LOT of revenue for guys like me who don’t even pay a monthly fee today for the product. To put this into perspective, Netflix had $39 billion in revenue in 2024 on roughly 300 million subscribers, or less than 10% of the required revenue, yet having rather fully tapped out the TAM of users who will pay a subscription for a product like this. Microsoft Office 365 got to $95 billion in commercial and consumer spending in 2024, and then even Microsoft ran out of people to sell the product to. $480 billion is just an astronomical number.

Of course, corporations will adopt AI as they see productivity improvements. Governments have unlimited capital—they love overpaying for stuff. Maybe you can ultimately jam $480 billion of this stuff down their throats. The problem is that $480 billion in revenue isn’t for all of the world’s future AI needs, it’s the revenue simply needed to cover the 2025 capex spend. What if they spend twice as much in 2026?? What if you need almost $1 trillion in revenue to cover the 2026 vintage of spend?? At some point, you outrun even the government’s capacity to waste money (shocking!!)

An AI Addendum

As a result, my blog post seems to have elicited a liberating realization that they weren’t alone in questioning the math—they’ve just been too shy to share their findings with their peers in the industry. I’ve elicited a gnosis, if you will. As this unveiling cascaded, and they forwarded my writings to their friends, an industry simultaneously nodded along. Personal self-doubts disappeared, and high-placed individuals reached out to share their epiphanies. “None of this makes sense!!” “We’ll never earn a return on capital!!” “We’ve been wondering the same thing as you!!”

[...]

Remember, the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue. The mismatch is astonishing, and this ignores that in 2026, hundreds of billions of additional datacenters will get built, all needing additional revenue to justify their existence. Adding the two years together, and using the math from my prior post, you’d need approximately $1 trillion in revenue to hit break even, and many trillions more to earn an acceptable return on this spend. Remember again, that revenue is currently running at around $15 to $20 billion today.
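For the record, the shape of that arithmetic looks something like this (the 1.2x revenue-to-capex ratio is my placeholder, picked only because it reproduces the quoted ~$480 billion on ~$400 billion of 2025 spend; Kupperman's actual assumptions are in his earlier post):

```python
# Back-of-envelope in the spirit of the quoted piece, not his exact model.
CAPEX_2025_BN = 400          # quoted ~$30B/month of industry capex
REVENUE_RUN_RATE_BN = 15     # quoted ~$15-20B/year of AI revenue today
REQUIRED_MULTIPLE = 1.2      # placeholder: revenue needed per $1 of annual capex

required_revenue = CAPEX_2025_BN * REQUIRED_MULTIPLE
shortfall = required_revenue - REVENUE_RUN_RATE_BN

print(f"Revenue needed to cover 2025 capex: ~${required_revenue:.0f}B")
print(f"Current run rate:                   ~${REVENUE_RUN_RATE_BN}B")
print(f"Gap:                                ~${shortfall:.0f}B "
      f"({required_revenue / REVENUE_RUN_RATE_BN:.0f}x today's revenue)")
```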

[–] gerikson@awful.systems 1 points 5 days ago

He wanted Nazi Germany with Milton Friedman, he'll get Argentina without a USA to bail it out.

[–] gerikson@awful.systems 7 points 6 days ago

Not really, Schmitt is the real villain

I found it really interesting and scary

https://archive.ph/ZdXIS

[–] gerikson@awful.systems 6 points 1 week ago

I read the discussions and while I can sympathize with jwz's ridicule of barely covered raw git, I think he's asking for advice in the wrong place. There must be tons of solutions for running a small business with lots of part-timers, like a nightclub.

[–] gerikson@awful.systems 8 points 1 week ago

I think one important through-line is that both pencil-necked computer fondlers and muscle-bound roid ragers can and do hate women.

[–] gerikson@awful.systems 6 points 1 week ago

Surprisingly still up on lobsters and the comments aren't totally horrible.

 

current difficulties

  1. Day 21 - Keypad Conundrum: 01h01m23s
  2. Day 17 - Chronospatial Computer: 44m39s
  3. Day 15 - Warehouse Woes: 30m00s
  4. Day 12 - Garden Groups: 17m42s
  5. Day 20 - Race Condition: 15m58s
  6. Day 14 - Restroom Redoubt: 15m48s
  7. Day 09 - Disk Fragmenter: 14m05s
  8. Day 16 - Reindeer Maze: 13m47s
  9. Day 22 - Monkey Market: 12m15s
  10. Day 13 - Claw Contraption: 11m04s
  11. Day 06 - Guard Gallivant: 08m53s
  12. Day 08 - Resonant Collinearity: 07m12s
  13. Day 11 - Plutonian Pebbles: 06m24s
  14. Day 18 - RAM Run: 05m55s
  15. Day 04 - Ceres Search: 05m41s
  16. Day 23 - LAN Party: 05m07s
  17. Day 02 - Red Nosed Reports: 04m42s
  18. Day 10 - Hoof It: 04m14s
  19. Day 07 - Bridge Repair: 03m47s
  20. Day 05 - Print Queue: 03m43s
  21. Day 03 - Mull It Over: 03m22s
  22. Day 19 - Linen Layout: 03m16s
  23. Day 01 - Historian Hysteria: 02m31s
 

Problem difficulty so far (up to day 16)

  1. Day 15 - Warehouse Woes: 30m00s
  2. Day 12 - Garden Groups: 17m42s
  3. Day 14 - Restroom Redoubt: 15m48s
  4. Day 09 - Disk Fragmenter: 14m05s
  5. Day 16 - Reindeer Maze: 13m47s
  6. Day 13 - Claw Contraption: 11m04s
  7. Day 06 - Guard Gallivant: 08m53s
  8. Day 08 - Resonant Collinearity: 07m12s
  9. Day 11 - Plutonian Pebbles: 06m24s
  10. Day 04 - Ceres Search: 05m41s
  11. Day 02 - Red Nosed Reports: 04m42s
  12. Day 10 - Hoof It: 04m14s
  13. Day 07 - Bridge Repair: 03m47s
  14. Day 05 - Print Queue: 03m43s
  15. Day 03 - Mull It Over: 03m22s
  16. Day 01 - Historian Hysteria: 02m31s
 

The previous thread has fallen off the front page, so feel free to use this one for discussions of the current problems.

Rules: no unmarked spoilers; use the handy dandy spoiler preset to mark discussions as spoilers.

 

This season's showrunners are so lazy, just re-using the same old plots and antagonists.

 

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly-detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into its training data.

 

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

 

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.
