By "shield itself" that includes securing its power grid. It's not hard, it just takes a little foresight. Hence why humans are bad at it.
And also the "EMP as technology kryptonite" trope.
If an AI is clever enough to enslave humanity, it's clever enough to understand Faraday cages.
Because none of the downsides listed in this article really matter for most projects. The fact that GitHub is owned by Microsoft doesn't magically give them rights over the code that they wouldn't have if it were hosted somewhere else.
though they are organizationally under the AG/DoJ.
This is exactly the problem the judges are looking to solve.
By sheer coincidence, I just came across a thread on Reddit about a system that's been invented for training AI speech models on languages that don't have enough recorded examples to serve as training data: "Speech Instruction Training Without Speech for Low Resource Languages". ArXiv link to the paper for those who want to bypass Reddit, though the Reddit link also has links to the actual models and code used.
Relevant to this thread.
Fortunately (at least from a brutally utilitarian perspective) Putin is still dead-set on the notion that if he bombs civilians enough he'll "break their spirit" and they'll just give up.
Far greater terror campaigns along those lines have been tried in the past, and they didn't succeed. So Putin is "wasting" those bombs. Yes, it's tragic and awful when a kindergarten gets blown up, but it doesn't impact Ukraine's warfighting capability, and if anything it only strengthens Ukraine's resolve to never live under Russian domination again.
Meanwhile, Ukraine is surgically dismantling Russia's military industrial complex with their bombs. Far more efficient military use of resources, and has the added bonus of not being monstrously evil and therefore brings a lot more support from the world at large.
A significant drop, maybe once every year or two. The first time I cracked the screen I resolved to always use those protective cover thingies, and they seem to work; I haven't cracked a screen since.
I don't understand why so many people are saying they drop them so frequently. These are expensive pieces of hardware and it's not hard to hold them securely.
I only recently discovered that my installation of Whisper was completely unaware that I had a GPU and was running entirely on my CPU. So even if you can't get a good LLM running locally, you might still be able to get everything turned into text transcripts for eventual future processing. :)
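Whisper runs on PyTorch under the hood, so if you want to check, a quick sanity test like this will tell you whether it can actually see your GPU (a minimal sketch; "base" is just an example model size):

```python
import torch
import whisper

# If this prints False, Whisper will quietly run on the CPU.
print("CUDA available:", torch.cuda.is_available())

# Requesting the device explicitly raises an error instead of silently
# falling back, which is how I'd have caught this much sooner.
model = whisper.load_model("base", device="cuda")
```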
It's a bit technical; I haven't found any pre-packaged software that does what I'm doing yet.
First I installed https://github.com/openai/whisper , the speech-to-text model that OpenAI released back when they were less blinded by dollar signs. I wrote a Python script that uses it to go through all of the audio files in the directory tree where I'm storing this stuff and produce a transcript, which gets stored in a .json file alongside each audio file.
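The core of that script is only a few lines. A rough sketch of the shape of it (the "archive" directory and the .json layout are my own conventions, not anything standard):

```python
import json
from pathlib import Path

import whisper

AUDIO_EXTS = {".mp3", ".wav", ".m4a", ".flac", ".ogg"}
model = whisper.load_model("medium")  # pick whatever size fits your GPU

for audio in Path("archive").rglob("*"):
    if audio.suffix.lower() not in AUDIO_EXTS:
        continue
    out = audio.with_suffix(".json")
    if out.exists():  # already transcribed on a previous run
        continue
    result = model.transcribe(str(audio))
    out.write_text(json.dumps({"transcript": result["text"]}, indent=2))
```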
For the LLM, I installed https://github.com/LostRuins/koboldcpp/releases/ and used the https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF model, which is just barely small enough to run smoothly on my RTX 4090. I wrote another Python script that methodically goes through those .json files that Whisper produced, takes the raw text of the transcript, and feeds it to the LLM with a couple of prompts explaining what the transcript is and what I'd like the LLM to do with it (write a summary, or write a bullet-point list of subject tags). Those get saved in the .json file too.
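Once koboldcpp is running it exposes a local HTTP API (port 5001 by default), so that second script is mostly just prompt assembly plus a POST request. A sketch under those assumptions; the prompt wording and the "summary"/"tags" field names are my own choices, not anything the tools require:

```python
import json
from pathlib import Path

import requests

# koboldcpp's KoboldAI-compatible endpoint on its default port.
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

def ask_llm(prompt):
    resp = requests.post(KOBOLD_URL, json={
        "prompt": prompt,
        "max_length": 512,
        "temperature": 0.7,
    })
    resp.raise_for_status()
    return resp.json()["results"][0]["text"].strip()

for path in Path("archive").rglob("*.json"):
    data = json.loads(path.read_text())
    if "summary" in data:  # resume where the last run left off
        continue
    preamble = ("The following is a transcript of a recording from a "
                "family audio archive.\n\n" + data["transcript"] + "\n\n")
    data["summary"] = ask_llm(preamble + "Write a short summary of it.")
    data["tags"] = ask_llm(preamble +
                           "Write a bullet-point list of subject tags for it.")
    path.write_text(json.dumps(data, indent=2))
```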
Most recently I've been experimenting with creating an index of the transcripts, using those LLM results and the Whoosh library in Python, so that I can do local searches of the transcripts by topic. I'm building toward something where I can literally tell it "Tell me about Uncle Pete" and it'll first search for the relevant transcripts and then feed them into the LLM with a prompt to extract the relevant information.
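The indexing side turns out to be pleasantly small with Whoosh. A minimal sketch (the schema fields mirror the .json layout above, and "Uncle Pete" is just the example query):

```python
import json
from pathlib import Path

from whoosh import index
from whoosh.fields import ID, TEXT, Schema
from whoosh.qparser import QueryParser

schema = Schema(
    path=ID(stored=True, unique=True),
    summary=TEXT(stored=True),
    tags=TEXT(stored=True),
    transcript=TEXT,
)

Path("index").mkdir(exist_ok=True)  # Whoosh needs the directory to exist
ix = index.create_in("index", schema)
writer = ix.writer()
for p in Path("archive").rglob("*.json"):
    data = json.loads(p.read_text())
    writer.add_document(
        path=str(p),
        summary=data.get("summary", ""),
        tags=data.get("tags", ""),
        transcript=data.get("transcript", ""),
    )
writer.commit()

# The search half: find candidate transcripts, then hand them to the LLM.
with ix.searcher() as searcher:
    query = QueryParser("transcript", ix.schema).parse("Uncle Pete")
    for hit in searcher.search(query, limit=10):
        print(hit["path"], "-", hit["summary"][:80])
```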
If, unlike me, you don't find writing scripts for this sort of thing literally fun, then you may need to wait a bit for someone more capable and more focused than I am to create a user-friendly application that does all this. In the meantime, though, hoard that data. Storage is cheap.
If there's someone who speaks the language then it isn't lost yet.
I suppose it's interesting to muse about what it means to be the last person to speak a language before it's lost, but that's still just one person, so it's kind of an abstract, academic concern.
Bear in mind, though, that the technology for dealing with these things is rapidly advancing.
I have an enormous collection of digital archives, both my own and my now-deceased father's. For years I just kept them stashed away. But about a year ago I downloaded the Whisper speech-to-text model from OpenAI and transcribed everything with audio into text form. I now have a Qwen3 LLM churning through all of those transcripts, writing summaries of their contents and tagging them by subject matter. I expect pretty soon I'll have something with good enough image recognition that I can turn it loose on the piles of photographs to get those sorted by subject matter too. Eventually I'll be able to tell my computer "give me a brief biography of Uncle Pete" and get something pretty good out of all that.
Yeah, boo AI, hallucinations, and so forth. This project has given me first-hand experience with what these models are currently capable of, and it's quite a lot. I'd be able to do a ton more if I weren't restricting myself to what can run on my local GPU. Give it a few more years.
Trek fans being Trek fans, there's a page where every appearance of that book is matched to the page it was open to in the episode where it appeared.