just the reference to the 'don't worry about it kitten' meme i think
AdrianTheFrog
I don't know of any games that use machine learning for procedural generation, and I'd be slightly surprised if there are any. There is a bit of a distinction there, though, because that generation is required at runtime, so it's not something an artist could possibly be involved in.
compressed to actually 1 kb as a jxl

https://drive.google.com/file/d/1GN8UKog4_NOJG-MHoWT0HQvw-HLLNb83/view?usp=sharing
lemmy won't let me upload jxl files unfortunately so here's a png version and a link to the jxl
edit: got it to exactly 1024 bytes and added text

https://drive.google.com/file/d/1T2ZQn1jg2LusyjkTT-Wxu2AiIBVRheGP/view?usp=sharing
Copy the link of the image. You see the bit at the end of the url that says ?format=webp? Change that to ?format=png.
Lemmy often doesn't show images in original quality unless specifically requested to.
Edit: which is fair, because the lossy webp is 51 kb vs 513 for the png. Compressed for longer, it could be a 265 kb lossless jxl though, once Mozilla and Google finally add support (which is actually happening now!). It could also be a 322 kb lossless avif. None of these are max effort, just the effort that takes about 6 seconds in Image Toolbox on my phone.
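The ?format=webp trick above is easy to script if you're doing it a lot. A minimal sketch in Python using only the standard library; the function name and the example URL are made up for illustration:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def force_png(url):
    # Swap the ?format= query parameter so the server is asked
    # for the original-quality PNG instead of the lossy webp.
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["format"] = "png"
    return urlunsplit(parts._replace(query=urlencode(query)))

# Hypothetical Lemmy image URL:
print(force_png("https://example.com/pictrs/image/abc.webp?format=webp"))
```

Any other query parameters on the URL are preserved; only `format` is rewritten.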
Lossily, avif > webp > mozjpeg > jxl > jpegli for this image, although I think that's just because jxl and jpegli use the same perceptual tuning method, which must not favor dark areas. That might be good for most images but is certainly terrible for this one. It is much better in the bright areas, though. Mozjpeg vs jxl -> lossless webp (equivalent compressed size)
Note that all of the lossless formats would have been much smaller if the original screenshot in the mastodon post was lossless
One time my computer wouldn't boot, with the motherboard giving an error. It turned out that a bit of metal on the IO shield had gotten bent into the USB C port and was shorting some of the pins. I'm very glad there seem to be protections in place for at least some of these sorts of things lol
Motion blur in video games is usually a whole lot less accurate at what it's trying to approximate than averaging 4 frame-generation frames would be, although averaging 4 generated frames would also be a lot slower to compute than the approximations people normally make for motion blur.
Yes, motion blur in video games is just an approximation and usually has a lot of visible failure cases (disocclusion, blurred shadows, rotary blur sometimes). It obviously can't recreate the effect of a fast blinking light moving across the screen during a frame. It can be a pretty good approximation in the better implementations, but the only real way to 'do it properly' is by rendering frames multiple times per shown frame or rendering stochastically (not really possible with rasterization and obviously introduces noise). Perfect motion blur would be the average of an infinite number of frames over the period of time between the current frame and the last one. With path tracing you can do the rendering stochastically, and you need a denoiser anyways, so you can actually get very accurate motion blur. As the number of samples approaches infinity, the image approaches the correct one.
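The "average of many frames over the shutter interval" idea can be shown with a toy sketch. Everything here is a made-up stand-in: the renderer is just a single bright pixel moving across a 16-pixel 1D "screen", not a real rasterizer:

```python
def render(t):
    # Stand-in for a real renderer: one bright pixel moving across
    # a tiny 16-pixel "screen" as t goes from 0 to 1.
    frame = [0.0] * 16
    frame[int(t * 15)] = 1.0
    return frame

def motion_blurred_frame(t0, t1, samples=8):
    # Average many sub-frames over the shutter interval [t0, t1).
    # As samples approaches infinity, this converges to ideal motion blur.
    dt = (t1 - t0) / samples
    frames = [render(t0 + i * dt) for i in range(samples)]
    return [sum(px) / samples for px in zip(*frames)]

blurred = motion_blurred_frame(0.0, 0.5)
```

The moving pixel gets smeared across several screen positions, with the total brightness conserved; the stochastic path-tracing version effectively does the same averaging by jittering each ray's time sample inside the shutter interval.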
Some academics and nvidia researchers have recently coauthored a paper about optimizing path tracing to apply ReSTIR (technique for reusing information across multiple pixels and across time) to scenes with motion blur, and the results look very good (obviously still very noisy, I guess nvidia would want to train another ray reconstruction model for it). It's also better than normal ReSTIR or Area ReSTIR when there isn't motion blur apparently. It's relying on a lot of approximations too, so probably not quite unbiased path tracing quality if allowed to converge, but I don't really know.
https://research.nvidia.com/labs/rtr/publication/liu2025splatting/
But that probably won't be coming to games for a while, so we're stuck with either increasing framerates to produce blur naturally (through real or 'fake' frames), or approximating blur in a more fake way.
Frame generation is the only real odd-one-out here, the rest are using basically the same technique under the hood. I guess we don't really know exactly what ray reconstruction is doing since they've never released a paper or anything, but I think it combines DLSS upscaling with denoising basically, in the same pass.
DLSS Frame Generation actually uses the game's analytic motion vectors instead of just trying to estimate them (well, really it does both), so it is a whole lot more accurate. It's also using a fairly large AI model for the estimation, compared to TVs probably just doing basic optical flow or something.
Whether it's actually good depends on whether you care about latency and whether you can notice the visual artifacts in the game you're using it for.
you can download the arch wiki on kiwix (for android), it's like 30 megabytes
No, I don't think so. There is cleanup required on the rails of course, but I think it's used fairly regularly in some places when the tracks are wet.
A lot of trams carry sand that they can put on the rails to get more grip when they need to brake really fast. That might be what happened there.
Sure, I could definitely see situations where it would be useful, but I'm fairly confident that no current games are doing that. First of all, it is a whole lot easier said than done to get real-world data for that type of thing. Even if you manage to find a dataset with positions of various features across various biomes and train an AI model on that, in 99% of cases it will still take a whole lot more development time and probably be a whole lot less flexible than manually setting up rulesets, blending different noise maps, having artists scatter objects in an area, etc. It will probably also have problems generating unusual terrain types, which is a problem if the game is set in a fantasy world with terrain that is unlike what you would find in the real world. So then, you'd need artists to come up with a whole lot of data to train the model with, when they could just be making the terrain directly. I'm sure Google DeepMind or Meta AI or whatever or some team of university researchers could come up with a way to do AI terrain generation very well, but game studios are not typically connected to those sorts of people, even if they technically are under the same parent company like Microsoft or Meta.
You can get very far with conventional procedural generation techniques: hydraulic erosion, climate simulation, maybe even a model of an ecosystem. And all of those things together would probably still be much more approachable for a game studio than some sort of machine-learning landscape prediction.
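For a sense of what "blending different noise maps" means in practice, here's a toy sketch. The hash-based noise and the blend weights are made up; a real pipeline would use interpolated Perlin/simplex octaves, but the hand-tuned-weights structure is the point:

```python
import random

def hash_noise(x, y, seed):
    # Crude deterministic hash noise; a stand-in for real
    # Perlin/simplex noise. (Reseeds the global RNG each call,
    # which is fine for a sketch but not for production code.)
    random.seed((x * 73856093) ^ (y * 19349663) ^ seed)
    return random.random()

def terrain_height(x, y):
    # Blend a coarse and a fine noise layer with hand-tuned weights,
    # the kind of ruleset artists and designers can iterate on directly.
    coarse = hash_noise(x // 8, y // 8, seed=1)  # large landforms
    fine = hash_noise(x, y, seed=2)              # small surface detail
    return 0.8 * coarse + 0.2 * fine

heights = [[terrain_height(x, y) for x in range(32)] for y in range(32)]
```

Swapping the weights, adding an erosion pass, or masking one layer by a biome map are all small, predictable tweaks, which is exactly the flexibility a trained model makes harder to get.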