ZickZack

[–] ZickZack@kbin.social 13 points 2 years ago (3 children)

It really depends on what you want: I really like Obsidian, which is cross-platform and uses basically vanilla markdown, making it easy to switch should this project go down in flames (there are also plugins that add additional syntax, which may not be portable, but that's to be expected).

There's also Logseq, which has much more bespoke syntax (major extensions to markdown) but is open source, meaning there's no real danger of it suddenly vanishing from one day to the next.
Specifically, Logseq is much heavier than Obsidian, both in the app itself and in the features it adds to markdown, while Obsidian is much more "markdown++", with a significant part of the "++" coming from plugins.

In my experience, Logseq is really nice for short-term note-taking (e.g. lists, reminders, etc.) and Obsidian is much nicer for long-term notes.

Some people also like Notion, but I never got into it: it requires much more structure ahead of time and is very locked down (it also obviously isn't self-hosted). I can see Notion being really nice for people who want less general note-taking and more custom "forms" to fill out (e.g. travel checklists, production planning, etc.).

Personally, I would always go with Obsidian, just for the peace of mind that the markdown plays well with other markdown editors, which is important to me if I want a long-running knowledge base.
Unfortunately, I can't tell you anything with regard to collaboration, since I don't use that feature in any note-taking system.

[–] ZickZack@kbin.social 9 points 2 years ago (1 children)

Should have been done a long time ago. Even adding and removing gravel traps where they currently have the blue concrete should be within the realm of possibility for an F1 GP if they want both F1 and MotoGP (consider that places like Baku literally pave over their historic cobblestones and then un-pave them after the GP).

[–] ZickZack@kbin.social 4 points 2 years ago

"For example, if you had an 8-bit integer represented by a bunch of qubits in a superposition of states, it would have every possible value from 0-255 and could be computed with as though it were every possible value at once until it is observed, the probability wave collapses, and a finite value emerges. Is this not the case?"

Not really, or at least that's not a good way of thinking about it. Imagine it more like rigging coin tosses: you don't have every single configuration at the same time; rather, you have a joint probability distribution over all bits, which gets altered to produce certain useful distributions.
To get something out, you then make a measurement that returns the correct result with a certain probability (i.e. it's a probabilistic Turing machine rather than a nondeterministic one).
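
To make the rigged-coin-toss picture concrete, here is a minimal sketch in plain NumPy (a classical simulation, with the qubit count and gates chosen purely for illustration): the program manipulates one joint amplitude vector over all bit patterns, and a measurement is a single draw from the squared amplitudes rather than a readout of "all values at once":

```python
import numpy as np

n = 3                                    # number of qubits
state = np.zeros(2**n, dtype=complex)    # joint amplitudes over all 2^n bit patterns
state[0] = 1.0                           # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 1-qubit gate to qubit `target` by tensoring with identities."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

for q in range(n):                       # put every qubit into superposition
    state = apply_single_qubit_gate(state, H, q, n)

probs = np.abs(state) ** 2               # Born rule: probability of each outcome
sample = np.random.choice(2**n, p=probs) # measurement = one draw, not all values
print(f"measured {sample:0{n}b}, outcome distribution: {probs.round(3)}")
```

The whole "algorithm" is in how you rig `state` before measuring; all a readout ever gives you is one sample from `probs`.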

This can be very useful, since sampling from a distribution can sometimes be much nicer than actually solving a problem (e.g. you replace a solver with a simulator of the output).
In traditional computing this can also be done, but there you run into the fundamental problem of sampling from very complex probability distributions, which involves approximating usually intractable integrals.

However, there are also massive limitations on the kinds of things a quantum computer can model this way, since quantum theory is inherently linear (i.e. no climate modelling, regardless of how often people claim they want to do it).
There's also the question of how many problems exist where it is more efficient to build such a distribution and sample from it rather than to run a direct solver.
If you look at the classic quantum algorithms (e.g. https://en.wikipedia.org/wiki/Quantum_algorithm), you can see that there aren't really that many algorithms out there where it makes sense to use quantum computing (that list is of course not exhaustive, but it gives a pretty good overview). Pretty much all of them are asymptotically only slightly faster than, or the same speed as, classical algorithms, and most of them rely on the problem being a black-box one.
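
Grover's search is the classic black-box example from that list, and its speedup is exactly of this modest asymptotic kind: roughly (π/4)·√N oracle queries instead of the classical expected N/2. Here is a small statevector simulation in plain NumPy (list size and marked index are my own illustrative choices):

```python
import numpy as np

n = 4                          # qubits -> a "list" of N = 16 entries
N = 2**n
marked = 11                    # the index the black-box oracle recognises

state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
iterations = int(np.pi / 4 * np.sqrt(N))      # optimal ~ (pi/4) * sqrt(N) = 3

for _ in range(iterations):
    state[marked] *= -1                       # oracle: flip the marked amplitude
    state = 2 * state.mean() - state          # diffusion: inversion about the mean

print(f"quantum oracle queries: {iterations}, classical expected: ~{N // 2}")
print(f"P(measuring the marked index) = {abs(state[marked])**2:.3f}")  # ~0.96
```

A quadratic saving, not an exponential one, and it only applies when the oracle really is a black box you cannot peek inside.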

Remember that one of the largest useful problems ever solved on a quantum computer to date was factoring the number 21, with a specialised version of Shor's algorithm that only works for that number (since the full Shor's algorithm would need many orders of magnitude more qubits than exist on the entire planet).
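
For context on what the quantum hardware is actually needed for: Shor's algorithm only uses the quantum computer to find the period of a^x mod N; everything else is classical arithmetic. Here is a sketch for N = 21 where the period is found by brute force (the loop a real quantum computer would replace):

```python
from math import gcd

N, a = 21, 2                     # a must be coprime to N

r = 1
while pow(a, r, N) != 1:         # period: smallest r with a^r = 1 (mod N)
    r += 1                       # <- the only step Shor runs on quantum hardware

assert r % 2 == 0                # Shor needs an even period to proceed
p = gcd(pow(a, r // 2) - 1, N)   # 2^3 - 1 = 7 -> gcd(7, 21) = 7
q = gcd(pow(a, r // 2) + 1, N)   # 2^3 + 1 = 9 -> gcd(9, 21) = 3
print(f"period r = {r}, factors: {p} * {q} = {N}")
```

The brute-force loop is trivial for 21 but exponential in the bit length of N in general, which is exactly the part the quantum circuit is supposed to make cheap.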

There's also the problem of logical vs. physical qubits: in computer science we like to work with "perfect" qubits that are mathematically ideal, i.e. completely noise-free. Physical qubits, however, are really fragile and couple to pretty much anything and everything, which adds a lot of noise into the system. This problem also gets worse the larger you scale your system.

The latter is a fundamental problem: the whole point of quantum computers is that you can combine random states to "virtually" build a complex distribution before you sample from it. This can be much faster, since the virtual model can capture dependencies that are intractable to work with on a classical system, but that dependency monster also means that any noise in the system negatively affects everything else as you scale up to more qubits.
That's why people expect real quantum computers to need many orders of magnitude more physical qubits than the number of logical qubits you would theoretically need.
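
For a rough sense of scale, here is a back-of-envelope sketch using the surface-code heuristics commonly quoted in the error-correction literature (logical error rate ≈ 0.1·(p/p_th)^((d+1)/2), about 2d² physical qubits per logical qubit); the error rates and target below are my own illustrative assumptions, not measured values:

```python
p, p_th = 1e-3, 5e-3    # assumed physical error rate vs. code threshold
target = 1e-12          # assumed target logical error rate per operation

d = 3                   # surface-code distance (grows in odd steps)
while 0.1 * (p / p_th) ** ((d + 1) / 2) > target:
    d += 2

print(f"code distance d = {d}")                           # -> 31
print(f"~{2 * d * d} physical qubits per logical qubit")  # -> ~1922
```

So even under optimistic assumptions, each "perfect" qubit costs on the order of a thousand physical ones.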

It also means that you cannot trivially scale up a physical quantum algorithm: a physical Grover's search on a list with 10 entries might look very different from one on a list with 11 entries.
This makes quantum computing a nonstarter for many problems where you cannot afford the time it takes to engineer a custom solution.
And even worse: you cannot even test whether your fancy new algorithm works in a simulator, since the thing you are trying to simulate is precisely the intractable quantum noise (something which, ironically, a quantum computer is excellent at simulating).

In general, you should be really careful when reading quantum computing articles, since it's very easy to build some weird distribution that is basically impossible for a normal computer to work with, but that doesn't mean it's anything practical. E.g. just starting the quantum computer, "booping" one bit, and then waiting for 3 ns will give you a quantum noise distribution that is intractable to simulate classically (the same is true if you don't do anything at all: there are literally research teams of top scientists whose job boils down to "what are quantum computers computing if we don't give them instructions").

Meanwhile, the progress of classical computing (and e.g. hybrid analog computing) is much faster than that of quantum computing, which means the only people really deeply invested in quantum computing are the ones that cannot afford to miss out, just in case there is in fact something there:

  • finance
  • defence
  • security
  • ....
[–] ZickZack@kbin.social 2 points 2 years ago

I can just go to the search tab and look for the magazine (e.g. search for "retro gaming") and find all the other instances.
I think a fair number of people forget to switch the search to magazines before looking (or are actually subscribed to other instances but don't notice it).

[–] ZickZack@kbin.social 6 points 2 years ago (4 children)

That depends on the size of the instance: keep in mind that, for the most part, kbin is just a list of text files. 2 GB of RAM sounds like a lot less than it is, since people are used to desktops that have all sorts of additional stuff running on the side, which pushes up overall system consumption.

[–] ZickZack@kbin.social 2 points 2 years ago

I see no indication that this was a top-down decision forced by management (just from having talked to some of the developers at Gamescom a couple of years ago).
The concept really wasn't horrible; it only looks that way now that we've seen the product. A stealth game themed around Gollum is not a dumb idea.
There's lots of stuff you could do, like using the ring for temporary invisibility at the cost of losing some resource (e.g. sanity) that you then need to recover.

The problem with this game is that the idea being bad doesn't even really factor into its quality: the bare-bones graphics and fundamental gameplay are so broken that the lack of original ideas never comes into play.

If this were just a no-frills Thief clone with a Gollum skin, nobody would bat an eye. The problem is that even this low bar of "some stealth game + Gollum" is not reached.

In fact, we have a very direct comparison to a different Gollum-like stealth game produced by an indie developer that was a smash hit: "Styx: Master of Shadows" is a climbing-based stealth game featuring a small, green, goblin-like protagonist who has to deal with a powerful but risky-to-use substance.

[–] ZickZack@kbin.social 13 points 2 years ago (2 children)

They chose to do this. Daedalic has historically been a point-and-click developer, but they wanted to diversify, especially since their previous title "The Pillars of the Earth" flopped. They first tried their hand at RTS with "A Year of Rain", which is simply not that good, and then looked into Gollum.
You also can't really make the argument that the project was rushed out the door, considering the game was supposed to release in 2021 (two years ago).

They tried something they had no experience in, not through coercion but because they wanted to, and produced a game of shockingly low quality. Since this wasn't their first flop, but just the latest in a long series of flops (though the most expensive and highest-profile one), the studio closed.

[–] ZickZack@kbin.social 8 points 2 years ago

PeerTube is inherently very scalable at relatively little cost, thanks to an artifact of all social media platforms: most of the traffic is driven by a tiny fraction of the videos/magazines/etc.

Services like YouTube use this as a way to cache data close to where it's going to be streamed: e.g. Netflix works with ISPs to install small servers at their locations to lessen the burden on their (and the ISPs') systems.
But with centralised systems you can only push this so far, since ultimately everything is still concentrated in one central location.

Hypothetically, if you could stop this super-linear scaling in users (you pay per user, plus the overhead generated from managing them at scale), you could easily compete with the likes of YouTube, simply because at sufficient scale all the other costs get amortized away.

PeerTube does exactly this by serving videos via WebTorrent: essentially, for every "chunk" of a video you download, you also host that chunk for other people to download. That means PeerTube itself theoretically only has to serve every unique video once (or less than once, since chunks stay in the network for a while). You rid yourself of the curse of scaling linearly with users and instead scale sub-linearly with the number of unique videos (how sub-linearly depends on the lifetime of the individual torrents, i.e. how long a single video chunk stays available to others).
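
A toy back-of-envelope model of what that buys you (all numbers here are made-up assumptions, not PeerTube measurements): in the centralised case the origin pays for every viewer, while with chunk seeding it ideally uploads each chunk once, plus a churn term for chunks that have expired from the peer network:

```python
VIDEO_MB = 100   # assumed size of one video
CHURN = 0.1      # assumed fraction of chunk requests that miss all peers

def centralised(viewers: int) -> float:
    """Origin upload if the server alone serves every viewer."""
    return viewers * VIDEO_MB

def peer_seeded(viewers: int) -> float:
    """Origin upload if peers re-serve chunks: seed each chunk once,
    then only pay for the requests that no peer can answer anymore."""
    return VIDEO_MB + (viewers - 1) * VIDEO_MB * CHURN

for v in (1, 10, 1_000):
    print(f"{v:>5} viewers: centralised {centralised(v):>9,.0f} MB, "
          f"peer-seeded {peer_seeded(v):>9,.0f} MB")
```

The better chunks stay alive in the swarm (lower `CHURN`), the closer the origin's cost gets to "pay once per unique video" regardless of audience size.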

The cost that remains for every PeerTube instance is essentially file hosting (and encoding the videos, but that too only scales with the number of videos and could be pushed onto the uploader using WASM video encoders).
Storage itself isn't cheap, but it's also not ungodly expensive (especially since you can amortize the costs over a long time as your platform grows, with storage prices in continual massive decline).

Platforms like Netflix and YouTube cannot do this because

  1. Netflix is a paid service, and people don't want to do the hosting job for Netflix after having already paid for the service
  2. YouTube has to serve ads, which is incompatible with the "users host the content" method

In general, torrenting is a highly reliable and well-tested method that scales fantastically well to large data needs (it quite literally becomes more efficient the more people use it).

[–] ZickZack@kbin.social 2 points 2 years ago

Just as a quick check: are you sure you are in your "subscribed" view?
kbin by default uses an "all" view, which you can change at the top right next to your username (the "table" menu).

[–] ZickZack@kbin.social 13 points 2 years ago (1 children)

And don't forget that even after that you still have to watch baked-in "This video is sponsored by <insert shady company here>" ads, since the actual revenue that YouTube passes to creators is so low that, to keep the ship afloat, they have to look for additional revenue streams.

[–] ZickZack@kbin.social 46 points 2 years ago (11 children)

Go to the relevant domain's front page (e.g. https://kbin.social/d/kbin.social for kbin.social).
The URL scheme is "https://kbin.social/d/DOMAINHERE", assuming you are currently on kbin.social.
On the right, in the sidebar, you can see "Domain" and below that options to subscribe or block.
Really, it's the same thing as magazines, just that you generally don't visit the domain itself.

[–] ZickZack@kbin.social 13 points 2 years ago

The inability to source is a huge problem, but you also have to keep in mind that complaining about AI has other objectives beyond the obvious "AI bad".

  • it's marketing: "Our thing is so powerful it could irreparably change someone's life" is still advertising, even if that irreparable change is bad. Saying "AI so powerful it's dangerous" just sounds less advertise-y than "AI so powerful you cannot afford not to invest in it", despite both leading to similar conclusions (you can look back at the "fearvertising" done during the original AI boom: same paint, different color)
  • it's begging for regulatory barriers to be put in place: everyone with a couple million dollars can build an LLM from scratch. That might sound like a lot, but it's only getting cheaper, and it doesn't take highly intricate systems to replicate. Specifically, the ability to finetune a large model with few datapoints allows even open-source non-profits like OpenAssistant to compete with the likes of Google and OpenAI: Google made that very explicit in their leaked "We have no moat" memo. This is why you see people like Sam Altman talking to Congress about the dangers of AI: he has no serious competitive advantage and hopes that with sufficient fear-mongering he can get the government to give him one.

Complaining about AI is as much about the AI itself as it is about the economic incentives behind it.
