For decades, many computer scientists have presumed that for practical purposes, the outputs of good hash functions are generally indistinguishable from genuine randomness — an assumption they call the random oracle model.
Er, no. That this is false is taught in virtually every first-year CS course.
Computer programmers and other IT workers? Sure… but hash functions have never been considered a substitute for pure randomness.
That’s why every computer has a random number generator based on thermal noise, interrupt and I/O timing, and other genuinely random physical sources. And even then, we have to be careful not to hash the randomness out of the source data.
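As a small illustration of the point about using the OS entropy source rather than hashing something deterministic: in Python, `os.urandom` and the `secrets` module draw from the kernel's entropy pool, which is seeded from hardware events. This is just a sketch of how you'd ask the OS for randomness, not a claim about any particular kernel's internals.

```python
import secrets

# secrets draws from the OS CSPRNG (e.g. /dev/urandom on Linux,
# getrandom() where available), which the kernel seeds from hardware
# events such as interrupt timing and thermal noise -- not from a
# hash of deterministic program data.
key = secrets.token_bytes(32)  # 32 bytes of OS-provided randomness
print(len(key))
```

`secrets` is preferred over the `random` module here because `random` is a deterministic PRNG (Mersenne Twister) and is explicitly documented as unsuitable for security purposes.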