It definitely got dumber. I stopped paying for Plus because the current GPT-4 isn't much better than the old GPT-3.5.
If you check downdetector.com, it's pretty obvious why they did it: their infrastructure just couldn't keep up with serving the full-size models.
I think I'll get myself a proper GPU so I can run my own LLMs without worrying that a model might quietly stop working for my use case.
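
For what it's worth, local inference really is only a few lines once the GPU is set up. Here's a rough sketch using Hugging Face transformers; the model name is just an example (swap in whatever fits your VRAM), and it assumes a CUDA-capable card plus the accelerate package for device_map="auto":

    # Minimal local-LLM sketch with Hugging Face transformers.
    # Assumptions: CUDA GPU, `accelerate` installed, and a model that fits in VRAM.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # example 7B model, not a recommendation
        device_map="auto",    # place layers on the GPU automatically
        torch_dtype="auto",   # fp16/bf16 if the hardware supports it
    )

    out = generator(
        "Explain one advantage of running an LLM locally:",
        max_new_tokens=128,
    )
    print(out[0]["generated_text"])

The nice part is that whatever weights you download today behave the same way a year from now, since nothing changes behind your back.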