Training models on CPU isn't really practical, so your options are basically to buy a GPU or rent time on someone else's server with the necessary compute. GPU VRAM is the main constraint on the size of model you can train, and training generally needs far more VRAM than just running a model for inference, so renting is probably the more flexible option.
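To get a feel for why training needs so much more VRAM, here's a rough back-of-envelope sketch. It assumes the common rule of thumb of ~2 bytes per parameter for fp16 inference and ~16 bytes per parameter for fp32 training with Adam (4 for weights, 4 for gradients, 8 for optimizer state), and it ignores activation memory, batch size, and framework overhead, so treat the numbers as ballpark figures only:

```python
# Back-of-envelope VRAM estimate (ignores activations, batch size, overhead).
# Assumed per-parameter costs:
#   inference in fp16:            ~2 bytes/param
#   training in fp32 with Adam:  ~16 bytes/param
#     (4 weights + 4 gradients + 8 Adam optimizer state)

def vram_gb(n_params: float, bytes_per_param: float) -> float:
    """Convert a parameter count and a per-parameter byte cost into GB."""
    return n_params * bytes_per_param / 1024**3

n = 7e9  # hypothetical 7B-parameter model

infer = vram_gb(n, 2)   # just running it in fp16
train = vram_gb(n, 16)  # training it with Adam in fp32

print(f"inference: {infer:.0f} GB, training: {train:.0f} GB")
# -> inference: 13 GB, training: 104 GB
```

So under these assumptions, a 7B model fits comfortably on a single 24GB card for inference but needs something like an 80GB A100 (or several smaller GPUs) just for the weights, gradients, and optimizer state during training.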
Just an example of a place you can rent GPU time from: https://vast.ai/pricing
Even for a very powerful GPU like an A100 with 80GB of VRAM, the prices aren't that bad: something like $2-3 per hour.