@ErikJonker Yes, but AZURE, which runs GPT, is ALSO a huge scaling machine, at least as large as Google Cloud.
It's true Google's dedicated Tensor Processing Units (TPUs) are more efficient for this workload than generic GPUs, which probably does give them a slight edge relative to GPT.
But that still doesn't change the fundamental maths of this price point: the FLOPS cost is the FLOPS cost, no matter how efficiently you allocate it. At some point you just have to pay it.
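To make the point concrete, here's a back-of-envelope sketch of that compute floor. Every number in it (model size, hardware price, throughput, utilization) is an illustrative assumption, not a real price sheet:

```python
# Back-of-envelope: the serving cost per token is bounded below by raw compute
# cost, whoever runs the hardware. All figures below are illustrative assumptions.

def cost_per_million_tokens(params_billion: float,
                            price_per_flop: float,
                            utilization: float) -> float:
    """Lower-bound serving cost (USD per 1M tokens) for a dense transformer.

    Rule of thumb: inference takes roughly 2 FLOPs per parameter per token.
    """
    flops_per_token = 2 * params_billion * 1e9
    effective_price = price_per_flop / utilization  # idle capacity still costs money
    return flops_per_token * effective_price * 1e6

# Assumed: a 70B-parameter model on a $2/hr accelerator delivering
# ~1e15 FLOP/s peak, running at 40% utilization.
price_per_flop = 2.0 / 3600 / 1e15
print(f"${cost_per_million_tokens(70, price_per_flop, 0.40):.2f} per 1M tokens")
```

More efficient chips (like TPUs) raise utilization or lower the price per FLOP, but they only shift the constants; the floor itself stays.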