Inference costs
With inference costs being substantial for LLMs (assuming people use the largest model available), I can imagine a monthly “intelligence” utility bill, similar to internet or electricity. Perhaps you bring your own foundation model and API key to different AI-powered services and get charged based on usage.
A lot of services could do well with a niche, smaller, or more narrowly trained model, for which the inference cost would be negligible. But for more general use cases such as learning and search, I can imagine folks being fine with a ~$20/month bill for a very smart friend.
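As a back-of-envelope check on that ~$20/month figure, here is a small sketch of usage-based billing. The per-token prices and usage numbers are illustrative assumptions (roughly in line with GPT-4's launch API pricing), not a quote:

```python
# Illustrative, assumed prices: $0.03 per 1K prompt tokens,
# $0.06 per 1K completion tokens (not an official rate card).
PROMPT_PRICE_PER_1K = 0.03
COMPLETION_PRICE_PER_1K = 0.06

def monthly_cost(queries_per_day, prompt_tokens=200,
                 completion_tokens=500, days=30):
    """Estimate one person's monthly usage-based bill."""
    per_query = (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
              + (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K
    return queries_per_day * days * per_query

# e.g. asking ~20 questions a day lands right in that ballpark:
print(f"${monthly_cost(20):.2f}")  # → $21.60
```

Under these assumptions the "very smart friend" costs about as much as a streaming subscription, which is why a flat ~$20/month price point feels plausible.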
Even though it might spit out false information in a small percentage of cases, I’m a big fan of GPT-4 for general search and learning use cases. It’s replacing Google for me. Related tweet.