Even with ever more training compute and increasingly sophisticated algorithms from all the big AI labs (OpenAI, Google DeepMind, Anthropic, etc.), LLMs still fall short of genuinely original creative reasoning, a.k.a. superintelligence. Almost all LLMs in production use a Mixture of Experts (MoE) architecture, where a gating network inside the model acts as a gatekeeper, routing each token to a small set of specialized expert sub-networks.
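To make the gating idea concrete, here is a toy numpy sketch of top-k expert routing. The expert count, dimensions, and random weights are made up purely for illustration; real MoE routers are trained jointly with the experts.

```python
import numpy as np

def top_k_gate(token_hidden, gate_weights, k=2):
    """Toy MoE gate: score every expert for one token and keep the top-k.

    token_hidden: (d,) hidden state for a single token
    gate_weights: (num_experts, d) router matrix (random here, learned in practice)
    """
    logits = gate_weights @ token_hidden            # one score per expert
    top = np.argsort(logits)[-k:]                   # indices of the k highest-scoring experts
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                            # softmax over the chosen experts
    return top, probs                               # which experts fire, and their mixing weights

rng = np.random.default_rng(0)
num_experts, d = 8, 16
token = rng.normal(size=d)
router = rng.normal(size=(num_experts, d))
chosen, weights = top_k_gate(token, router, k=2)
print("experts:", chosen, "weights:", weights.round(3))
```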
For harder problems, what if we used LLMs from different labs and built an orchestration layer, itself another, more generalized LLM, as the gatekeeper that routes each query to the best model? A rough sketch of what that could look like is below.
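This is a minimal sketch, not a finished design: a generalist "gatekeeper" model classifies the query, and the query is then forwarded to whichever lab's model that label maps to. The model names, expert labels, and routing prompt are all placeholders I picked for illustration.

```python
"""Rough sketch of a cross-lab orchestration layer (placeholder models and labels)."""
from openai import OpenAI      # pip install openai   (reads OPENAI_API_KEY)
import anthropic               # pip install anthropic (reads ANTHROPIC_API_KEY)

oai = OpenAI()
claude = anthropic.Anthropic()

def ask_openai(prompt: str) -> str:
    r = oai.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    r = claude.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return r.content[0].text

# Illustrative roster: each label maps to one lab's specialist model.
EXPERTS = {"math": ask_openai, "code": ask_anthropic, "general": ask_openai}

def gatekeeper(query: str) -> str:
    """Use a cheap generalist model as the router: return one expert label."""
    labels = ", ".join(EXPERTS)
    r = oai.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any inexpensive generalist model would do
        messages=[{
            "role": "user",
            "content": f"Pick the best category for this query from [{labels}]. "
                       f"Reply with the label only.\n\nQuery: {query}",
        }],
    )
    label = r.choices[0].message.content.strip().lower()
    return label if label in EXPERTS else "general"

def orchestrate(query: str) -> str:
    return EXPERTS[gatekeeper(query)](query)

if __name__ == "__main__":
    print(orchestrate("Prove that the sum of two even integers is even."))
```

The interesting design questions start after this point: how the gatekeeper learns which lab is actually strongest per domain, and whether it should merge answers from several models rather than pick one.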
Stay tuned.