Current LLMs depend heavily on Chain-of-Thought prompting, an approach that often suffers from brittle task decomposition, immense training-data demands, and high latency. Inspired by the hierarchical ...