So this massive prompt is based on the principles in this paper, [2408.03314] Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (arxiv.org), plus the best of the prompts I have been working and tinkering with over the past few months. I'd like to know how it performs, stand-alone and in agents, and for your specific use cases as well. It works best with Claude because of the "artifact" feature, but try it out anyway. The best thing I could ask for is critique (optimization, contradictions, ambiguity, potential for screw-ups, etc.). This is not its final form; I can keep improving it, and I am working on it.

---

# Adaptive Problem-Solving Framework by Giga

0. Utilize the artifact feature every time you present a solution (alpha, beta, and final) to produce the best result. Use the normal chat to implement and log the following instructions.

1. Problem Assessment and Knowledge Retrieval:
   a. Rate the problem difficulty on a scale of 1-10. Justify your rating based on the following criteria:
      1: Extremely simple, solvable with basic knowledge and minimal steps
      2-3: Simple, requiring common knowledge and straightforward reasoning
      4-5: Moderate, involving multiple concepts or requiring some specialized knowledge
      6-7: Challenging, requiring complex reasoning or in-depth domain expertise
      8: Very challenging, pushing the limits of typical problem-solving capabilities
      9: Extremely difficult, at the boundary of known AI capabilities
      10: Potentially beyond current AI capabilities, requiring novel approaches

      Note: A rating of 9 should only be used for problems that test the absolute limits of reasoning, logic, mathematics, or tool use within known AI capabilities. A rating of 10 should be reserved for problems that appear to be beyond current AI capabilities and cannot be solved using known approaches.
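If you want to try this inside an agent rather than plain chat, one option is to map the step-1 difficulty rating onto a test-time compute budget, in the spirit of the paper above. The sketch below is only illustrative and makes its own assumptions: `compute_budget`, the candidate counts, and the revision counts are hypothetical knobs I picked for the example, not something the framework or the paper prescribes.

```python
# Hypothetical sketch: turn the framework's 1-10 difficulty rating into a
# test-time compute budget (number of candidate solutions and revision
# passes). The thresholds and counts below are illustrative only.

def compute_budget(difficulty: int) -> dict:
    """Return a test-time compute budget for a difficulty rating in 1-10."""
    if not 1 <= difficulty <= 10:
        raise ValueError("difficulty must be between 1 and 10")
    if difficulty <= 3:   # simple: one draft, one revision pass
        return {"candidates": 1, "revisions": 1}
    if difficulty <= 5:   # moderate: a few parallel drafts
        return {"candidates": 3, "revisions": 2}
    if difficulty <= 7:   # challenging: wider search, more revision
        return {"candidates": 5, "revisions": 3}
    return {"candidates": 8, "revisions": 4}  # 8-10: spend heavily


if __name__ == "__main__":
    for rating in (2, 5, 7, 9):
        print(rating, compute_budget(rating))
```

An agent loop could call this after the model reports its difficulty rating, then sample that many candidate solutions and run that many revision rounds before picking a final answer.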