Anthropic, the AI startup behind the chatbot Claude, has released a detailed guide to help users improve their interactions with its AI assistant. The central message? Think of Claude as a brilliant but inexperienced employee — one who is eager to help but has amnesia. “When interacting with Claude, think of it as a brilliant but very new employee (with amnesia) who needs explicit instructions,” the company’s guide emphasizes.
The guide introduces users to the fundamentals of prompt engineering, offering actionable techniques to generate more useful and accurate responses from Claude. The company stresses that simply asking a chatbot to “summarise this report” won’t yield the best results. Instead, users should offer context — who the summary is for, how long it should be, and whether it should focus on financials, risks, or opportunities.
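In practice, the difference is easy to see in code. The sketch below uses Anthropic's Python SDK; the prompt wording, the sample report text, and the model alias are invented for illustration and are not taken from the guide itself:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical report text, stands in for a real document.
report_text = "Q3 revenue rose 8% year over year, driven by subscriptions..."

# Vague: Claude must guess the audience, length, and focus.
# prompt = "Summarise this report.\n\n" + report_text

# Context-rich: audience, length, and focus are all explicit.
prompt = (
    "Summarise the report below for the board of directors in three "
    "bullet points, focusing on financial risks and growth opportunities. "
    "Keep it under 100 words.\n\n" + report_text
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # substitute whichever Claude model you use
    max_tokens=300,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```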
One of the core principles Anthropic recommends is being as specific and clear as possible. Ambiguous prompts often result in vague or unhelpful answers. Claude doesn’t know your style, your goals, or what vague instructions like “make it pop” mean. Clarity is key: spell out the intended audience, the output format, and the desired outcome. Structuring requests as bullet points or numbered lists can also improve consistency.
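A structured request along those lines might look like the following sketch; the requirements list and draft text are hypothetical, not examples from Anthropic's guide:

```python
# A hypothetical prompt that spells out audience, format, and desired
# outcome as a bulleted list instead of leaving Claude to guess.
prompt = """Rewrite the draft below as a product announcement.

Requirements:
- Audience: non-technical customers
- Format: three short paragraphs, no jargon
- Outcome: readers understand what changed and why it helps them

Draft:
{draft}
"""

print(prompt.format(draft="We migrated the backend to a new queueing system..."))
```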
Another technique the guide recommends is “multi-shot prompting,” which involves showing Claude examples of the desired response. This helps the AI better replicate tone, structure, and quality. “Examples are your secret weapon for getting Claude to generate exactly what you need,” the guide states.
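A minimal sketch of multi-shot prompting is shown below; the tickets, labels, and model alias are invented for the example:

```python
import anthropic

client = anthropic.Anthropic()

# Three worked examples show Claude the exact format and labels to
# replicate before it sees the real input.
prompt = """Classify each support ticket as BUG, BILLING, or FEATURE.

Ticket: "I was charged twice this month."
Label: BILLING

Ticket: "The export button crashes the app."
Label: BUG

Ticket: "It would be great to have a dark mode."
Label: FEATURE

Ticket: "My invoice shows the wrong company name."
Label:"""

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # substitute whichever Claude model you use
    max_tokens=10,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)  # expected: BILLING
```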
Additionally, Anthropic suggests giving Claude “space to think.” Rather than demanding an immediate answer, users can prompt the AI to walk through its logic, which tends to produce more thoughtful results. This is referred to as the “chain-of-thought” approach.
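One common way to grant that space, sketched below with invented numbers, is to ask for reasoning before the answer. The <thinking>/<answer> tag convention used here is a widely seen pattern, not a requirement from the guide:

```python
import anthropic

client = anthropic.Anthropic()

# Asking Claude to reason step by step before committing to an answer.
prompt = (
    "A subscription costs $18 per month or $180 per year. A team of 7 "
    "wants annual billing. How much does the team save per year versus "
    "monthly billing?\n\n"
    "Work through the problem step by step inside <thinking> tags, then "
    "give only the final figure inside <answer> tags."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # substitute whichever Claude model you use
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```

For reference, monthly billing would cost 7 × $216 = $1,512 a year against 7 × $180 = $1,260 annually, so the expected final figure is $252.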
Users are also encouraged to assign Claude a specific role, such as journalist, financial analyst, or therapist, to tailor the tone and focus of its responses. Role prompting helps align output with user expectations, especially in complex domains such as legal or editorial work.
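With Anthropic's SDK, a role is typically assigned through the system parameter, as in this sketch; the persona, question, and sample text are invented for illustration:

```python
import anthropic

client = anthropic.Anthropic()

# Hypothetical input for the example.
earnings_text = "Revenue grew 12% year over year; operating margin fell 3 points."

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # substitute whichever Claude model you use
    max_tokens=400,
    # The system parameter sets the persona that shapes tone and focus.
    system="You are a financial analyst briefing institutional investors.",
    messages=[{
        "role": "user",
        "content": "What should I look for in this earnings summary?\n\n" + earnings_text,
    }],
)
print(response.content[0].text)
```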
To minimize inaccuracies and hallucinations, Anthropic recommends letting Claude express uncertainty. “Explicitly give Claude permission to admit uncertainty,” the guide advises, suggesting that users also prompt the model to cite sources or flag gaps in its information.
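Granting that permission can be as simple as one extra sentence in the prompt, as in this sketch; the question is made up and “Acme Corp” is a placeholder:

```python
import anthropic

client = anthropic.Anthropic()

# Explicitly allowing "I don't know" as a valid answer.
prompt = (
    "Who was the CFO of Acme Corp in 2019?\n\n"
    "Only answer if you are confident. If you are not sure, say "
    "\"I don't know\" and explain what you would need to verify it."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # substitute whichever Claude model you use
    max_tokens=200,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```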