Claude API for Beginners: Your First API Call in 10 Minutes
The Claude API lets you build applications with Claude — chatbots, coding assistants, content tools, data extractors, and agents. Getting started takes 10 minutes: create an Anthropic account, get an API key, install the SDK, and send your first message. This guide walks through each step with complete code.
Step 1: Get an API key
- Go to console.anthropic.com
- Create an account (email + password, or Google)
- Add payment information (you won't be charged until you use the API)
- Navigate to API Keys → Create Key
- Copy the key immediately — you won't see it again
API key format: sk-ant-api03-... (starts with sk-ant-)
Security rule: never commit your API key to git. Never share it. Store it as an environment variable.
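If you later keep the key in a .env file (covered in Step 3), make sure git ignores it. A minimal .gitignore entry:

```
# .gitignore — keep secrets out of version control
.env
```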
Step 2: Install the SDK
Python:
pip install anthropic
# or
pip install "anthropic[bedrock,vertex]" # if using AWS or GCP (quotes needed in zsh)
Node.js / TypeScript:
npm install @anthropic-ai/sdk
# or
bun add @anthropic-ai/sdk
Step 3: Set your API key
macOS/Linux (add to ~/.zshrc or ~/.bashrc for persistence):
export ANTHROPIC_API_KEY="sk-ant-..."
Windows (Command Prompt):
set ANTHROPIC_API_KEY=sk-ant-...
Python project (.env file):
ANTHROPIC_API_KEY=sk-ant-...
Then load with python-dotenv:
from dotenv import load_dotenv
load_dotenv()
Never put the key directly in your code — it's easy to accidentally commit to git.
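To fail fast with a clear message when the key is missing, you can check the environment yourself before constructing the client. A minimal sketch (the helper name and error text are our own, not part of the SDK):

```python
import os

def require_api_key() -> str:
    """Return the Anthropic API key, raising a clear error if it is absent."""
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError(
            "ANTHROPIC_API_KEY is not set. Export it in your shell, "
            "or add it to a .env file and call load_dotenv() first."
        )
    if not key.startswith("sk-ant-"):
        raise RuntimeError(
            "ANTHROPIC_API_KEY doesn't look like a Claude key (expected sk-ant-...)"
        )
    return key
```

Calling this once at startup turns a confusing AuthenticationError later into an immediate, actionable message.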
Step 4: Your first API call
Python:
import anthropic
client = anthropic.Anthropic()
# The client automatically reads ANTHROPIC_API_KEY from your environment
message = client.messages.create(
model="claude-haiku-4-5", # Cheapest model for learning
max_tokens=1024,
messages=[
{"role": "user", "content": "Hello, Claude! What's 2 + 2?"}
]
)
print(message.content[0].text)
# Output: "2 + 2 equals 4."
TypeScript:
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const message = await client.messages.create({
model: "claude-haiku-4-5",
max_tokens: 1024,
messages: [
{ role: "user", content: "Hello, Claude! What's 2 + 2?" }
],
});
console.log(message.content[0].text);
Run it:
python first_call.py
# or
npx tsx first_call.ts # tsx runs TypeScript with top-level await without extra config
You should see Claude's response printed. If you see an AuthenticationError, your API key isn't set correctly.
Understanding the request structure
Every API call has the same structure:
client.messages.create(
model="...", # Which Claude model to use
max_tokens=1024, # Maximum length of Claude's response
system="...", # Optional: instructions for Claude's behavior (system prompt)
messages=[ # The conversation history
{"role": "user", "content": "..."}, # User messages
{"role": "assistant", "content": "..."}, # Claude's previous responses
{"role": "user", "content": "..."}, # Another user message
]
)
model: which Claude to use. Start with claude-haiku-4-5 (cheapest, fastest).
max_tokens: maximum tokens in Claude's response. 1024 = ~750 words. Set higher for long responses.
system: tells Claude its role and behavior. Optional but powerful.
messages: the conversation. Must start with a user message; it typically ends with one too (a trailing assistant message is treated as a prefill for Claude to continue).
The response structure
message = client.messages.create(...)
print(message.content[0].text) # The response text
print(message.model) # Which model was used
print(message.stop_reason) # Why Claude stopped: "end_turn" or "max_tokens"
print(message.usage.input_tokens) # Tokens in your request
print(message.usage.output_tokens) # Tokens in Claude's response
Multi-turn conversations (chat)
To have a back-and-forth conversation, pass the full history each time:
import anthropic
client = anthropic.Anthropic()
conversation = []
def chat(user_message: str) -> str:
"""Send a message and get a response, maintaining conversation history."""
conversation.append({"role": "user", "content": user_message})
response = client.messages.create(
model="claude-haiku-4-5",
max_tokens=1024,
messages=conversation,
)
assistant_message = response.content[0].text
conversation.append({"role": "assistant", "content": assistant_message})
return assistant_message
# Example conversation
print(chat("My name is Alex."))
print(chat("What's my name?"))
# Claude will correctly answer "Alex"
The Claude API is stateless — it doesn't remember previous conversations. You're responsible for passing the history. This is why you build the conversation list and pass it every time.
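Because you resend the whole history, long chats grow in cost and can eventually exceed the model's context window. A common mitigation is to keep only the most recent turns. A minimal sketch (the window size is arbitrary; real applications often summarize old turns instead):

```python
def trim_history(conversation: list[dict], max_messages: int = 20) -> list[dict]:
    """Keep only the most recent messages, making sure the trimmed list
    still starts with a user message (the API requires it)."""
    trimmed = conversation[-max_messages:]
    # The cut may leave an assistant message first — drop leading ones
    while trimmed and trimmed[0]["role"] != "user":
        trimmed.pop(0)
    return trimmed
```

You would call `messages=trim_history(conversation)` instead of passing the full list in the chat() function above.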
Adding a system prompt
A system prompt gives Claude standing instructions that apply to all messages:
response = client.messages.create(
model="claude-haiku-4-5",
max_tokens=1024,
system="You are a helpful coding assistant. When you write code, always include error handling. Explain your code in plain English after showing it.",
messages=[{"role": "user", "content": "Write a function to read a JSON file"}]
)
Because the API is stateless, you send the system parameter with every request — but it applies to the whole conversation, so you never repeat it inside the messages list.
Understanding tokens and pricing
Tokens: Claude uses tokens (roughly 1 token ≈ 4 characters). "Hello, how are you?" is about 5 tokens.
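The 4-characters-per-token figure is only a rule of thumb, but it's handy for rough cost estimates before sending a request. A minimal sketch (real counts come back in response.usage after each call):

```python
def rough_token_count(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text.
    The real tokenizer will differ, especially for code and non-English text."""
    return max(1, len(text) // 4)

print(rough_token_count("Hello, how are you?"))  # → 4 (the real tokenizer says ~5)
```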
Pricing (Claude Haiku 4.5 as of April 2026):
- Input tokens: $0.80 per million
- Output tokens: $4.00 per million
Cost per request: for a typical 500-token input + 200-token output:
- Input: 500 × $0.80 / 1,000,000 = $0.0004
- Output: 200 × $4.00 / 1,000,000 = $0.0008
- Total: $0.0012 per request ($1.20 per 1,000 requests)
At this pricing, 10,000 requests per day costs about $12/day. For a learning project with hundreds of requests, cost is negligible.
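The arithmetic above generalizes into a small helper you can run against response.usage. A minimal sketch using the Haiku prices quoted above (hard-coded; check current pricing before relying on it):

```python
HAIKU_INPUT_PER_MTOK = 0.80   # $ per million input tokens (April 2026)
HAIKU_OUTPUT_PER_MTOK = 4.00  # $ per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the Haiku prices above."""
    return (input_tokens * HAIKU_INPUT_PER_MTOK
            + output_tokens * HAIKU_OUTPUT_PER_MTOK) / 1_000_000

# Matches the worked example: 500 input + 200 output tokens
print(f"${request_cost(500, 200):.4f}")  # → $0.0012
```

In practice you'd pass `message.usage.input_tokens` and `message.usage.output_tokens` from a real response.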
Common errors and fixes
AuthenticationError: API key not found or invalid.
Fix: verify ANTHROPIC_API_KEY is set in your environment. Print it with print(os.environ.get("ANTHROPIC_API_KEY")) — it should show your key.
RateLimitError: too many requests or tokens per minute.
Fix: wait a minute and retry. For production, add retry logic. New accounts start with low rate limits.
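For production you'd typically rely on the SDK's built-in retries, but a simple exponential backoff loop is easy to hand-roll. A minimal sketch (the delays are illustrative; in real code you'd catch anthropic.RateLimitError specifically rather than a bare Exception):

```python
import time

def with_backoff(fn, max_attempts: int = 5, base_delay: float = 1.0):
    """Call fn(), retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts — let the caller see the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Usage: `with_backoff(lambda: client.messages.create(...))`.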
BadRequestError ("first message must use the user role"): the messages list started with an assistant message.
Fix: the messages list must start with a user message — reorder or drop the leading assistant entry.
max_tokens error in response: response was cut off because it hit max_tokens.
Fix: increase max_tokens. For long responses, use 4096 or higher.
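You can also detect truncation programmatically by checking stop_reason and retrying with a larger budget. A minimal sketch (create_fn stands in for a call to client.messages.create with everything but max_tokens fixed):

```python
def complete_with_budget(create_fn, max_tokens: int = 1024, limit: int = 8192):
    """Call create_fn(max_tokens=...), doubling the budget until the
    response finishes naturally or the limit is reached."""
    while True:
        response = create_fn(max_tokens=max_tokens)
        if response.stop_reason != "max_tokens" or max_tokens >= limit:
            return response
        max_tokens = min(max_tokens * 2, limit)
```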
Frequently asked questions
Is there a free tier for the Claude API? No free API tier as of April 2026. You pay for what you use. There's no monthly fee — you're charged per token. New accounts get a small credit for initial testing.
What model should a beginner use?
Start with claude-haiku-4-5 to keep costs near zero while learning. Switch to claude-sonnet-4-5 when you need better quality for complex tasks.
How is the Claude API different from claude.ai? Claude.ai is the consumer chat interface. The Claude API is for developers building applications. The API doesn't have built-in memory, a web interface, or Claude.ai's special features (projects, custom instructions). You build all of that yourself.
Do I need to understand machine learning to use the API? No. The API is a standard REST API (with SDKs for Python and TypeScript). If you can make HTTP requests or use an SDK, you can use the API. Machine learning knowledge is not required.
What's the difference between the Anthropic API and other AI APIs? All major AI providers (Anthropic, OpenAI, Google) offer similar API structures. The Claude API has a particularly clean design and Claude's instruction-following makes it well-suited for applications requiring precise output. See the Claude vs GPT-4o comparison for detailed differences.
Related guides
- How to Write System Prompts for Claude — making your application behave consistently
- Claude JSON Structured Output: Getting Reliable JSON Every Time — extracting structured data from Claude responses
Take It Further
Claude Agent SDK Cookbook: 40 Production Patterns — Everything from your first API call to production-ready agent architecture: 40 patterns with complete Python code, covering all the concepts introduced in this beginner guide and everything that comes after.
→ Get the Agent SDK Cookbook — $49
30-day money-back guarantee. Instant download.