Analyze Command
The analyze command estimates token usage and cost for a given prompt.
🧱 Usage
promptstream analyze --template "<path or text>" [--var key=value ...] [--model <modelId>] [--output json|table]
Options
| Option | Description |
|---|---|
| `--template` | Inline text or path to a `.txt` or `.json` template. (Required) |
| `--var` | One or more variables in `key=value` format. |
| `--model` | Optional model ID (defaults to `gpt-4o-mini`). |
| `--output` | Output format: `table` (default) or `json`. |
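Variables passed with `--var` are substituted into the template to form the prompt that gets analyzed. A minimal sketch of that idea in Python, assuming a `{key}`-style placeholder syntax (the actual placeholder syntax used by promptstream templates is not shown here):

```python
# Illustrative only: assumes templates use {key} placeholders, e.g. "Summarize {topic}."
def render_template(template_text: str, variables: dict[str, str]) -> str:
    rendered = template_text
    for key, value in variables.items():
        rendered = rendered.replace("{" + key + "}", value)
    return rendered

# --var topic=AI applied to an inline template string
print(render_template("Write a short summary about {topic}.", {"topic": "AI"}))
# Write a short summary about AI.
```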
🧪 Examples
Analyze a file template
promptstream analyze --template "./examples/summary.txt" --var topic=AI --model gpt-4o-mini
Output
Prompt Analysis
────────────────────────────────────────
Model: gpt-4o-mini
Prompt Tokens: 82
Completion Tokens: 0
Total Tokens: 82
Estimated Cost: $0.000000
────────────────────────────────────────
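The estimated cost is derived from the token counts and the selected model's pricing. The sketch below is a rough illustration of that kind of calculation, not the tool's actual logic; the per-1K-token rates are placeholders, so the result will not match the output above exactly:

```python
# Illustrative cost estimate: price the token counts per 1K tokens.
# The rates below are placeholders, not real gpt-4o-mini pricing.
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_rate_per_1k: float, completion_rate_per_1k: float) -> float:
    return (prompt_tokens / 1000) * prompt_rate_per_1k \
         + (completion_tokens / 1000) * completion_rate_per_1k

# 82 prompt tokens and no completion tokens, as in the output above
print(f"${estimate_cost(82, 0, 0.00015, 0.0006):.6f}")
```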
Output as JSON
promptstream analyze --template "./examples/summary.txt" --var topic=AI --output json
Output
{
"PromptTokens": 82,
"CompletionTokens": 0,
"TotalTokens": 82,
"EstimatedCost": 0.000000,
"TimestampUtc": "2025-10-12T12:00:00Z"
}
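Because the JSON shape is fixed, the result is easy to consume from other tooling. A small sketch, assuming `promptstream` is on the PATH and prints only the JSON document to stdout, that stops a workflow when the estimated cost exceeds an arbitrary budget (field names are taken from the example above):

```python
import json
import subprocess
import sys

# Run the analyze command with JSON output and capture it.
result = subprocess.run(
    ["promptstream", "analyze",
     "--template", "./examples/summary.txt",
     "--var", "topic=AI",
     "--output", "json"],
    capture_output=True, text=True, check=True,
)
analysis = json.loads(result.stdout)

# Gate on the estimate before sending the real request.
BUDGET_USD = 0.01  # arbitrary threshold for this sketch
if analysis["EstimatedCost"] > BUDGET_USD:
    print(f"Prompt too expensive: ${analysis['EstimatedCost']:.6f}", file=sys.stderr)
    sys.exit(1)

print(f"Within budget: {analysis['TotalTokens']} tokens, ${analysis['EstimatedCost']:.6f}")
```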
💡 Notes
- The `analyze` command uses shared models from Flow.AI.Core and the token estimation logic in PromptStream.AI.Services.PromptStreamService.
- Ideal for estimating cost before sending a request to your model.
- Use `--output json` to pipe structured data into other tools or workflows.