    📊 Analyze Command

    The analyze command estimates token usage and cost for a given prompt.


    🧱 Usage

    promptstream analyze --template "<path or text>" [--var key=value ...] [--model <modelId>] [--output json|table]
    

    Options

    Option       Description
    -----------  -------------------------------------------------------------
    --template   Inline text or path to a .txt or .json template. (Required)
    --var        One or more variables in key=value format.
    --model      Optional model ID (defaults to gpt-4o-mini).
    --output     Output format: table (default) or json.
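
    To supply more than one variable, repeat --var once per key. A hypothetical invocation, assuming the template defines both placeholders (the style variable is illustrative and not part of the shipped examples):

    promptstream analyze --template "./examples/summary.txt" --var topic=AI --var style=concise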

    🧪 Examples

    Analyze a file template

    promptstream analyze --template "./examples/summary.txt" --var topic=AI --model gpt-4o-mini
    

    Output

    📊 Prompt Analysis
    ────────────────────────────────────────
    Model: gpt-4o-mini
    Prompt Tokens:     82
    Completion Tokens: 0
    Total Tokens:      82
    Estimated Cost:    $0.000000
    ────────────────────────────────────────
    

    Output as JSON

    promptstream analyze --template "./examples/summary.txt" --var topic=AI --output json
    

    Output

    {
      "PromptTokens": 82,
      "CompletionTokens": 0,
      "TotalTokens": 82,
      "EstimatedCost": 0.000000,
      "TimestampUtc": "2025-10-12T12:00:00Z"
    }
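
    Analyze an inline text template

    The --template option also accepts prompt text directly instead of a file path. A minimal example (the prompt text below is illustrative, not a shipped template):

    promptstream analyze --template "Summarize the latest developments in AI for a general audience." --model gpt-4o-mini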
    

    💡 Notes

    • The analyze command uses shared models from Flow.AI.Core and the token estimation logic in PromptStream.AI.Services.PromptStreamService.
    • Ideal for estimating cost before sending a request to your model.
    • Use --output json to pipe structured data into other tools or workflows (see the example below).
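
    For example, the JSON output can be piped into a JSON processor such as jq (assuming jq is installed) to extract a single field; .TotalTokens matches the field shown in the JSON output above:

    promptstream analyze --template "./examples/summary.txt" --var topic=AI --output json | jq '.TotalTokens'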