Overview
The Statsig Node AI SDK lets you manage your prompts, run online and offline evals, and debug your LLM applications in production. It depends on the Statsig Node Server SDK, but provides convenient hooks for AI-specific functionality.
Install the SDK
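The install command below is a sketch: the AI SDK's package name is an assumption here; verify the exact name in the Statsig console or on npm before installing.

```shell
# Package names are assumptions; confirm the exact names on npm.
# The AI SDK sits on top of the Statsig Node Server SDK.
npm install @statsig/ai @statsig/statsig-node-core
```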
Initialize the SDK
If you already have a Statsig instance, you can pass it into the SDK. Otherwise, we’ll create an instance for you internally.
- Don't use Statsig
- Already have a Statsig instance
Initialize the AI SDK with a Server Secret Key from the Statsig console.
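As a minimal sketch of that initialization, assuming a `StatsigAI` entry point (the package path, class, and method names below are illustrative assumptions, not confirmed API):

```typescript
// Sketch only: the import path, class, and method names here are
// assumptions about the SDK surface, not confirmed names.
import { StatsigAI } from "@statsig/ai"; // hypothetical export

// Read the Server Secret Key ("secret-...") from the environment
// rather than hard-coding it in source.
const statsigAI = new StatsigAI(process.env.STATSIG_SERVER_SECRET_KEY!);
await statsigAI.initialize(); // hypothetical method name

// If you already have a Statsig instance, pass it in instead so the
// AI SDK reuses it rather than creating one internally.
```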
Initializing With Options
Optionally, you can configure StatsigOptions for your Statsig instance:
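For example, a sketch of passing options through; the option names shown are common Statsig server SDK options and should be treated as assumptions until checked against the StatsigOptions reference:

```typescript
// Sketch: option names are assumptions; verify them against the
// current StatsigOptions reference for the Node Server SDK.
import { Statsig, StatsigOptions } from "@statsig/statsig-node-core";

const options: StatsigOptions = {
  environment: "staging", // tag logged events with a non-production tier
};

const statsig = new Statsig(process.env.STATSIG_SERVER_SECRET_KEY!, options);
await statsig.initialize();
```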
Using the SDK
Getting a Prompt
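A rough sketch of what fetching a managed prompt might look like; the `getPrompt` name and the fields on the returned object are assumptions, not confirmed API:

```typescript
// Sketch: `getPrompt` and the returned object's fields are assumptions
// about the API shape, not confirmed names.
const prompt = statsigAI.getPrompt("summarize_ticket"); // hypothetical

// Because the prompt lives in the Statsig console, editing or rolling
// back its live version takes effect without a code deploy.
console.log(prompt?.version, prompt?.value);
```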
Statsig can act as the control plane for your LLM prompts, allowing you to version and change them without deploying code. For more information, see the Prompts documentation.

Logging Eval Results
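A sketch of what logging a result might look like. `clampScore` is our own helper (not part of the SDK), and `logEvalResult` plus its argument shape are assumptions about the SDK surface:

```typescript
// Our own guard (not part of the SDK): keep grader output in [0, 1]
// before logging, since Statsig expects a score in that range.
function clampScore(raw: number): number {
  return Math.min(1, Math.max(0, raw));
}

// Sketch of the logging call -- `logEvalResult` and its argument shape
// are assumptions, not confirmed API:
// statsigAI.logEvalResult({
//   score: clampScore(0.85),
//   graderName: "relevance_judge",    // grader is provided manually for now
//   metadata: { sessionID: "abc-123" },
// });
```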
When running an online eval, you can log results back to Statsig for analysis. Provide a score between 0 and 1, along with the grader name and any useful metadata (e.g., session IDs). Currently, you must provide the grader manually; future releases will support automated grading options.

OpenTelemetry (OTEL)
The AI SDK works with OpenTelemetry for sending telemetry to Statsig. You can enable OTel tracing by calling the initializeTracing function.
You can also provide a custom TracerProvider to the initializeTracing function if you want to customize the tracing behavior.
More advanced OTel configuration and exporter support are on the way.
The simplest way to start tracing with Statsig and OTel is to call initializeTracing() at the root of your application.
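That startup call might look like the sketch below; initializeTracing is named in these docs, but the import path is an assumption:

```typescript
// `initializeTracing` is the documented entry point; the import path
// below is an assumption -- check the package's actual exports.
import { initializeTracing } from "@statsig/ai"; // hypothetical path

// Call once, at the root of your application, before any LLM calls
// you want traced.
initializeTracing();
```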
If you already use the OpenTelemetry NodeSDK, you only need to initialize Statsig's OTel tracing and use the processor created by initializeTracing().
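A sketch of that integration, assuming initializeTracing() returns the span processor described above (the exact return shape and the Statsig import path are assumptions):

```typescript
// Sketch: assumes initializeTracing() returns a span processor, per the
// description above; verify the actual return type in the SDK reference.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { initializeTracing } from "@statsig/ai"; // hypothetical path

const statsigProcessor = initializeTracing();

const sdk = new NodeSDK({
  // `spanProcessors` is supported in recent @opentelemetry/sdk-node
  // releases; older releases used the singular `spanProcessor` option.
  spanProcessors: [statsigProcessor],
});
sdk.start();
```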
The initializeOTel function accepts the following options for setting up tracing with OTel.