SDK Usage
TypeScript SDK wrapper for the Lit Status API
TypeScript SDK
The Lit Status SDK provides a type-safe TypeScript wrapper around the REST API, making it easier to integrate monitoring into your applications without dealing with HTTP requests directly.
Prerequisites
Before using the SDK, ensure you have:
- Lit Status Server: The backend server should be set up and running (see Quick Start)
- API Key: Obtain an API key from your server administrator (see Authentication)
- Server URL: Know the URL where your Lit Status server is running
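In practice you will usually read the server URL and API key from the environment rather than hard-coding them. A minimal sketch of that wiring (the variable names `LIT_STATUS_URL` and `LIT_STATUS_API_KEY` are illustrative, not prescribed by the SDK):

```typescript
// Illustrative helper: builds client options from environment-style input.
// The environment variable names here are assumptions, not part of the SDK.
interface LitStatusEnv {
  LIT_STATUS_URL?: string;
  LIT_STATUS_API_KEY?: string;
}

function clientOptionsFromEnv(env: LitStatusEnv): { url: string; apiKey: string } {
  if (!env.LIT_STATUS_API_KEY) {
    // Fail fast at startup rather than on the first API call
    throw new Error('LIT_STATUS_API_KEY is required');
  }
  return {
    url: env.LIT_STATUS_URL ?? 'http://localhost:3000', // local default
    apiKey: env.LIT_STATUS_API_KEY,
  };
}
```

Once the SDK is installed, the result can be passed straight to `createLitStatusClient(clientOptionsFromEnv(process.env))`.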
Installation

```bash
bun add @lit-protocol/lit-status-sdk
```

Client Creation

```typescript
import { createLitStatusClient } from '@lit-protocol/lit-status-sdk';

const client = createLitStatusClient({
  url: 'http://localhost:3000',
  apiKey: 'your-api-key',
  // Optional: custom fetch implementation
  fetch: customFetch
});
```

Function Management
Register Functions
```typescript
// Register a single function
const func = await client.createOrUpdateFunction({
  name: 'sendTransaction',
  network: 'mainnet',
  product: 'lit-node',
  description: 'Send blockchain transaction',
  isActive: true
});

// Batch register multiple functions
const functions = await client.getOrRegisterFunctions({
  network: 'testnet',
  product: 'my-app',
  functions: ['checkBalance', 'sendTx', 'validateSig']
});

// Access by name
const checkBalanceFunc = functions.checkBalance;
```

Retrieve Functions

```typescript
// Get a specific function
const func = await client.getFunction('sendTx', 'mainnet', 'lit-node');

// Get all functions
const allFunctions = await client.getAllFunctions(false); // exclude inactive
```

Execution Logging
Manual Logging
```typescript
// Log a successful execution
await client.logExecution(functionId, {
  isSuccess: true,
  responseTimeMs: 150
});

// Log a failed execution
await client.logExecution(functionId, {
  isSuccess: false,
  errorMessage: 'Transaction failed: insufficient funds',
  responseTimeMs: 300
});
```

Automatic Execution & Logging

The SDK can automatically time and log your function executions:

```typescript
const { result, log } = await client.executeAndLog(
  functionId,
  async () => {
    // Your async function logic
    const data = await fetchDataFromAPI();
    if (!data) throw new Error('No data received');
    return processData(data);
  }
);

// result contains the return value (if successful)
// log contains the execution log entry
console.log(`Execution took ${log.responseTimeMs}ms`);
```

Metrics & Analytics
Function Metrics
```typescript
// Get metrics for a specific function
const metrics = await client.getFunctionMetrics(functionId);

console.log(`Uptime: ${metrics.uptime}%`);
console.log(`Average response time: ${metrics.averageResponseTime}ms`);
console.log(`Total executions: ${metrics.totalExecutions}`);
```

Time-Range Filtering

```typescript
// Get metrics for the last 24 hours
const recentMetrics = await client.getFunctionMetrics(functionId, {
  startDate: new Date(Date.now() - 24 * 60 * 60 * 1000),
  endDate: new Date()
});

// Get metrics for all functions
const allMetrics = await client.getAllMetrics({
  startDate: new Date('2024-01-01'),
  endDate: new Date('2024-01-31')
});
```

Time-Series Data

Perfect for charting and dashboards:

```typescript
// Get hourly metrics for the last 24 hours
const timeSeries = await client.getFunctionMetricsTimeSeries(
  functionId,
  { startDate: new Date(Date.now() - 24 * 60 * 60 * 1000) },
  'hour'
);

// Chart the data
timeSeries.buckets.forEach(bucket => {
  console.log(
    `${bucket.timestamp}: ${bucket.successRate}% success, ` +
    `${bucket.averageResponseTime}ms avg`
  );
});
```

Metrics Export
The SDK provides convenient methods to export metrics in both Prometheus and JSON formats, eliminating the need for direct HTTP calls.
Prometheus Format Export
```typescript
// Export all metrics in Prometheus format
const prometheusMetrics = await client.exportMetricsPrometheus();
console.log(prometheusMetrics);
// Output:
// # HELP lit_status_function_total_executions Total number of function executions
// # TYPE lit_status_function_total_executions counter
// lit_status_function_total_executions{function="sendTransaction",network="mainnet",product="lit-node"} 1250
//
// # HELP lit_status_function_uptime Function uptime percentage
// # TYPE lit_status_function_uptime gauge
// lit_status_function_uptime{function="sendTransaction",network="mainnet",product="lit-node"} 96.0
```

Filtered Prometheus Export

```typescript
// Export metrics for a specific network and product
const filteredMetrics = await client.exportMetricsPrometheus({
  network: 'mainnet',
  product: 'lit-node'
});

// Export with a time range
const recentPrometheusMetrics = await client.exportMetricsPrometheus({
  startDate: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000), // Last week
  endDate: new Date()
});

// Export metrics for a specific function
const functionMetrics = await client.exportMetricsPrometheus({
  function: 'sendTransaction',
  network: 'mainnet'
});
```

JSON Format Export
```typescript
// Export all metrics in structured JSON format
const jsonMetrics = await client.exportMetricsJSON();

console.log(`Exported ${jsonMetrics.metadata.totalFunctions} functions`);
console.log(`Export time: ${jsonMetrics.metadata.exportTime}`);

// Process each function's metrics
jsonMetrics.metrics.forEach(item => {
  console.log(`Function: ${item.function.name}`);
  console.log(`  Network: ${item.function.network}`);
  console.log(`  Product: ${item.function.product}`);
  console.log(`  Uptime: ${item.metrics.uptime}%`);
  console.log(`  Total executions: ${item.metrics.totalExecutions}`);
});
```

Filtered JSON Export

```typescript
// Export with comprehensive filters
const filteredJsonMetrics = await client.exportMetricsJSON({
  network: 'testnet',
  product: 'my-app',
  includeInactive: true,
  startDate: new Date('2024-01-01'),
  endDate: new Date('2024-01-31')
});

// Export for a monitoring dashboard
const dashboardData = await client.exportMetricsJSON({
  network: 'mainnet'
});

// Convert to a simple dashboard format
const dashboard = dashboardData.metrics.map(item => ({
  name: `${item.function.product}/${item.function.name}`,
  network: item.function.network,
  uptime: item.metrics.uptime,
  avgResponseTime: item.metrics.averageResponseTime,
  lastSeen: item.metrics.lastExecutionTime
}));
```

Get Available Filters

```typescript
// Get all available filter options
const filterOptions = await client.getFilterOptions();

console.log('Available networks:', filterOptions.networks);
console.log('Available products:', filterOptions.products);
console.log('Available functions:', filterOptions.functions);
console.log(`Total: ${filterOptions.totalFunctions} (${filterOptions.activeFunctions} active)`);

// Use the filters dynamically
for (const network of filterOptions.networks) {
  const networkMetrics = await client.exportMetricsJSON({ network });
  console.log(`${network} has ${networkMetrics.metrics.length} functions`);
}
```

Integration with External Systems
```typescript
// Send metrics to a Prometheus push gateway
async function sendToPrometheus(): Promise<void> {
  const metrics = await client.exportMetricsPrometheus();
  await fetch('http://your-prometheus-gateway:9091/metrics/job/lit-status', {
    method: 'POST',
    headers: { 'Content-Type': 'text/plain' },
    body: metrics
  });
  console.log('✅ Metrics sent to Prometheus');
}

// Send to a monitoring service
async function sendToMonitoringService(): Promise<void> {
  const metrics = await client.exportMetricsJSON({
    network: 'mainnet'
  });
  await fetch('https://your-monitoring-api.com/metrics', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(metrics)
  });
  console.log('✅ Metrics sent to monitoring service');
}

// Save metrics to a file (Node.js environment)
async function saveMetricsToFile(): Promise<void> {
  const metrics = await client.exportMetricsJSON();
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
  const fs = require('fs');
  fs.writeFileSync(
    `metrics-${timestamp}.json`,
    JSON.stringify(metrics, null, 2)
  );
  console.log(`✅ Metrics saved to metrics-${timestamp}.json`);
}
```

Error Handling

```typescript
import { LitStatusError } from '@lit-protocol/lit-status-sdk';

try {
  await client.logExecution(functionId, executionData);
} catch (error) {
  if (error instanceof LitStatusError) {
    console.error(`API Error (${error.statusCode}): ${error.message}`);
    console.error('Response:', error.response);
  } else {
    console.error('Network or other error:', error);
  }
}
```

Best Practices
Batch Operations
```typescript
// Initialise all your functions at startup
const functions = await client.getOrRegisterFunctions({
  network: process.env.NETWORK || 'testnet',
  product: 'my-service',
  functions: [
    'authenticate',
    'fetchData',
    'processPayment',
    'sendNotification'
  ]
});

// Store function IDs for later use
const functionIds = {
  auth: functions.authenticate.id,
  data: functions.fetchData.id,
  payment: functions.processPayment.id,
  notify: functions.sendNotification.id
};
```

Observability & Monitoring
The SDK provides comprehensive monitoring capabilities without requiring hand-written HTTP calls.
Health Monitoring
```typescript
// Check server health and authentication status
const health = await client.healthCheck();

console.log('Server status:', health.status);
console.log('Database connected:', health.connected);
console.log('API keys configured:', health.authentication);

// Regular health checks
setInterval(async () => {
  try {
    const health = await client.healthCheck();
    if (health.status !== 'ok') {
      console.error('❌ Server unhealthy:', health);
    } else {
      console.log('✅ Server healthy');
    }
  } catch (error) {
    console.error('Health check failed:', error);
  }
}, 30000); // Every 30 seconds
```

Function Metrics

```typescript
// Get comprehensive metrics for a specific function
const metrics = await client.getFunctionMetrics(functionId);

console.log(`Function: ${metrics.functionId}`);
console.log(`Uptime: ${metrics.uptime}%`);
console.log(`Total executions: ${metrics.totalExecutions}`);
console.log(`Success rate: ${(metrics.successfulExecutions / metrics.totalExecutions * 100).toFixed(2)}%`);
console.log(`Average response time: ${metrics.averageResponseTime}ms`);

// Get metrics with time-range filtering
const weeklyMetrics = await client.getFunctionMetrics(functionId, {
  startDate: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000), // Last week
  endDate: new Date()
});
```

All Functions Overview
```typescript
// Get metrics for all functions
const allMetrics = await client.getAllMetrics();

// Generate monitoring dashboard data
const dashboardData = allMetrics.map(metric => ({
  functionId: metric.functionId,
  uptime: metric.uptime,
  totalExecutions: metric.totalExecutions,
  avgResponseTime: metric.averageResponseTime,
  lastSeen: metric.lastExecutionTime
}));

// Find functions with issues
const unhealthyFunctions = allMetrics.filter(metric =>
  metric.uptime < 95 || metric.averageResponseTime > 1000
);

if (unhealthyFunctions.length > 0) {
  console.warn('⚠️ Functions requiring attention:', unhealthyFunctions);
}
```

Time-Series Analysis

```typescript
// Get time-series data for charting and trend analysis
const timeSeries = await client.getFunctionMetricsTimeSeries(
  functionId,
  {
    startDate: new Date(Date.now() - 24 * 60 * 60 * 1000), // Last 24 hours
    endDate: new Date()
  },
  'hour' // Hourly buckets
);

// Analyse trends
console.log(`Data points: ${timeSeries.buckets.length}`);
console.log(`Granularity: ${timeSeries.granularity}`);

// Create simple chart data
const chartData = timeSeries.buckets.map(bucket => ({
  time: bucket.timestamp,
  successRate: bucket.successRate,
  avgResponseTime: bucket.averageResponseTime,
  executions: bucket.totalExecutions
}));

// Detect performance degradation
const recentBuckets = timeSeries.buckets.slice(-6); // Last 6 hours
const avgRecentResponseTime = recentBuckets.reduce((sum, bucket) =>
  sum + (bucket.averageResponseTime || 0), 0) / recentBuckets.length;

if (avgRecentResponseTime > timeSeries.summary.averageResponseTime * 1.5) {
  console.warn('🐌 Performance degradation detected!');
}
```

Automated Monitoring Scripts
```typescript
import { createLitStatusClient } from '@lit-protocol/lit-status-sdk';
import * as fs from 'fs';

// Derive the client type from the factory so no extra type export is assumed
type LitStatusClient = ReturnType<typeof createLitStatusClient>;

// Comprehensive monitoring class
class LitStatusMonitor {
  constructor(private client: LitStatusClient) {}

  async runHealthCheck(): Promise<boolean> {
    try {
      const health = await this.client.healthCheck();
      return health.status === 'ok' && health.connected;
    } catch (error) {
      console.error('Health check failed:', error);
      return false;
    }
  }

  async generateReport(): Promise<void> {
    const allMetrics = await this.client.getAllMetrics();
    console.log('\n📊 Lit Status Monitoring Report');
    console.log('================================');
    for (const metric of allMetrics) {
      const status = metric.uptime >= 99 ? '🟢' :
                     metric.uptime >= 95 ? '🟡' : '🔴';
      console.log(`${status} Function: ${metric.functionId}`);
      console.log(`   Uptime: ${metric.uptime.toFixed(2)}%`);
      console.log(`   Executions: ${metric.totalExecutions}`);
      console.log(`   Avg Response: ${metric.averageResponseTime?.toFixed(0) || 'N/A'}ms`);
      console.log('');
    }
  }

  async alertOnIssues(): Promise<void> {
    const allMetrics = await this.client.getAllMetrics();
    for (const metric of allMetrics) {
      // Alert on low uptime
      if (metric.uptime < 95) {
        console.error(`🚨 ALERT: Function ${metric.functionId} has ${metric.uptime.toFixed(2)}% uptime`);
      }
      // Alert on high response times
      if (metric.averageResponseTime && metric.averageResponseTime > 1000) {
        console.error(`🐌 ALERT: Function ${metric.functionId} avg response time: ${metric.averageResponseTime}ms`);
      }
      // Alert on no recent activity
      const lastExecution = new Date(metric.lastExecutionTime || 0);
      const hoursSinceLastExecution = (Date.now() - lastExecution.getTime()) / (1000 * 60 * 60);
      if (hoursSinceLastExecution > 24) {
        console.error(`⏰ ALERT: Function ${metric.functionId} hasn't executed in ${hoursSinceLastExecution.toFixed(1)} hours`);
      }
    }
  }

  async exportMetricsToFile(): Promise<void> {
    const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
    // Export Prometheus format
    const prometheusMetrics = await this.client.exportMetricsPrometheus();
    fs.writeFileSync(`metrics-prometheus-${timestamp}.txt`, prometheusMetrics);
    // Export JSON format
    const jsonMetrics = await this.client.exportMetricsJSON();
    fs.writeFileSync(`metrics-json-${timestamp}.json`, JSON.stringify(jsonMetrics, null, 2));
    console.log(`📁 Metrics exported to files with timestamp ${timestamp}`);
  }
}

// Usage
const monitor = new LitStatusMonitor(client);

// Run the monitoring checks
await monitor.runHealthCheck();
await monitor.generateReport();
await monitor.alertOnIssues();
await monitor.exportMetricsToFile();
```

Scheduled Monitoring

```typescript
// Set up automated monitoring with different intervals
class ScheduledMonitoring {
  constructor(private client: LitStatusClient) {}

  start(): void {
    // Health checks every 30 seconds
    setInterval(() => this.quickHealthCheck(), 30000);
    // Metrics analysis every 5 minutes
    setInterval(() => this.analyseMetrics(), 5 * 60000);
    // Full report every hour
    setInterval(() => this.generateHourlyReport(), 60 * 60000);
    // Export metrics every 6 hours
    setInterval(() => this.exportMetrics(), 6 * 60 * 60000);
  }

  private async quickHealthCheck(): Promise<void> {
    const isHealthy = await new LitStatusMonitor(this.client).runHealthCheck();
    if (!isHealthy) {
      console.error('🚨 Health check failed!');
    }
  }

  private async analyseMetrics(): Promise<void> {
    await new LitStatusMonitor(this.client).alertOnIssues();
  }

  private async generateHourlyReport(): Promise<void> {
    await new LitStatusMonitor(this.client).generateReport();
  }

  private async exportMetrics(): Promise<void> {
    await new LitStatusMonitor(this.client).exportMetricsToFile();
  }
}

// Start scheduled monitoring
const scheduler = new ScheduledMonitoring(client);
scheduler.start();
```

OpenTelemetry Integration
The Lit Status SDK includes optional OpenTelemetry integration that provides comprehensive observability for your function executions through distributed tracing, metrics, and structured logging.
Overview
When enabled, the OpenTelemetry integration automatically instruments your `executeAndLog` calls with:
- 🔍 Distributed Tracing: Track execution flows across your application
- 📊 Metrics: Monitor success rates, execution times, and error counts
- 📝 Structured Logging: Rich contextual logs with function metadata
Prerequisites
The OpenTelemetry integration uses optional dependencies. Install them only if you want telemetry features:
```bash
# Optional: Install OpenTelemetry packages for enhanced observability
npm install @opentelemetry/api @opentelemetry/api-logs @opentelemetry/resources @opentelemetry/semantic-conventions @opentelemetry/sdk-metrics @opentelemetry/exporter-metrics-otlp-http @opentelemetry/sdk-logs @opentelemetry/exporter-logs-otlp-http @opentelemetry/sdk-trace-node @opentelemetry/exporter-trace-otlp-http @opentelemetry/sdk-trace-base
```

Note: The SDK works without the OpenTelemetry packages installed. If they are not available, the telemetry features are automatically disabled and the SDK continues to function normally.
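The "telemetry never breaks the app" behaviour described in the note is worth replicating in any instrumentation code of your own. One common pattern, sketched here (this is an illustrative helper, not the SDK's actual mechanism), is to wrap every telemetry side effect so that its failures are swallowed:

```typescript
// Sketch: run a telemetry side effect without ever letting it break the
// main execution path. Illustrative pattern, not the SDK's implementation.
function safeTelemetry(emit: () => void): void {
  try {
    emit();
  } catch {
    // Telemetry failures (missing packages, collector down) are ignored.
  }
}

// Usage: the main path survives even if the emitter throws.
safeTelemetry(() => {
  throw new Error('OTLP exporter unavailable');
});
```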
Basic Configuration
Enable OpenTelemetry by adding configuration to your client:
```typescript
import { createLitStatusClient } from '@lit-protocol/lit-status-sdk';

const client = createLitStatusClient({
  url: 'http://localhost:3000',
  apiKey: 'your-api-key',
  openTelemetry: {
    enabled: true,
    serviceName: 'my-application',
    serviceVersion: '1.0.0',
    otlpEndpoint: 'http://localhost:4318', // Optional
    exportToConsole: true, // Set to false when using an OTLP collector
  }
});
```

Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `true` | Enable/disable the OpenTelemetry integration |
| `serviceName` | string | `'lit-status-sdk'` | Service name for telemetry data |
| `serviceVersion` | string | `'1.0.0'` | Service version for telemetry data |
| `otlpEndpoint` | string | `undefined` | OTLP collector endpoint (e.g. `http://localhost:4318`) |
| `exportToConsole` | boolean | `true` | Export to the console when no OTLP endpoint is available |
| `exportIntervalMs` | number | `10000` | Metrics export interval in milliseconds |
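Putting the table together, a fully specified configuration might look like this (the values are illustrative; `exportIntervalMs` is raised above the 10-second default to reduce export traffic):

```typescript
// Every openTelemetry option from the table, spelled out explicitly.
const openTelemetryOptions = {
  enabled: true,
  serviceName: 'my-application',
  serviceVersion: '1.0.0',
  otlpEndpoint: 'http://localhost:4318',
  exportToConsole: false,   // using an OTLP collector instead of the console
  exportIntervalMs: 30000,  // export metrics every 30s (default is 10000)
};
```

This object is what you pass as the `openTelemetry` field of `createLitStatusClient`.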
Automatic Instrumentation
Once configured, your existing `executeAndLog` calls automatically include telemetry:
```typescript
// This call now automatically includes tracing, metrics, and logging
const { result, log } = await client.executeAndLog(functionId, async () => {
  // Your function logic here
  const data = await processBlockchainData();
  return data;
});
```

Generated Telemetry Data
Metrics
The integration automatically creates these metrics:
- `lit_function_executions_total` (Counter): total executions, with labels:
  - `function_name`: function identifier
  - `network`: network context
  - `product`: product context
  - `status`: `success` or `error`
- `lit_function_execution_duration_ms` (Histogram): execution duration, with the same labels
- `lit_function_errors_total` (Counter): error count, with one additional label:
  - `error_type`: error class name
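If you scrape these counters yourself, a percentage error rate can be derived from `lit_function_errors_total` and `lit_function_executions_total`. A small helper (the function name is illustrative) makes the arithmetic explicit:

```typescript
// Derive an error-rate percentage from the two counter values,
// guarding against division by zero for functions with no traffic yet.
function errorRatePercent(errorsTotal: number, executionsTotal: number): number {
  if (executionsTotal === 0) return 0;
  return (errorsTotal / executionsTotal) * 100;
}
```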
Traces
Each `executeAndLog` call creates a trace span with:

- Span name: `execute_{functionName}`
- Attributes:
  - `lit.function.id`: function ID
  - `lit.function.name`: function name
  - `lit.network`: network context
  - `lit.product`: product context
  - `lit.duration_ms`: execution duration
- Status: success or error, with exception details
Logs
Structured logs are emitted for each execution:
- Level: INFO (success) or ERROR (failure)
- Message: Execution summary with duration
- Attributes: all span attributes, plus:
  - `lit.success`: boolean success flag
  - `lit.error_message`: error message (if failed)
Development Setup (Console Export)
For development, use console export to see telemetry data in your terminal:
```typescript
const client = createLitStatusClient({
  url: 'http://localhost:3000',
  apiKey: 'dev-key',
  openTelemetry: {
    enabled: true,
    exportToConsole: true, // See telemetry in the console
  }
});
```

Production Setup (OTLP Collector)
For production, use an OTLP collector to send data to your observability platform:
```typescript
const client = createLitStatusClient({
  url: 'https://api.lit.example.com',
  apiKey: process.env.LIT_API_KEY,
  openTelemetry: {
    enabled: true,
    serviceName: 'my-production-app',
    serviceVersion: process.env.APP_VERSION,
    otlpEndpoint: process.env.OTEL_EXPORTER_OTLP_ENDPOINT,
    exportToConsole: false, // Send to the collector only
  }
});
```

Docker Collector Setup
Run an OpenTelemetry Collector using Docker:
```bash
# Run the OpenTelemetry Collector
docker run -p 4317:4317 -p 4318:4318 \
  -v $(pwd)/collector-config.yml:/etc/otel-collector-config.yml \
  otel/opentelemetry-collector \
  --config=/etc/otel-collector-config.yml
```

Example collector-config.yml:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: detailed
  # Add your preferred exporters (Jaeger, Prometheus, etc.)

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
```

Environment Variables

You can also configure OpenTelemetry using environment variables:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=my-lit-app
export OTEL_SERVICE_VERSION=1.0.0
```

Complete Example
```typescript
import { createLitStatusClient } from '@lit-protocol/lit-status-sdk';

async function main() {
  const client = createLitStatusClient({
    url: 'http://localhost:3000',
    apiKey: 'your-api-key',
    openTelemetry: {
      enabled: true,
      serviceName: 'blockchain-processor',
      serviceVersion: '2.1.0',
      otlpEndpoint: 'http://localhost:4318',
      exportToConsole: false,
    }
  });

  // Register a function
  const func = await client.createOrUpdateFunction({
    name: 'processTransaction',
    network: 'ethereum',
    product: 'dapp-backend',
    description: 'Process blockchain transactions',
  });

  // Execute with automatic telemetry
  try {
    const { result, log } = await client.executeAndLog(
      func.id,
      async () => {
        // Your business logic
        const txData = await fetchTransactionData();
        const processed = await processTransaction(txData);
        return processed;
      }
    );
    console.log('Transaction processed:', result);
    // Telemetry data automatically captured:
    // - Trace span with timing and success status
    // - Metrics incremented for the successful execution
    // - Structured log with execution context
  } catch (error) {
    console.error('Transaction failed:', error);
    // Error telemetry automatically captured:
    // - Trace span marked as error with exception details
    // - Error metrics incremented
    // - Error log with failure context
  }
}

main().catch(console.error);
```

Benefits
Enhanced Observability
- Complete execution visibility: See exactly what's happening in your functions
- Performance monitoring: Track execution times and identify bottlenecks
- Error tracking: Automatic error collection with full context
Production Ready
- Graceful degradation: SDK works normally if OpenTelemetry fails to initialize
- Minimal overhead: Efficient instrumentation with configurable export intervals
- Standard protocols: Uses industry-standard OTLP for maximum compatibility
Developer Experience
- Zero code changes: existing `executeAndLog` calls automatically get telemetry
- Rich context: function metadata is automatically included in all telemetry
- Flexible deployment: console export for development, OTLP for production
Troubleshooting
Telemetry not appearing
- Check that `enabled: true` is set in your config
- Verify the OTLP endpoint is reachable (or use `exportToConsole: true`)
- Ensure the required OpenTelemetry packages are installed
Performance concerns
- Increase `exportIntervalMs` to reduce export frequency
- Use an OTLP collector instead of console export in production
- Consider disabling telemetry for high-frequency functions if needed
Missing context
- Function metadata is auto-detected but may show as "unknown"
- Consider enhancing the SDK to pass richer context to `executeAndLog`