Integration Patterns
Event-driven, periodic, and hybrid metering patterns for agents, with deterministic references and grouped submissions.
Agents usually emit in one of three shapes: one delta per discrete operation, one grouped submission per time interval, or a blend of both. This page covers when to choose each pattern, what the referenceId should look like, and how to keep submissions idempotent under retries.
Use When
Use this page when an agent is designing or reviewing its metering architecture, before writing any production emit loop.
Inputs
- The shape of the activity the agent needs to meter — discrete operations, continuous usage, or both.
- Expected throughput and tolerance for delayed visibility.
- Target operational cost for API calls and for the agent's own ingestion pipeline.
The Three Patterns
| Pattern | Use when | Primary call |
|---|---|---|
| Event-driven | Each operation is a countable event with a natural boundary. | `pac.balance.emit(...)` |
| Periodic | Usage accumulates continuously and is sampled at an interval. | `pac.balance.emitBatch(...)` |
| Hybrid | An agent meters both customer actions and infrastructure usage. | Both, plus `pac.balance.checkpoint(...)` at period close. |
Event-Driven Metering
Use for API calls, completed jobs, emails, transactions, or anything else with a clear "this happened once" boundary.
The core loop:
- Detect the operation.
- Execute the operation.
- On success, emit one delta with a deterministic `referenceId` and a matching `Idempotency-Key`.
- Record the returned `receiptId` and `recordId` for downstream correlation.
```typescript
import { PacSpace } from '@pacspace-io/sdk';
import { randomUUID } from 'crypto';

const pac = new PacSpace({ apiKey: process.env.PACSPACE_API_KEY });

// Generate the referenceId once per operation and reuse it as the
// idempotency key, so SDK-level retries of the emit cannot double-count.
function buildReferenceId(vendor: string, operation: string) {
  return `${vendor}:${operation}:${Date.now()}:${randomUUID()}`;
}

export async function trackOperation<T>(
  customerId: string,
  vendor: string,
  operation: string,
  execute: () => Promise<T>,
): Promise<T> {
  const started = Date.now();
  // Run the operation first; emit only after it succeeds.
  const result = await execute();
  const referenceId = buildReferenceId(vendor, operation);
  await pac.balance.emit(customerId, -1, `${vendor}_${operation}`, {
    referenceId,
    idempotencyKey: referenceId,
    metadata: {
      vendor,
      operation,
      latencyMs: Date.now() - started,
      emittedAt: new Date().toISOString(),
    },
  });
  return result;
}
```
Emit After Success
Always emit after the underlying operation has succeeded. Emitting before the operation inflates usage, double-counts failed retries, and corrupts reconciliation later.
Fire-and-Forget vs Await
- Await the emit in critical paths where submission confirmation must precede the next step.
- Fire-and-forget in latency-sensitive user-request paths, queueing retries on failure.
```typescript
void pac.balance
  .emit(customerId, -1, reason, { referenceId, idempotencyKey: referenceId, metadata })
  .catch((err) => {
    logger.warn({ err, referenceId }, 'PacSpace emit failed');
  });
```
Periodic Metering
Use for infrastructure or platform usage that accumulates continuously: compute minutes, storage, bandwidth, queue depth.
The core loop:
- Pick an interval (for example 1h, 6h, 24h).
- Collect usage for the interval from source systems.
- Convert the usage into one delta per cost center and group them.
- Submit the group in a single `emitBatch` call with an interval-scoped idempotency key.
- Reconcile totals against source-of-bill data at period close.
```typescript
type UsageSnapshot = {
  customerId: string;
  delta: number;
  reason: string;
  referenceId: string;
  metadata: Record<string, unknown>;
};

async function buildUsageSnapshots(start: Date, end: Date): Promise<UsageSnapshot[]> {
  const cpuMinutes = await readCpuMinutes(start, end);
  const egressGb = await readEgressGb(start, end);
  return [
    {
      customerId: 'infra-compute',
      delta: -cpuMinutes,
      reason: 'compute_cpu_minutes',
      referenceId: `infra-compute:${end.toISOString()}`,
      metadata: {
        intervalStart: start.toISOString(),
        intervalEnd: end.toISOString(),
        cpuMinutes,
      },
    },
    {
      customerId: 'infra-networking',
      delta: -egressGb,
      reason: 'network_egress_gb',
      referenceId: `infra-networking:${end.toISOString()}`,
      metadata: {
        intervalStart: start.toISOString(),
        intervalEnd: end.toISOString(),
        egressGb,
      },
    },
  ];
}

export async function runPeriodicSnapshot(intervalHours = 6) {
  const intervalMs = intervalHours * 60 * 60 * 1000;
  // Align the window to the interval boundary so a retried run
  // regenerates the same referenceIds and the same idempotency key,
  // instead of minting new ones from the wall clock.
  const end = new Date(Math.floor(Date.now() / intervalMs) * intervalMs);
  const start = new Date(end.getTime() - intervalMs);
  const snapshots = await buildUsageSnapshots(start, end);
  await pac.balance.emitBatch(snapshots, {
    idempotencyKey: `periodic:${end.toISOString()}`,
  });
}
```
Interval Selection
| Interval | Pros | Tradeoff |
|---|---|---|
| 1h | High freshness, fast anomaly detection. | Higher call volume and operational overhead. |
| 6–12h | Balanced cost and visibility. | Slightly delayed anomaly detection. |
| 24h | Lowest call volume. | Late issue detection, coarser reporting. |
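Whichever interval is chosen, flooring timestamps to the interval boundary keeps the window, and therefore the referenceId, reproducible when a run is retried. A minimal sketch; `intervalBucket` is an illustrative helper name, not part of the SDK:

```typescript
// Floor a timestamp to the start of its interval bucket so a retried
// run regenerates the same boundary (and therefore the same referenceId).
export function intervalBucket(date: Date, intervalHours: number): Date {
  const intervalMs = intervalHours * 60 * 60 * 1000;
  return new Date(Math.floor(date.getTime() / intervalMs) * intervalMs);
}
```

With a 6h interval, any timestamp between 06:00 and 12:00 UTC floors to the same 06:00 boundary, so re-running a failed job inside the window reproduces its inputs.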
Hybrid Metering
Most production agents use a hybrid model: event-driven for customer-visible actions, periodic for infrastructure usage, and a single checkpoint at period close.
```typescript
// Event-driven: one delta per customer-visible action.
await pac.balance.emit('cust_api', -1, 'api_request', {
  referenceId: requestId,
  idempotencyKey: requestId,
});

// Periodic: grouped infrastructure deltas, one batch per interval.
await pac.balance.emitBatch(periodicDeltas, {
  idempotencyKey: `infra:${intervalEndISO}`,
});

// One checkpoint at period close.
await pac.balance.checkpoint('cust_api', { period: '2026-02' });
```
Deterministic referenceId Strategy
Every pattern above depends on a referenceId the agent can regenerate from its own inputs. Three rules:
- Build it from inputs that stay stable across retries — request IDs, interval boundaries, operation identifiers. Never use wall-clock time as the only input.
- Reuse the same `referenceId` as the `Idempotency-Key` so the emit is safe to retry.
- Shape the string so it is searchable in logs and reconciliation reports — a good default is `{scope}:{operation}:{time-bucket}:{uuid}`.
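The three rules can be folded into a small builder. This is a sketch, assuming the caller already persists a stable request ID that stands in for the uuid segment of the default shape; `deterministicReferenceId` is an illustrative name, not an SDK export:

```typescript
// Build a referenceId from inputs that survive retries: the scope, the
// operation name, an interval-aligned time bucket, and a request ID the
// caller already persists. No wall-clock-only components.
export function deterministicReferenceId(
  scope: string,
  operation: string,
  occurredAt: Date,
  requestId: string,
  bucketHours = 1,
): string {
  const bucketMs = bucketHours * 60 * 60 * 1000;
  const bucket = new Date(
    Math.floor(occurredAt.getTime() / bucketMs) * bucketMs,
  ).toISOString();
  return `${scope}:${operation}:${bucket}:${requestId}`;
}
```

Calling it twice with the same inputs yields the same string, which is exactly what makes the emit safe to retry.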
Customer Record Design
The customerId namespace controls what the agent can reason about later:
- Per-counterparty records when the agent is metering billable activity for external customers.
- Per-service records when the agent is internally dogfooding its own infrastructure.
- Per-cost-category records (compute, storage, networking) when the agent wants a clean rollup for internal narratives.
Start with the smallest taxonomy that answers your current reporting questions. Expand only when a specific dispute, audit, or invoice requires it.
Checkpoint Cadence
Align checkpoints with billing periods. One checkpoint per close window is enough unless the agent's compliance model requires intra-period locks.
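If the checkpoint's `period` label follows the `YYYY-MM` shape used in the hybrid example, it can be derived from the close date rather than hard-coded. A minimal sketch; the helper name is illustrative:

```typescript
// Format a date as a 'YYYY-MM' billing-period label, in UTC so the
// close window does not shift with the host machine's timezone.
export function billingPeriod(closeDate: Date): string {
  const y = closeDate.getUTCFullYear();
  const m = String(closeDate.getUTCMonth() + 1).padStart(2, '0');
  return `${y}-${m}`;
}
```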
Idempotency
Every pattern on this page requires idempotency. Full rules — including request body fingerprints and the 409 behavior — live on Safety and Idempotency.
Retry
Retry only transient failures with bounded exponential backoff. Never retry validation failures — fix the inputs and submit again.
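The retry rule can be sketched as a small wrapper. How a transient error is recognized is left to the caller, since the SDK's error shape is not specified here; `retryTransient` is an illustrative name:

```typescript
// Retry a submission with bounded exponential backoff. Only errors the
// caller classifies as transient are retried; validation failures and
// anything past the attempt budget surface immediately.
export async function retryTransient<T>(
  fn: () => Promise<T>,
  isTransient: (err: unknown) => boolean,
  maxAttempts = 4,
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (!isTransient(err) || attempt >= maxAttempts) throw err;
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Because each attempt reuses the same deterministic `referenceId` and idempotency key, a retry that lands after a slow success is absorbed rather than double-counted.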
Failure Modes
- Emitting before the operation succeeds, producing inflated usage.
- Overlapping intervals in periodic metering, producing double-counted deltas.
- Gaps in periodic metering when a failed interval is skipped instead of retried with the same `referenceId`.
- Non-deterministic `referenceId` values that change on retry, producing duplicates.
- Using event-driven for high-frequency telemetry when `emitBatch` would be correct.
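As one concrete guard against the overlap and gap failure modes, a periodic job can verify that each new window starts exactly where the last one ended before submitting. A sketch; persisting the previous interval end is assumed to happen elsewhere:

```typescript
// Verify that a new metering window starts exactly where the previous
// one ended: an earlier start double-counts deltas, a later start
// silently drops an interval.
export function assertContiguous(lastEnd: Date, nextStart: Date): void {
  if (nextStart.getTime() < lastEnd.getTime()) {
    throw new Error('overlapping interval: deltas would be double-counted');
  }
  if (nextStart.getTime() > lastEnd.getTime()) {
    throw new Error('gap in metering: retry the missed interval with its original referenceId');
  }
}
```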
Related Pages
- Emit — full emit request and response schema.
- Checkpoint — how to lock a period.
- Safety and Idempotency — retries, idempotency keys, cadence.
- Proofs for Agents — what the agent stores after each submission.