Quickstart

Prerequisites

  • Docker (with Compose)
  • curl
  • jq

Clone the repository:

```sh
git clone git@github.com:openmeterio/openmeter.git
cd openmeter/quickstart
```

1. Launch OpenMeter

Launch OpenMeter and its dependencies via:

```sh
docker compose up -d
```

2. Ingest usage event(s)

Ingest usage events in CloudEvents format:

```sh
curl -X POST http://localhost:48888/api/v1/events \
  -H 'Content-Type: application/cloudevents+json' \
  --data-raw '
{
  "specversion": "1.0",
  "type": "request",
  "id": "00001",
  "time": "2023-01-01T00:00:00.001Z",
  "source": "service-0",
  "subject": "customer-1",
  "data": {
    "method": "GET",
    "route": "/hello",
    "duration_ms": 10
  }
}
'
```

Ingest a second event. Note that the `id` is different:

```sh
curl -X POST http://localhost:48888/api/v1/events \
  -H 'Content-Type: application/cloudevents+json' \
  --data-raw '
{
  "specversion": "1.0",
  "type": "request",
  "id": "00002",
  "time": "2023-01-01T00:00:00.001Z",
  "source": "service-0",
  "subject": "customer-1",
  "data": {
    "method": "GET",
    "route": "/hello",
    "duration_ms": 20
  }
}
'
```

Ingest a third event. Note that both the `id` and `time` are different:

```sh
curl -X POST http://localhost:48888/api/v1/events \
  -H 'Content-Type: application/cloudevents+json' \
  --data-raw '
{
  "specversion": "1.0",
  "type": "request",
  "id": "00003",
  "time": "2023-01-02T00:00:00.001Z",
  "source": "service-0",
  "subject": "customer-1",
  "data": {
    "method": "GET",
    "route": "/hello",
    "duration_ms": 30
  }
}
'
```
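
The three curl calls above can also be driven from code. Below is a minimal Python sketch that builds the same CloudEvents payloads; the actual POST (mirroring the curl examples) is shown as a comment so the snippet runs without a server, and `make_event` is a helper introduced here for illustration, not part of OpenMeter:

```python
import json

# Endpoint from the curl examples above.
OPENMETER_URL = "http://localhost:48888/api/v1/events"

def make_event(event_id, time, duration_ms):
    """Build a CloudEvents payload shaped like the curl examples."""
    return {
        "specversion": "1.0",
        "type": "request",
        "id": event_id,
        "time": time,
        "source": "service-0",
        "subject": "customer-1",
        "data": {"method": "GET", "route": "/hello", "duration_ms": duration_ms},
    }

events = [
    make_event("00001", "2023-01-01T00:00:00.001Z", 10),
    make_event("00002", "2023-01-01T00:00:00.001Z", 20),
    make_event("00003", "2023-01-02T00:00:00.001Z", 30),
]

for event in events:
    body = json.dumps(event)
    # With OpenMeter running, each event would be posted like the curl calls:
    # requests.post(OPENMETER_URL, data=body,
    #               headers={"Content-Type": "application/cloudevents+json"})
    print(event["id"], len(body), "bytes")
```

Unique `id` values matter: OpenMeter deduplicates events, so re-sending an event with the same `id` and `source` does not double-count usage.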

3. Query Usage

Query the usage hourly:

```sh
curl 'http://localhost:48888/api/v1/meters/api_requests_total/query?windowSize=HOUR&groupBy=method&groupBy=route' | jq
```

```json
{
  "windowSize": "HOUR",
  "data": [
    {
      "value": 2,
      "windowStart": "2023-01-01T00:00:00Z",
      "windowEnd": "2023-01-01T01:00:00Z",
      "subject": null,
      "groupBy": { "method": "GET", "route": "/hello" }
    },
    {
      "value": 1,
      "windowStart": "2023-01-02T00:00:00Z",
      "windowEnd": "2023-01-02T01:00:00Z",
      "subject": null,
      "groupBy": { "method": "GET", "route": "/hello" }
    }
  ]
}
```

Query the total usage for customer-1:

```sh
curl 'http://localhost:48888/api/v1/meters/api_requests_total/query?subject=customer-1' | jq
```

```json
{
  "data": [
    {
      "value": 3,
      "windowStart": "2023-01-01T00:00:00Z",
      "windowEnd": "2023-01-02T00:01:00Z",
      "subject": "customer-1",
      "groupBy": {}
    }
  ]
}
```
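
The hourly query above buckets events into windows by their `time` and aggregates per window. This is not OpenMeter's implementation, but the aggregation model can be sketched in a few lines of Python, counting the three events ingested earlier into hourly windows:

```python
from collections import defaultdict
from datetime import datetime

# Timestamps of the three events ingested in step 2;
# each counts as one request.
event_times = [
    "2023-01-01T00:00:00.001Z",
    "2023-01-01T00:00:00.001Z",
    "2023-01-02T00:00:00.001Z",
]

def hour_window(ts):
    """Truncate an ISO-8601 timestamp to the start of its hour (UTC)."""
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return dt.replace(minute=0, second=0, microsecond=0).isoformat()

counts = defaultdict(int)
for ts in event_times:
    counts[hour_window(ts)] += 1

for window_start, value in sorted(counts.items()):
    print(window_start, value)
```

This reproduces the shape of the hourly query result: 2 requests in the first hour of 2023-01-01 and 1 in the first hour of 2023-01-02; omitting `windowSize` collapses everything into a single total, as in the `subject=customer-1` query.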

4. Configure additional meter(s) (optional)

In this example we will meter LLM token usage, grouped by AI model and prompt type. Think of it as similar to how OpenAI charges for ChatGPT by token.

Add a new `tokens_total` meter to configure how OpenMeter should process these usage events:

```yaml
# ...
meters:
  # Sample meter to count LLM Token Usage
  - slug: tokens_total
    description: AI Token Usage
    eventType: prompt          # Filter events by type
    aggregation: SUM
    valueProperty: $.tokens    # JSONPath to parse usage value
    groupBy:
      model: $.model           # AI model used: gpt4-turbo, etc.
      type: $.type             # Prompt type: input, output, system
```
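
To make the meter definition concrete, here is a Python sketch (not OpenMeter code) of what this configuration asks for: filter events by `eventType`, extract the value at `$.tokens`, and SUM it per `(model, type)` group. The sample events below are hypothetical:

```python
from collections import defaultdict

# Hypothetical events; "type" here is the CloudEvents type field,
# and "data" holds the fields the meter's JSONPaths point at.
events = [
    {"type": "prompt", "data": {"tokens": 12, "model": "gpt4-turbo", "type": "input"}},
    {"type": "prompt", "data": {"tokens": 30, "model": "gpt4-turbo", "type": "output"}},
    {"type": "request", "data": {"tokens": 99}},  # skipped: eventType != prompt
]

totals = defaultdict(int)
for event in events:
    if event["type"] != "prompt":                          # eventType filter
        continue
    key = (event["data"]["model"], event["data"]["type"])  # groupBy: model, type
    totals[key] += event["data"]["tokens"]                 # SUM of valueProperty $.tokens

print(dict(totals))
```

Each `groupBy` key becomes a queryable dimension, so the resulting meter can be queried with `groupBy=model&groupBy=type` just like `api_requests_total` was queried by `method` and `route` above.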

Cleanup

Once you are done, stop any running instances:

```sh
docker compose down -v
```