l1m (pronounced "el-one-em")

A proxy to extract structured data from text and images using LLMs.


Why l1m?

l1m is the easiest way to get structured data from unstructured text or images using LLMs. No prompt engineering, no chat history: just a simple API to extract structured JSON from text or images.

Features

Quick Start

Image Example

curl -X POST https://api.l1m.io/structured \
-H "Content-Type: application/json" \
-H "X-Provider-Url: demo" \
-H "X-Provider-Key: demo" \
-H "X-Provider-Model: demo" \
-d '{
  "input": "'$(curl -s https://public.l1m.io/menu.jpg | base64 | tr -d '\n')'",
  "schema": {
    "type": "object",
    "properties": {
      "items": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "price": { "type": "number" }
          }
        }
      }
    }
  }
}'

↑ Copy and run this example in your terminal. The demo endpoints return pre-rendered LLM responses for quick testing.
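The response is a JSON object shaped like the schema you sent. As a sketch (the sample payload below is hypothetical; run the curl command above to see the real one), you can pull individual fields out with jq:

```shell
# Hypothetical response for the menu schema above; the actual values come
# back from the API call.
RESPONSE='{"items":[{"name":"Espresso","price":3.5},{"name":"Latte","price":4.5}]}'

# Extract the first item's name and sum all prices.
FIRST_NAME=$(echo "$RESPONSE" | jq -r '.items[0].name')
TOTAL=$(echo "$RESPONSE" | jq '[.items[].price] | add')

echo "$FIRST_NAME"   # Espresso
echo "$TOTAL"        # 8
```

Because the response mirrors your schema, downstream scripts can rely on the field names being present.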

Text Example

curl -X POST https://api.l1m.io/structured \
-H "Content-Type: application/json" \
-H "X-Provider-Url: demo" \
-H "X-Provider-Key: demo" \
-H "X-Provider-Model: demo" \
-d '{
  "input": "A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913",
  "schema": {
    "type": "object",
    "properties": {
      "year": {
        "type": "number",
        "description": "The year the Federal Reserve Act was enacted"
      }
    }
  }
}'

↑ Copy and run this example in your terminal. The demo endpoints return pre-rendered LLM responses for quick testing.
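Inlining the request body in single quotes gets fragile as soon as the input contains quotes, newlines, or shell variables. One safer way to build the body (a sketch, using jq, which the recipes below already assume is installed) is `jq -n --arg`:

```shell
# Input with embedded double quotes, which would break naive string splicing.
INPUT='A particularly severe crisis in 1907 led Congress to enact the "Federal Reserve Act" in 1913'

# jq escapes $INPUT for us, producing a valid JSON body.
BODY=$(jq -n --arg input "$INPUT" '{
  input: $input,
  schema: {
    type: "object",
    properties: {
      year: { type: "number" }
    }
  }
}')

# Then POST it:
# curl -X POST https://api.l1m.io/structured \
#   -H "Content-Type: application/json" \
#   -H "X-Provider-Url: demo" -H "X-Provider-Key: demo" -H "X-Provider-Model: demo" \
#   -d "$BODY"
```

This pattern also round-trips cleanly: parsing `$BODY` back with jq returns the original input unchanged.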

Recipes

Cache model response with a TTL
#!/bin/bash

# Run the same request multiple times with caching enabled
# A cache key is generated from the input, schema, provider key and model
# Cache key = hash(input + schema + x-provider-key + x-provider-model)
for i in {1..5}; do
  curl -X POST https://api.l1m.io/structured \
  -H "Content-Type: application/json" \
  -H "X-Provider-Url: $PROVIDER_URL" \
  -H "X-Provider-Key: $PROVIDER_KEY" \
  -H "X-Provider-Model: $PROVIDER_MODEL" \
  -H "X-Cache-TTL: 300" \
  -d '{
    "input": "The weather in San Francisco is sunny and 72°F",
    "schema": {
      "type": "object",
      "properties": {
        "temperature": { "type": "number" },
        "conditions": { "type": "string" }
      }
    }
  }'
  echo -e "\n--- Request $i completed ---\n"
done

# Only 1 LLM call will be made, subsequent calls will be served from cache
# for the duration specified in X-Cache-TTL (300 seconds in this example)
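The cache key itself is computed server-side, but the idea can be illustrated locally. This is a conceptual sketch only: the real hash function and field ordering are internal to l1m (`sha256sum` is GNU coreutils; on macOS use `shasum -a 256`).

```shell
# Illustrative only: identical (input, schema, provider key, model) tuples
# map to the same cache entry; changing any component yields a new key.
cache_key() {
  # $1=input  $2=schema  $3=provider-key  $4=provider-model
  printf '%s|%s|%s|%s' "$1" "$2" "$3" "$4" | sha256sum | cut -d' ' -f1
}

KEY_A=$(cache_key "sunny and 72F" '{"type":"object"}' "sk-123" "gpt-4o")
KEY_B=$(cache_key "sunny and 72F" '{"type":"object"}' "sk-123" "gpt-4o")
KEY_C=$(cache_key "sunny and 72F" '{"type":"object"}' "sk-123" "gpt-4o-mini")

# KEY_A equals KEY_B (cache hit); KEY_C differs (different model, cache miss).
```

The practical upshot: two users with different provider keys never share cache entries, and changing the schema or model always triggers a fresh LLM call.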
Tool calling / Routing

While l1m does not support "tool calling" in the way other model providers do, you can achieve similar routing behavior with an enum property in your schema.

INPUT="Please find the user [email protected] and get their details"

# First call to determine which tool to use
TOOL_RESPONSE=$(curl -s -X POST https://api.l1m.io/structured \
-H "Content-Type: application/json" \
-H "X-Provider-Url: $PROVIDER_URL" \
-H "X-Provider-Key: $PROVIDER_KEY" \
-H "X-Provider-Model: $PROVIDER_MODEL" \
-d '{
  "input": "'"$INPUT"'",
  "instruction": "Select the most appropriate tool based on the input",
  "schema": {
    "type": "object",
    "properties": {
      "selected_tool": {
        "type": "string",
        "enum": ["getUserByEmail", "getUserByName", "getUserById"]
      }
    }
  }
}')

# Extract the selected tool from the response
SELECTED_TOOL=$(echo "$TOOL_RESPONSE" | jq -r '.selected_tool')

# Switch case to handle different tool types and extract appropriate arguments
case $SELECTED_TOOL in
  "getUserByEmail")
    # Make a follow up call to extract additional argument
    ARGS=$(curl -s -X POST https://api.l1m.io/structured \
    -H "Content-Type: application/json" \
    -H "X-Provider-Url: $PROVIDER_URL" \
    -H "X-Provider-Key: $PROVIDER_KEY" \
    -H "X-Provider-Model: $PROVIDER_MODEL" \
    -d '{
      "input": "'"$INPUT"'",
      "schema": {
        "type": "object",
        "properties": {
          "email": {
            "type": "string",
            "format": "email",
            "description": "Extract the email address to use as argument"
          }
        }
      }
    }')
    EMAIL=$(echo "$ARGS" | jq -r '.email')
    echo "Calling getUserByEmail with email: $EMAIL"
    ;;

  "getUserByName")
    # ...
    ;;
  "getUserById")
    # ...
    ;;
  *)
    echo "Error: Unknown tool selected: $SELECTED_TOOL"
    exit 1
    ;;
esac
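One maintenance hazard in the pattern above is that the enum in the schema and the branches of the case statement can drift apart. A sketch of keeping them in sync from a single tool list (again using jq to build the schema):

```shell
# Single source of truth for the available tools.
TOOLS='["getUserByEmail","getUserByName","getUserById"]'

# Generate the routing schema from the tool list, so adding a tool
# only requires editing $TOOLS and adding its case branch.
SCHEMA=$(jq -n --argjson tools "$TOOLS" '{
  type: "object",
  properties: {
    selected_tool: { type: "string", enum: $tools }
  }
}')

echo "$SCHEMA" | jq -r '.properties.selected_tool.enum[]'
# getUserByEmail
# getUserByName
# getUserById
```

`$SCHEMA` can then be spliced into the request body with `jq -n --argjson schema "$SCHEMA" --arg input "$INPUT" '{input: $input, schema: $schema, instruction: "Select the most appropriate tool based on the input"}'`.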
Using local models with Ollama
#!/bin/bash

# Make sure Ollama is running locally with your desired model
# Example: ollama run llama3

# Set up ngrok to expose your local Ollama server
# ngrok http 11434 --host-header="localhost:11434"

# Replace the ngrok URL below with your actual ngrok URL. (A localhost URL
# only works if you are running l1m itself on the same machine as Ollama;
# the hosted API at api.l1m.io cannot reach your localhost.)
curl -X POST https://api.l1m.io/structured \
-H "Content-Type: application/json" \
-H "X-Provider-Url: https://your-ngrok-url.ngrok-free.app/v1" \
-H "X-Provider-Key: ollama" \
-H "X-Provider-Model: llama3:latest" \
-d '{
  "input": "A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913",
  "schema": {
    "type": "object",
    "properties": {
      "year": {
        "type": "number",
        "description": "The year of the federal reserve act"
      }
    }
  }
}'

Documentation

API

Headers

Body

Response and Error handling

Supported Data Types

For images, base64 encoded data can be in one of the following formats:

SDKs

l1m provides official SDKs for multiple programming languages to help you integrate structured data extraction into your applications:

Managed API Pricing

Stay Updated

Join our waitlist to get early access to the production release of our hosted version.
