Configuration File

GetProfile uses a JSON configuration file. Create config/getprofile.json:
{
  "database": {
    "url": "${DATABASE_URL}",
    "poolSize": 10
  },

  "llm": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}",
    "model": "gpt-5-mini"
  },

  "upstream": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}"
  },

  "memory": {
    "maxMessagesPerProfile": 1000,
    "extractionEnabled": true,
    "summarizationInterval": 60
  },

  "traits": {
    "schemaPath": "./config/traits/default.traits.json",
    "extractionEnabled": true,
    "defaultTraitsEnabled": true,
    "allowRequestOverride": true
  },

  "server": {
    "port": 3100,
    "host": "0.0.0.0"
  }
}
Provider-Agnostic: GetProfile works with OpenAI, Anthropic, OpenRouter, or any OpenAI-compatible API. Just change the provider field and model name.

Environment Variables

Minimalistic Approach: GetProfile uses environment variables only for secrets and high-level server settings. All other configuration goes in config/getprofile.json.
Configuration values can reference environment variables using ${VAR_NAME} syntax.
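
The substitution itself is simple string interpolation. Below is a minimal TypeScript sketch of how ${VAR_NAME} references could be resolved against process.env; the actual loader inside GetProfile may differ.

// Sketch (assumption): resolve ${VAR_NAME} references from process.env.
function interpolate(value: string): string {
  return value.replace(/\$\{([A-Za-z_][A-Za-z0-9_]*)\}/g, (_, name) => process.env[name] ?? "");
}

// "${DATABASE_URL}" in config/getprofile.json becomes the value of DATABASE_URL.
const databaseUrl = interpolate("${DATABASE_URL}");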

Required Secrets

Variable       Description
DATABASE_URL   PostgreSQL connection string
LLM_API_KEY    API key for your LLM provider (works with OpenAI, Anthropic, OpenRouter, etc.)
Provider-specific keys (optional; they fall back to LLM_API_KEY when unset), as sketched after this list:
  • OPENAI_API_KEY - OpenAI-specific key
  • ANTHROPIC_API_KEY - Anthropic-specific key
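
The fallback behavior can be pictured with a short TypeScript sketch; the real key resolution is internal to GetProfile and may differ.

// Sketch (assumption): prefer a provider-specific key, otherwise use LLM_API_KEY.
function resolveApiKey(provider: "openai" | "anthropic" | "custom"): string | undefined {
  const specific: Record<string, string | undefined> = {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
  };
  return specific[provider] ?? process.env.LLM_API_KEY;
}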

Server Settings

Variable   Default   Description
PORT       3100      Proxy server port
HOST       0.0.0.0   Proxy server host

Optional Secrets

Variable             Default   Description
GETPROFILE_API_KEY   -         API key for proxy authentication
Environment Variable Support: All configuration can also be set via environment variables for backward compatibility and deployment flexibility. The config file (config/getprofile.json) is the recommended approach for structured configuration, but environment variables take precedence when both are set.

Environment variables that map to config file settings:
  • UPSTREAM_API_KEY, UPSTREAM_BASE_URL, UPSTREAM_PROVIDER → upstream section
  • GETPROFILE_MAX_MESSAGES → memory.maxMessagesPerProfile
  • GETPROFILE_SUMMARY_INTERVAL → memory.summarizationInterval
  • LLM_API_KEY, LLM_PROVIDER, LLM_MODEL, LLM_BASE_URL → llm section
  • PORT, HOST → server section
Priority order: Environment variables > Config file > Defaults
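
As a concrete sketch of that order for one setting (GETPROFILE_MAX_MESSAGES versus memory.maxMessagesPerProfile; the loader details are assumed rather than taken from GetProfile's source):

import { readFileSync } from "node:fs";

// Priority for memory.maxMessagesPerProfile:
// environment variable > config file > built-in default.
const config = JSON.parse(readFileSync("config/getprofile.json", "utf8"));

const fromEnv = process.env.GETPROFILE_MAX_MESSAGES;
const maxMessages: number =
  fromEnv !== undefined
    ? Number(fromEnv)                               // environment variable wins
    : config.memory?.maxMessagesPerProfile ?? 1000; // then config file, then default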

Configuration Sections

Database

{
  "database": {
    "url": "postgresql://user:pass@localhost:5432/getprofile",
    "poolSize": 10
  }
}
Field      Type     Description
url        string   PostgreSQL connection string
poolSize   number   Connection pool size (default: 10)
LLM

The LLM used for internal processing (extraction and summarization).

OpenAI Example:
{
  "llm": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}",
    "model": "gpt-5-mini"
  }
}
Anthropic Example:
{
  "llm": {
    "provider": "anthropic",
    "apiKey": "${ANTHROPIC_API_KEY}",
    "model": "claude-4-5-sonnet"
  }
}
OpenRouter Example:
{
  "llm": {
    "provider": "custom",
    "apiKey": "${LLM_API_KEY}",
    "baseUrl": "https://openrouter.ai/api/v1",
    "model": "anthropic/claude-4.5-sonnet"
  }
}
Field      Type     Description
provider   string   openai, anthropic, or custom
apiKey     string   API key for the provider
model      string   Model to use for extraction
baseUrl    string   Custom API endpoint (for custom providers)
Upstream

The LLM provider to which chat completion requests are forwarded. It can be different from the LLM used for extraction.

Same as LLM (default):
{
  "upstream": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}"
  }
}
Different provider:
{
  "upstream": {
    "provider": "anthropic",
    "apiKey": "${ANTHROPIC_API_KEY}"
  }
}
Per-request override via headers: Clients can override the upstream provider on individual requests using the following headers (an example request follows the list):
  • X-Upstream-Provider: openai, anthropic, or custom
  • X-Upstream-Key: API key for that provider
  • X-Upstream-Base-URL: Custom base URL (optional)
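
For example, a client could route a single request through Anthropic while the configured default upstream stays untouched. This sketch assumes GetProfile exposes an OpenAI-compatible /v1/chat/completions endpoint on the configured port and that GETPROFILE_API_KEY is sent as a Bearer token; adjust both to your deployment.

// Sketch: per-request upstream override using the documented X-Upstream-* headers.
// The endpoint path and Authorization format are assumptions, not confirmed API details.
const response = await fetch("http://localhost:3100/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.GETPROFILE_API_KEY}`,
    "X-Upstream-Provider": "anthropic",
    "X-Upstream-Key": process.env.ANTHROPIC_API_KEY ?? "",
  },
  body: JSON.stringify({
    model: "claude-4-5-sonnet",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
console.log(await response.json());
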
Memory

{
  "memory": {
    "maxMessagesPerProfile": 1000,
    "extractionEnabled": true,
    "summarizationInterval": 60,
    "retentionDays": null
  }
}
Field                   Type      Description
maxMessagesPerProfile   number    Soft limit that triggers cleanup
extractionEnabled       boolean   Enable memory extraction
summarizationInterval   number    Minutes between summary updates
retentionDays           number    Auto-delete old messages (null = disabled)
Traits

{
  "traits": {
    "schemaPath": "./config/traits/default.traits.json",
    "extractionEnabled": true,
    "defaultTraitsEnabled": true,
    "allowRequestOverride": true
  }
}
Field                  Type      Description
schemaPath             string    Path to trait schema JSON
extractionEnabled      boolean   Enable trait extraction
defaultTraitsEnabled   boolean   Include default traits
allowRequestOverride   boolean   Allow per-request traits
Server

{
  "server": {
    "port": 3100,
    "host": "0.0.0.0"
  }
}