Configuration File
GetProfile uses a JSON configuration file. Create `config/getprofile.json`.
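The sketch below shows its overall shape, based on the sections documented later on this page; the model name, numeric values, schema path, and the exact `upstream` and `server` field names are illustrative assumptions, not authoritative defaults:

```json
{
  "database": {
    "url": "${DATABASE_URL}",
    "poolSize": 10
  },
  "llm": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}",
    "model": "gpt-4o-mini"
  },
  "upstream": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}"
  },
  "memory": {
    "maxMessagesPerProfile": 200,
    "extractionEnabled": true,
    "summarizationInterval": 60,
    "retentionDays": null
  },
  "traits": {
    "schemaPath": "config/traits.json",
    "extractionEnabled": true,
    "defaultTraitsEnabled": true,
    "allowRequestOverride": true
  },
  "server": {
    "port": 3100,
    "host": "0.0.0.0"
  }
}
```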
Provider-Agnostic: GetProfile works with OpenAI, Anthropic, OpenRouter, or any OpenAI-compatible API. Just change the `provider` field and model name.

Environment Variables
Minimalistic Approach: GetProfile only uses environment variables for secrets and high-level server settings. All other configuration goes in `config/getprofile.json`. Secrets can be referenced from the config file using `${VAR_NAME}` syntax.
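For example, this illustrative snippet pulls the LLM key from the environment instead of hard-coding it:

```json
{
  "llm": {
    "apiKey": "${LLM_API_KEY}"
  }
}
```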
Required Secrets
| Variable | Description |
|---|---|
| `DATABASE_URL` | PostgreSQL connection string |
| `LLM_API_KEY` | API key for your LLM provider (works with OpenAI, Anthropic, OpenRouter, etc.) |
Provider-specific fallback keys (used when `LLM_API_KEY` is not set):
- `OPENAI_API_KEY` - OpenAI-specific key
- `ANTHROPIC_API_KEY` - Anthropic-specific key
Server Settings
| Variable | Default | Description |
|---|---|---|
| `PORT` | `3100` | Proxy server port |
| `HOST` | `0.0.0.0` | Proxy server host |
Optional Secrets
| Variable | Default | Description |
|---|---|---|
| `GETPROFILE_API_KEY` | - | API key for proxy authentication |
Environment Variable Support: All configuration can be set via environment variables for backward compatibility and deployment flexibility. The config file (`config/getprofile.json`) is the recommended approach for structured configuration, but environment variables take precedence when both are set.

Environment variables that map to config file settings:
- `UPSTREAM_API_KEY`, `UPSTREAM_BASE_URL`, `UPSTREAM_PROVIDER` → `upstream` section
- `GETPROFILE_MAX_MESSAGES` → `memory.maxMessagesPerProfile`
- `GETPROFILE_SUMMARY_INTERVAL` → `memory.summarizationInterval`
- `LLM_API_KEY`, `LLM_PROVIDER`, `LLM_MODEL`, `LLM_BASE_URL` → `llm` section
- `PORT`, `HOST` → `server` section
Configuration Sections
database
| Field | Type | Description |
|---|---|---|
| `url` | string | PostgreSQL connection string |
| `poolSize` | number | Connection pool size (default: 10) |
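An illustrative `database` section, using `${VAR_NAME}` substitution for the connection string:

```json
{
  "database": {
    "url": "${DATABASE_URL}",
    "poolSize": 10
  }
}
```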
llm
The LLM used for internal processing (extraction and summarization). Provider examples follow the field table below.
| Field | Type | Description |
|---|---|---|
| `provider` | string | `openai`, `anthropic`, or `custom` |
| `apiKey` | string | API key for the provider |
| `model` | string | Model to use for extraction |
| `baseUrl` | string | Custom API endpoint (for `custom` providers) |
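The sketches below are illustrative: the model names are placeholders, and the OpenRouter example assumes the `custom` provider pointed at OpenRouter's OpenAI-compatible endpoint.

OpenAI Example:

```json
{
  "llm": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}",
    "model": "gpt-4o-mini"
  }
}
```

Anthropic Example:

```json
{
  "llm": {
    "provider": "anthropic",
    "apiKey": "${LLM_API_KEY}",
    "model": "claude-3-5-haiku-latest"
  }
}
```

OpenRouter Example:

```json
{
  "llm": {
    "provider": "custom",
    "apiKey": "${LLM_API_KEY}",
    "model": "openai/gpt-4o-mini",
    "baseUrl": "https://openrouter.ai/api/v1"
  }
}
```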
upstream
The LLM provider to which chat completion requests are forwarded. It can be different from the LLM used for extraction. Configuration sketches follow.
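These sketches assume the `upstream` section mirrors the `llm` fields (`provider`, `apiKey`, `baseUrl`), in line with the `UPSTREAM_*` environment-variable mappings above; treat the exact field names as assumptions.

Same as LLM (default):

```json
{
  "upstream": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}"
  }
}
```

Different provider:

```json
{
  "llm": {
    "provider": "openai",
    "apiKey": "${LLM_API_KEY}",
    "model": "gpt-4o-mini"
  },
  "upstream": {
    "provider": "anthropic",
    "apiKey": "${UPSTREAM_API_KEY}"
  }
}
```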
Per-request override via headers:

Clients can override the upstream provider using these headers:
- `X-Upstream-Provider`: `openai`, `anthropic`, or `custom`
- `X-Upstream-Key`: API key for that provider
- `X-Upstream-Base-URL`: Custom base URL (optional)
memory
| Field | Type | Description |
|---|---|---|
| `maxMessagesPerProfile` | number | Soft limit triggering cleanup |
| `extractionEnabled` | boolean | Enable memory extraction |
| `summarizationInterval` | number | Minutes between summary updates |
| `retentionDays` | number | Auto-delete old messages (`null` = disabled) |
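An illustrative `memory` section; the numeric values below are placeholders, not documented defaults:

```json
{
  "memory": {
    "maxMessagesPerProfile": 200,
    "extractionEnabled": true,
    "summarizationInterval": 60,
    "retentionDays": null
  }
}
```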
traits
| Field | Type | Description |
|---|---|---|
| `schemaPath` | string | Path to trait schema JSON |
| `extractionEnabled` | boolean | Enable trait extraction |
| `defaultTraitsEnabled` | boolean | Include default traits |
| `allowRequestOverride` | boolean | Allow per-request traits |
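An illustrative `traits` section; the schema path shown is a hypothetical example:

```json
{
  "traits": {
    "schemaPath": "config/traits.json",
    "extractionEnabled": true,
    "defaultTraitsEnabled": true,
    "allowRequestOverride": true
  }
}
```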
server
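A sketch of the `server` section, assuming field names `port` and `host` that mirror the `PORT` and `HOST` environment variables; the values shown are the documented defaults:

```json
{
  "server": {
    "port": 3100,
    "host": "0.0.0.0"
  }
}
```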