# GraphQL Channel
Ask for exactly the fields you need. The GraphQL channel generates its schema from your entry type configuration, so the API always matches your directory.
## Quick start

Enable the channel:
```shell
SCRIBE_GRAPHQL_ENABLED=true
```

| URL | What you get |
|---|---|
| `/graphql` | Query endpoint (POST) and GraphiQL IDE (GET) |
| `/graphql.sdl` | Schema in SDL format |
| `/graphql.json` | Schema as JSON introspection |
Open `/graphql` in a browser to explore the schema interactively — the GraphiQL UI includes a schema browser, autocompletion, and query history.
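Besides GraphiQL, any HTTP client can POST queries to the endpoint. A minimal sketch using Python's standard library (the localhost URL and port are placeholders for your deployment):

```python
import json
import urllib.request

def build_request(query, variables=None):
    """Standard GraphQL-over-HTTP request body."""
    return {"query": query, "variables": variables or {}}

def graphql(query, variables=None, url="http://localhost:8080/graphql"):
    # POST the JSON body; the server answers with {"data": ..., "errors": ...}.
    body = json.dumps(build_request(query, variables)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `graphql("query { users(first: 5) { nodes { cn } } }")` returns the decoded response body.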
## Entry type naming

GraphQL types and fields are derived from your entry type configuration. Hyphens and underscores are normalized, so `organizational-unit`, `organizational_unit`, and `OrganizationalUnit` all produce the same schema:
| Config name | GraphQL type | Lookup field | Collection field |
|---|---|---|---|
| `organizational-unit` | `OrganizationalUnit` | `organizationalUnit(id: ...)` | `organizationalUnits(...)` |
| `user` or `users` | `User` | `user(id: ...)` | `users(...)` |
Entry type names that produce the same GraphQL symbol in any category (type name, lookup field, or collection field) cause a startup failure. For example, `foo-bar` and `foo_bar` collide on all three; `user` and `users` collide because both normalize to the same lookup and collection names. Names starting with a digit or another character invalid in GraphQL are normalized with an underscore prefix.
Attribute names use the same normalization: hyphens and underscores become camelCase. For example, `mail-Primary` appears as `mailPrimary` in the schema and in queries. If two attributes in the same type normalize to the same GraphQL field name (e.g. `a-b` and `aB` both map to `aB`), startup fails with a collision error before the schema is served. Use distinct catalog attribute names so each normalizes to a unique field.
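The normalization rules above can be sketched as a small function. This is a hypothetical reimplementation for illustration only, not the channel's actual code:

```python
import re

def to_graphql_field(name):
    """Sketch of the documented normalization: split on hyphens and
    underscores, camelCase the segments, lowercase the leading character,
    and prefix an underscore if the result would start with a character
    that is invalid in a GraphQL name (such as a digit)."""
    head, *rest = [s for s in re.split(r"[-_]", name) if s]
    out = head + "".join(s[:1].upper() + s[1:] for s in rest)
    out = out[:1].lower() + out[1:]
    if not re.match(r"[_A-Za-z]", out):
        out = "_" + out
    return out
```

Under these rules `organizational-unit`, `organizational_unit`, and `OrganizationalUnit` all yield `organizationalUnit`, and `a-b` collides with `aB`, matching the behavior described above.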
Use the GraphiQL schema explorer or fetch `/graphql.sdl` for exact names.
## Querying entries

### Collection queries
```graphql
query {
  users(filter: "department=Engineering", first: 25, sort: "-modifyTimestamp") {
    nodes { entryDN cn mail }
    pageInfo { hasNextPage endCursor }
  }
}
```

### Single entry lookup
Look up by any identifier format:
```graphql
query {
  user(id: "dXNlcjo5aXg") {
    entryDN cn mail memberOf
  }
}
```

### Polymorphic lookup
When you don’t know the entry type, use `node`:
```graphql
query {
  node(id: "dXNlcjo5aXg") {
    ... on User { cn, mail }
    ... on Group { cn, member }
  }
}
```

### Filtering
Filter syntax matches the REST API — use FleX, JSON, SCIM, or LDAP format:
```graphql
users(filter: "department=Engineering AND active=true")
```

### Pagination
Relay-style cursor pagination. Use `first`/`after` to page forward:
```graphql
query {
  users(first: 25, after: "Y3Vyc29yOjI1") {
    nodes { cn }
    pageInfo { hasNextPage endCursor }
  }
}
```

Use `count(mode: ESTIMATED)` for fast approximate counts, or `count(mode: EXACT)` when you need precision.
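A forward-pagination loop follows the standard Relay shape. A minimal sketch, where `execute` stands in for whatever transport runs your queries:

```python
USERS_QUERY = """
query Users($first: Int!, $after: String) {
  users(first: $first, after: $after) {
    nodes { cn }
    pageInfo { hasNextPage endCursor }
  }
}
"""

def fetch_all_users(execute, page_size=25):
    """Walk the connection with first/after until hasNextPage is false."""
    nodes, after = [], None
    while True:
        conn = execute(USERS_QUERY, {"first": page_size, "after": after})["users"]
        nodes.extend(conn["nodes"])
        if not conn["pageInfo"]["hasNextPage"]:
            return nodes
        after = conn["pageInfo"]["endCursor"]
```

Passing variables instead of splicing the cursor into the query string keeps the query text stable, which also plays well with persisted queries (see below on this page).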
## History queries

### Point-in-time lookup

View an entry’s state at any moment. The `at` argument accepts timestamps, durations, cursors, and relative expressions like `now-1h`. See Temporal references for all accepted formats.
```graphql
query {
  user(id: "dXNlcjo5aXg", at: "2024-01-15T10:00:00Z") {
    cn mail department
  }
}
```

### Change feed
Query changes across all entry types or per-entry:
```graphql
query {
  changes(range: "24h", type: [MODIFY], first: 100) {
    nodes {
      id type timestamp entryType
      data {
        ... on ModifyData {
          diff { attribute operation value }
        }
      }
    }
    pageInfo { hasNextPage endCursor }
  }
}
```

Filter by time range (`range`, `since`/`until`), event type (`ADD`, `MODIFY`, `MOVE`, `DELETE`), entry type, or affected attributes. Sort by `timestamp` or `-timestamp`.
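For incremental consumers, the connection's `endCursor` can double as a checkpoint: drain everything currently available, persist the cursor, and resume from it on the next poll. A sketch under the assumption that `changes` accepts the same `first`/`after` arguments shown elsewhere on this page (`execute` is again a placeholder transport):

```python
CHANGES_QUERY = """
query Changes($after: String) {
  changes(range: "24h", first: 100, sort: "timestamp", after: $after) {
    nodes { id type timestamp entryType }
    pageInfo { hasNextPage endCursor }
  }
}
"""

def drain_changes(execute, after=None):
    """Return all currently available events plus the cursor to resume from."""
    events = []
    while True:
        conn = execute(CHANGES_QUERY, {"after": after})["changes"]
        events.extend(conn["nodes"])
        if conn["pageInfo"]["endCursor"]:
            after = conn["pageInfo"]["endCursor"]
        if not conn["pageInfo"]["hasNextPage"]:
            return events, after
```

Sorting ascending by `timestamp` keeps the checkpoint monotonic, so re-running `drain_changes` with the saved cursor only yields events you have not seen yet.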
Each entry also has a `changes` field for its own history:
```graphql
user(id: "dXNlcjo5aXg") {
  changes(first: 5) {
    nodes { type timestamp }
  }
}
```

Change events support three data formats: diff, patch, and merge.
## Automatic persisted queries

Cache queries by hash to reduce payload size.
APQ caches query strings server-side, keyed by their SHA-256 hash. Clients send the 64-character hex digest instead of the full query text: typical queries run 10 KB or more, while the digest is always 64 characters.
On first request, the client sends both the query and its hash. After that, only the hash is needed. If the cache evicts the query, the server responds with `PERSISTED_QUERY_NOT_FOUND` and the client re-registers it.
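The handshake is easy to sketch client-side. The extension shape below follows the widely used Apollo APQ convention, and the exact location of the error code in the response is an assumption; check your server's actual responses:

```python
import hashlib

def apq_extension(query):
    """Persisted-query extension: version plus the 64-char SHA-256 hex digest."""
    digest = hashlib.sha256(query.encode("utf-8")).hexdigest()
    return {"persistedQuery": {"version": 1, "sha256Hash": digest}}

def apq_request(post, query):
    """Try hash-only first; on PERSISTED_QUERY_NOT_FOUND, resend with the
    full query text so the server can cache it."""
    payload = {"extensions": apq_extension(query)}
    resp = post(payload)
    not_found = any(
        e.get("extensions", {}).get("code") == "PERSISTED_QUERY_NOT_FOUND"
        for e in resp.get("errors") or []
    )
    if not_found:
        resp = post({"query": query, **payload})
    return resp
```

Here `post` is whatever function sends a JSON body to `/graphql` and returns the decoded response.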
GET requests with the hash enable browser and CDN caching. Apollo Client, urql, and other GraphQL clients handle the APQ flow automatically. See Configuration Reference for cache size and TTL settings.
## Query security

Three layers protect against abusive queries:
| Layer | Protection |
|---|---|
| Parser limits | Reject oversized payloads before parsing |
| Depth limits | Reject deeply nested queries |
| Complexity limits | Reject queries selecting too many fields |
Defaults work for most deployments. See Configuration Reference for tuning.
## Timestamp formatting

Timestamp fields (`createTimestamp`, `modifyTimestamp`, `verifiedTimestamp`) accept a `format` argument with presets like `ISO`, `UNIX`, `UNIX_MS`, `RFC_1123`, `LDAP`, or custom `DateTimeFormatter` patterns:
```graphql
user(id: "...") {
  modifyTimestamp(format: "UNIX_MS")
}
```

See Signals for metrics, Error Handling for error codes.
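As a client-side illustration of what the `ISO`, `UNIX`, and `UNIX_MS` presets conventionally encode (the server does this formatting for you via the `format` argument; this sketch only shows one instant rendered three ways):

```python
from datetime import datetime, timezone

# One UTC instant, rendered the way the three presets are commonly understood.
ts = datetime(2024, 1, 15, 10, 0, 0, tzinfo=timezone.utc)

iso = ts.strftime("%Y-%m-%dT%H:%M:%SZ")   # ISO 8601, e.g. 2024-01-15T10:00:00Z
unix = int(ts.timestamp())                # whole seconds since the Unix epoch
unix_ms = int(ts.timestamp() * 1000)      # milliseconds since the Unix epoch
```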