Raiana API Reference
OpenAI-compatible API for medical device regulatory intelligence — MDR, IVDR, FDA, and EU AI Act
The Raiana API provides programmatic access to our regulatory AI assistants: ChatMDR, ChatIVDR, ChatFDA, and ChatAIAct. Each assistant is augmented with up-to-date regulatory texts, MDCG guidance documents, FDA guidances, and applicable standards.
The API follows the OpenAI Chat Completions format, making it easy to integrate using the OpenAI Python library, any OpenAI-compatible client, or plain HTTP requests.
Base URL
All API endpoints are relative to the base URL https://api.chatmdr.eu/v1. For example, the chat completions endpoint is available at https://api.chatmdr.eu/v1/chat/completions.
Authentication
All requests must include an API key in the Authorization header using the Bearer token scheme:
Authorization: Bearer raikey_YOUR_API_KEY_HERE
Raiana API keys use the raikey_ prefix. You can obtain an API key from your Raiana portal account.
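As a minimal sketch using only Python's standard library, the header can be attached like this (the key shown is a placeholder; substitute your own):

```python
import json
import urllib.request

API_KEY = "raikey_YOUR_API_KEY_HERE"  # placeholder, not a real key

def authed_request(payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for the chat completions endpoint."""
    return urllib.request.Request(
        "https://api.chatmdr.eu/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = authed_request({
    "model": "chatmdr-fast-openai",
    "messages": [{"role": "user", "content": "What is MDR Article 10?"}],
})
# Send with urllib.request.urlopen(req) once a real key is in place.
```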
Endpoints
Create Chat Completion
Sends a conversation to one of the Raiana regulatory assistants and returns a response grounded in regulatory source texts.
Request Body Parameters
| Parameter | Type | Description |
|---|---|---|
| model required | string | The ID of the model to use. See Available Models below for the full list. |
| messages required | array | A list of messages comprising the conversation. Each message is an object with a role and content. Supported roles are system, developer, user, and assistant. At least one message is required. |
| max_tokens optional | integer | Maximum number of tokens to generate in the response. Must be between 100 and 4096. Defaults to the model’s configured output limit (typically 1024). |
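These constraints can be checked client-side before sending a request, avoiding an avoidable 400 response. A sketch (the helper name is our own, not part of the API):

```python
def validate_request(body: dict) -> list[str]:
    """Return a list of problems with a chat-completions request body,
    based on the documented parameter constraints."""
    errors = []
    if not body.get("model"):
        errors.append("model is required")
    messages = body.get("messages")
    if not messages:
        errors.append("messages must contain at least one message")
    else:
        allowed = {"system", "developer", "user", "assistant"}
        for i, msg in enumerate(messages):
            if msg.get("role") not in allowed:
                errors.append(f"messages[{i}]: unknown role {msg.get('role')!r}")
    max_tokens = body.get("max_tokens")
    if max_tokens is not None and not (100 <= max_tokens <= 4096):
        errors.append("max_tokens must be between 100 and 4096")
    return errors
```

An empty list means the body satisfies every documented constraint.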
Message Roles
| Role | Behavior |
|---|---|
| system / developer | Sets additional instructions on top of the built-in regulatory system prompt. The Raiana system prompt always takes precedence — your instructions supplement it rather than replace it. |
| user | A message from the user (the question or input). |
| assistant | A previous response from the assistant. Use alternating user and assistant messages to provide conversation history. |
Example Request
curl -X POST https://api.chatmdr.eu/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer raikey_YOUR_API_KEY_HERE" \
-d '{
"model": "chatmdr-fast-openai",
"messages": [
{
"role": "user",
"content": "What is a Class III medical device under MDR?"
}
],
"max_tokens": 1024
}'
Response Format
Responses follow the OpenAI Chat Completion format:
{
"object": "chat.completion",
"id": "825c3790-ec66-4008-8b30-99efd968707c",
"model": "chatmdr-fast-openai",
"created": 1751373000,
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Under MDR, Class III devices are the highest risk class..."
}
}
],
"usage": {
"input_tokens": 5820,
"output_tokens": 387,
"total_tokens": 6207
}
}
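Given this shape, pulling the answer and the usage statistics out of a response is a one-liner each; a minimal sketch applied to the sample above:

```python
def extract_answer(response: dict) -> tuple[str, dict]:
    """Return the assistant's text and the usage statistics
    from a chat.completion response."""
    content = response["choices"][0]["message"]["content"]
    return content, response["usage"]

# The sample response from above, abbreviated to the fields we read:
sample = {
    "object": "chat.completion",
    "model": "chatmdr-fast-openai",
    "choices": [{"index": 0, "message": {
        "role": "assistant",
        "content": "Under MDR, Class III devices are the highest risk class...",
    }}],
    "usage": {"input_tokens": 5820, "output_tokens": 387, "total_tokens": 6207},
}

answer, usage = extract_answer(sample)
print(f"{usage['total_tokens']} tokens: {answer[:30]}...")
```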
Response Fields
| Field | Type | Description |
|---|---|---|
| object | string | Always "chat.completion". |
| id | string | A unique identifier for this completion request. |
| model | string | The model used to generate the response. |
| created | number | Unix timestamp of when the response was generated. |
| choices | array | An array containing the generated response. Always contains exactly one choice. |
| choices[0].message.role | string | Always "assistant". |
| choices[0].message.content | string | The generated regulatory answer, typically including references to relevant articles, MDCG guidances, or standards. |
| usage | object | Token usage statistics for the request. |
| usage.input_tokens | integer | Number of tokens in the prompt (including regulatory context added by Raiana). |
| usage.output_tokens | integer | Number of tokens in the generated response. |
| usage.total_tokens | integer | Total tokens consumed (input + output). |
List Models
Returns a list of all available models.
curl https://api.chatmdr.eu/v1/models \
-H "Authorization: Bearer raikey_YOUR_API_KEY_HERE"
Retrieve Model
Returns details about a specific model.
curl https://api.chatmdr.eu/v1/models/chatmdr-fast-openai \
-H "Authorization: Bearer raikey_YOUR_API_KEY_HERE"
Available Models
Each model targets a specific regulatory domain and comes in two tiers: Fast for quick answers, and Smart for deeper analysis.
ChatMDR — EU Medical Device Regulation
Expert on Regulation (EU) 2017/745, MDCG guidance documents, and harmonised standards for medical devices in Europe.
ChatIVDR — EU In Vitro Diagnostic Regulation
Expert on Regulation (EU) 2017/746 (IVDR), MDCG guidance documents, and standards for in vitro diagnostic devices.
ChatFDA — FDA Medical Device Regulation
Expert on 21 CFR Parts 800–898, FDA guidance documents, and relevant standards for medical devices in the United States.
ChatAIAct — EU AI Act
Expert on the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and its implications for AI systems, including those used in or as medical devices.
Python Example
The Raiana API is compatible with the OpenAI Python library. Simply point the base_url to the Raiana API:
from openai import OpenAI
# Initialize the client with the Raiana API endpoint
client = OpenAI(
base_url="https://api.chatmdr.eu/v1",
api_key="raikey_YOUR_API_KEY_HERE",
)
# Ask a question about the EU Medical Device Regulation
response = client.chat.completions.create(
model="chatmdr-smart-openai",
messages=[
{
"role": "user",
"content": "What are the requirements for a Quality Management System under MDR?",
}
],
max_tokens=1024,
)
# Print the assistant's answer
print(response.choices[0].message.content)
Multi-Turn Conversation
You can pass conversation history in the messages array to enable follow-up questions:
from openai import OpenAI
client = OpenAI(
base_url="https://api.chatmdr.eu/v1",
api_key="raikey_YOUR_API_KEY_HERE",
)
# Build a multi-turn conversation
messages = [
{"role": "user", "content": "Is software that predicts disease progression a medical device under MDR?"},
{"role": "assistant", "content": "Yes, software intended for diagnosis or prediction of disease is considered a medical device under MDR Article 2(1)..."},
{"role": "user", "content": "What classification would it have?"},
]
response = client.chat.completions.create(
model="chatmdr-smart-openai",
messages=messages,
max_tokens=1024,
)
print(response.choices[0].message.content)
Using a System Prompt
You can add a system (or developer) message to provide additional instructions. These are applied on top of the built-in Raiana system prompt:
response = client.chat.completions.create(
model="chatmdr-smart-openai",
messages=[
{
"role": "system",
"content": "Always answer in German. Focus on Class IIa devices.",
},
{
"role": "user",
"content": "What are the clinical evaluation requirements?",
},
],
)
Error Handling
The API returns standard HTTP status codes. Error responses include a JSON body with a message field:
| Status Code | Meaning |
|---|---|
| 400 | Bad request — missing or invalid parameters (e.g., unknown model, empty messages, max_tokens out of range). |
| 401 | Unauthorized — missing, invalid, or malformed API key. |
| 402 | Payment required — insufficient token balance or no remaining questions on your subscription. |
| 500 | Internal server error. Contact support if this persists. |
| 502 | Upstream error — the Raiana backend encountered an error processing your request. |
# Handle errors gracefully
from openai import OpenAI, APIError
client = OpenAI(
base_url="https://api.chatmdr.eu/v1",
api_key="raikey_YOUR_API_KEY_HERE",
)
try:
response = client.chat.completions.create(
model="chatmdr-smart-openai",
messages=[{"role": "user", "content": "What is Article 10?"}],
)
print(response.choices[0].message.content)
except APIError as e:
print(f"API error: {e.status_code} — {e.message}")
Differences from the OpenAI Chat Completions API
The Raiana API implements a focused subset of the OpenAI Chat Completions API. The following OpenAI features are not supported:
| Feature | Notes |
|---|---|
| stream | Streaming responses are not supported. All responses are returned as a single complete message. |
| tools / tool_choice | Function calling and tool use are not available. |
| response_format | Structured Outputs and JSON mode are not available. Responses are always plain text. |
| n | Multiple completions per request are not supported. Exactly one choice is always returned. |
| temperature / top_p | Sampling parameters are managed server-side and cannot be overridden. |
| stop | Custom stop sequences are not supported. |
| frequency_penalty / presence_penalty | Token penalty parameters are not available. |
| logprobs / top_logprobs | Log probabilities are not returned. |
| logit_bias | Token bias modification is not available. |
| seed | Deterministic sampling is not supported. |
| max_completion_tokens | Use max_tokens instead (range: 100–4096). |
| Image / audio inputs | Only text messages are supported. Multi-modal inputs (images, audio) are not available. |
| modalities | Only text output is supported. |
| prediction | Predicted Outputs are not supported. |
| web_search_options | Web search is not available. |
| store / metadata | Request storage and metadata are not supported. |
| service_tier | Service tier selection is not available. |
| reasoning_effort | Reasoning effort control is not available. |
In addition, any system or developer messages you send are treated as supplementary instructions — they cannot override the core regulatory behavior.
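If you reuse request-building code written for the OpenAI API, one approach is to strip the unsupported parameters before sending. The helper below is a sketch based on the table above (the function name is our own):

```python
UNSUPPORTED = {
    "stream", "tools", "tool_choice", "response_format", "n",
    "temperature", "top_p", "stop", "frequency_penalty",
    "presence_penalty", "logprobs", "top_logprobs", "logit_bias",
    "seed", "max_completion_tokens", "modalities", "prediction",
    "web_search_options", "store", "metadata", "service_tier",
    "reasoning_effort",
}

def sanitize(body: dict) -> dict:
    """Drop OpenAI parameters the Raiana API does not accept, mapping
    max_completion_tokens to max_tokens when no max_tokens is set."""
    clean = {k: v for k, v in body.items() if k not in UNSUPPORTED}
    if "max_completion_tokens" in body and "max_tokens" not in clean:
        clean["max_tokens"] = body["max_completion_tokens"]
    return clean
```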
Rate Limits & Billing
Usage is tracked per API key. Your token balance is deducted based on the usage field returned with each response. The input token count includes the regulatory context that Raiana injects, so it will be higher than just your message text.
Check your remaining balance and manage your API keys in the Raiana Portal.
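Because billing is driven by the usage field, a simple client-side tally can help keep an eye on consumption between portal checks. A sketch (actual balance deduction happens server-side):

```python
class UsageTracker:
    """Accumulate the usage statistics returned with each response."""

    def __init__(self) -> None:
        self.input_tokens = 0
        self.output_tokens = 0

    def record(self, usage: dict) -> None:
        """Add one response's usage field to the running totals."""
        self.input_tokens += usage["input_tokens"]
        self.output_tokens += usage["output_tokens"]

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens

tracker = UsageTracker()
tracker.record({"input_tokens": 5820, "output_tokens": 387, "total_tokens": 6207})
tracker.record({"input_tokens": 6100, "output_tokens": 250, "total_tokens": 6350})
print(tracker.total_tokens)  # 12557
```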