Raiana API Reference

OpenAI-compatible API for medical device regulatory intelligence — MDR, IVDR, FDA, and EU AI Act

The Raiana API provides programmatic access to our regulatory AI assistants: ChatMDR, ChatIVDR, ChatFDA, and ChatAIAct. Each assistant is augmented with up-to-date regulatory texts, MDCG guidance documents, FDA guidances, and applicable standards.

The API follows the OpenAI Chat Completions format, making it easy to integrate using the OpenAI Python library, any OpenAI-compatible client, or plain HTTP requests.

Base URL

https://api.chatmdr.eu/v1

All API endpoints are relative to this base URL. For example, the chat completions endpoint is available at https://api.chatmdr.eu/v1/chat/completions.

Authentication

All requests must include an API key in the Authorization header using the Bearer token scheme:

Authorization: Bearer raikey_YOUR_API_KEY_HERE

Raiana API keys use the raikey_ prefix. You can obtain an API key from your Raiana portal account.

Keep your key secret. Do not expose API keys in client-side code or public repositories. Your key is used to track usage and bill against your token balance.
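A common way to keep the key out of source code is to read it from an environment variable. A minimal sketch; the RAIANA_API_KEY variable name is illustrative, not something the API mandates:

```python
import os

def load_api_key() -> str:
    """Read the Raiana API key from the environment instead of hard-coding it.

    RAIANA_API_KEY is an illustrative variable name; any name works as long
    as the value carries the raikey_ prefix.
    """
    key = os.environ.get("RAIANA_API_KEY", "")
    if not key.startswith("raikey_"):
        raise RuntimeError("Set RAIANA_API_KEY to a key with the raikey_ prefix")
    return key
```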

Endpoints

Create Chat Completion

POST   /v1/chat/completions

Sends a conversation to one of the Raiana regulatory assistants and returns a response grounded in regulatory source texts.

Request Body Parameters

model (string, required): The ID of the model to use. See Available Models below for the full list.
messages (array, required): A list of messages comprising the conversation. Each message is an object with a role and content. Supported roles are system, developer, user, and assistant. At least one message is required.
max_tokens (integer, optional): Maximum number of tokens to generate in the response. Must be between 100 and 4096. Defaults to the model's configured output limit (typically 1024).

Message Roles

system / developer: Sets additional instructions on top of the built-in regulatory system prompt. The Raiana system prompt always takes precedence; your instructions supplement it rather than replace it.
user: A message from the user (the question or input).
assistant: A previous response from the assistant. Use alternating user and assistant messages to provide conversation history.

Example Request

curl -X POST https://api.chatmdr.eu/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer raikey_YOUR_API_KEY_HERE" \
  -d '{
    "model": "chatmdr-fast-openai",
    "messages": [
      {
        "role": "user",
        "content": "What is a Class III medical device under MDR?"
      }
    ],
    "max_tokens": 1024
  }'

Response Format

Responses follow the OpenAI Chat Completion format:

{
  "object": "chat.completion",
  "id": "825c3790-ec66-4008-8b30-99efd968707c",
  "model": "chatmdr-fast-openai",
  "created": 1751373000,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Under MDR, Class III devices are the highest risk class..."
      }
    }
  ],
  "usage": {
    "input_tokens": 5820,
    "output_tokens": 387,
    "total_tokens": 6207
  }
}

Response Fields

object (string): Always "chat.completion".
id (string): A unique identifier for this completion request.
model (string): The model used to generate the response.
created (number): Unix timestamp of when the response was generated.
choices (array): An array containing the generated response. Always contains exactly one choice.
choices[0].message.role (string): Always "assistant".
choices[0].message.content (string): The generated regulatory answer, typically including references to relevant articles, MDCG guidances, or standards.
usage (object): Token usage statistics for the request.
usage.input_tokens (integer): Number of tokens in the prompt (including regulatory context added by Raiana).
usage.output_tokens (integer): Number of tokens in the generated response.
usage.total_tokens (integer): Total tokens consumed (input + output).
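When calling the endpoint over plain HTTP rather than through the OpenAI client, these fields can be pulled out of the decoded JSON directly. A minimal sketch using the example response above:

```python
import json

# Example response body (from above), decoded as plain JSON.
body = json.loads("""
{
  "object": "chat.completion",
  "id": "825c3790-ec66-4008-8b30-99efd968707c",
  "model": "chatmdr-fast-openai",
  "created": 1751373000,
  "choices": [
    {"index": 0,
     "message": {"role": "assistant",
                 "content": "Under MDR, Class III devices are the highest risk class..."}}
  ],
  "usage": {"input_tokens": 5820, "output_tokens": 387, "total_tokens": 6207}
}
""")

answer = body["choices"][0]["message"]["content"]   # the generated text
usage = body["usage"]                               # token accounting for billing
print(answer)
print(f"{usage['input_tokens']} in + {usage['output_tokens']} out = {usage['total_tokens']} total")
```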

List Models

GET   /v1/models

Returns a list of all available models.

curl https://api.chatmdr.eu/v1/models \
  -H "Authorization: Bearer raikey_YOUR_API_KEY_HERE"

Retrieve Model

GET   /v1/models/{model_id}

Returns details about a specific model.

curl https://api.chatmdr.eu/v1/models/chatmdr-fast-openai \
  -H "Authorization: Bearer raikey_YOUR_API_KEY_HERE"

Available Models

Each model targets a specific regulatory domain and comes in two tiers: Fast for quick answers, and Smart for deeper analysis.

ChatMDR — EU Medical Device Regulation

Expert on Regulation (EU) 2017/745, MDCG guidance documents, and harmonised standards for medical devices in Europe.

chatmdr-fast-openai (MDR, Fast tier)
chatmdr-smart-openai (MDR, Smart tier)

ChatIVDR — EU In Vitro Diagnostic Regulation

Expert on Regulation (EU) 2017/746 (IVDR), MDCG guidance documents, and standards for in vitro diagnostic devices.

chativdr-fast-openai (IVDR, Fast tier)
chativdr-smart-openai (IVDR, Smart tier)

ChatFDA — FDA Medical Device Regulation

Expert on Title 21 of the Code of Federal Regulations (Parts 800–898), FDA guidance documents, and relevant standards for medical devices in the United States.

chatfda-fast-openai (FDA, Fast tier)
chatfda-smart-openai (FDA, Smart tier)

ChatAIAct — EU AI Act

Expert on the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and its implications for AI systems, including those used in or as medical devices.

chataiact-fast-openai (AI Act, Fast tier)
chataiact-smart-openai (AI Act, Smart tier)

Fast vs. Smart: The Fast tier uses minimal reasoning overhead for quick responses. The Smart tier applies deeper analysis, which may take a little longer but can produce more thorough answers for complex regulatory questions.
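All model IDs follow the same naming pattern (chat<domain>-<tier>-openai), so a small helper can assemble them. Purely illustrative; the API only accepts the literal IDs listed above:

```python
# Domains and tiers taken from the model list above.
DOMAINS = {"mdr", "ivdr", "fda", "aiact"}
TIERS = {"fast", "smart"}

def model_id(domain: str, tier: str) -> str:
    """Build a Raiana model ID from a regulatory domain and a tier.

    Illustrative helper based on the naming pattern of the listed models.
    """
    if domain not in DOMAINS or tier not in TIERS:
        raise ValueError(f"unknown domain/tier: {domain}/{tier}")
    return f"chat{domain}-{tier}-openai"
```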

Python Example

The Raiana API is compatible with the OpenAI Python library. Simply point the base_url to the Raiana API:

from openai import OpenAI

# Initialize the client with the Raiana API endpoint
client = OpenAI(
    base_url="https://api.chatmdr.eu/v1",
    api_key="raikey_YOUR_API_KEY_HERE",
)

# Ask a question about the EU Medical Device Regulation
response = client.chat.completions.create(
    model="chatmdr-smart-openai",
    messages=[
        {
            "role": "user",
            "content": "What are the requirements for a Quality Management System under MDR?",
        }
    ],
    max_tokens=1024,
)

# Print the assistant's answer
print(response.choices[0].message.content)

Multi-Turn Conversation

You can pass conversation history in the messages array to enable follow-up questions:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.chatmdr.eu/v1",
    api_key="raikey_YOUR_API_KEY_HERE",
)

# Build a multi-turn conversation
messages = [
    {"role": "user", "content": "Is software that predicts disease progression a medical device under MDR?"},
    {"role": "assistant", "content": "Yes, software intended for diagnosis or prediction of disease is considered a medical device under MDR Article 2(1)..."},
    {"role": "user", "content": "What classification would it have?"},
]

response = client.chat.completions.create(
    model="chatmdr-smart-openai",
    messages=messages,
    max_tokens=1024,
)

print(response.choices[0].message.content)

Using a System Prompt

You can add a system (or developer) message to provide additional instructions. These are applied on top of the built-in Raiana system prompt:

response = client.chat.completions.create(
    model="chatmdr-smart-openai",
    messages=[
        {
            "role": "system",
            "content": "Always answer in German. Focus on Class IIa devices.",
        },
        {
            "role": "user",
            "content": "What are the clinical evaluation requirements?",
        },
    ],
)

Error Handling

The API returns standard HTTP status codes. Error responses include a JSON body with a message field:

400: Bad request. Missing or invalid parameters (e.g., unknown model, empty messages, max_tokens out of range).
401: Unauthorized. Missing, invalid, or malformed API key.
402: Payment required. Insufficient token balance or no remaining questions on your subscription.
500: Internal server error. Contact support if this persists.
502: Upstream error. The Raiana backend encountered an error processing your request.

# Handle errors gracefully
from openai import OpenAI, APIStatusError

client = OpenAI(
    base_url="https://api.chatmdr.eu/v1",
    api_key="raikey_YOUR_API_KEY_HERE",
)

try:
    response = client.chat.completions.create(
        model="chatmdr-smart-openai",
        messages=[{"role": "user", "content": "What is Article 10?"}],
    )
    print(response.choices[0].message.content)
except APIStatusError as e:
    # APIStatusError carries the HTTP status code; the base APIError does not.
    print(f"API error: {e.status_code} - {e.message}")

Differences from the OpenAI Chat Completions API

The Raiana API implements a focused subset of the OpenAI Chat Completions API. The following OpenAI features are not supported:

stream: Streaming responses are not supported. All responses are returned as a single complete message.
tools / tool_choice: Function calling and tool use are not available.
response_format: Structured Outputs and JSON mode are not available. Responses are always plain text.
n: Multiple completions per request are not supported. Exactly one choice is always returned.
temperature / top_p: Sampling parameters are managed server-side and cannot be overridden.
stop: Custom stop sequences are not supported.
frequency_penalty / presence_penalty: Token penalty parameters are not available.
logprobs / top_logprobs: Log probabilities are not returned.
logit_bias: Token bias modification is not available.
seed: Deterministic sampling is not supported.
max_completion_tokens: Use max_tokens instead (range: 100–4096).
Image / audio inputs: Only text messages are supported. Multi-modal inputs (images, audio) are not available.
modalities: Only text output is supported.
prediction: Predicted Outputs are not supported.
web_search_options: Web search is not available.
store / metadata: Request storage and metadata are not supported.
service_tier: Service tier selection is not available.
reasoning_effort: Reasoning effort control is not available.

Note on system prompts: Unlike the standard OpenAI API, Raiana models come with a built-in regulatory system prompt that is always active. Any system or developer messages you send are treated as supplementary instructions — they cannot override the core regulatory behavior.

Rate Limits & Billing

Usage is tracked per API key. Your token balance is deducted based on the usage field returned with each response. The input token count includes the regulatory context that Raiana injects, so it will be higher than just your message text.
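Since billing is driven by the usage object, it can be useful to track consumption locally. A minimal sketch that tallies usage objects of the shape returned with each response:

```python
from collections import Counter

def tally_usage(usages: list[dict]) -> dict:
    """Sum the usage objects from several responses into one running total."""
    total = Counter()
    for u in usages:
        total.update({k: u[k] for k in ("input_tokens", "output_tokens", "total_tokens")})
    return dict(total)
```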

Check your remaining balance and manage your API keys in the Raiana Portal.