Documentation
This documentation is intentionally product-usage focused. It covers onboarding, project setup, review workflow, outputs, and deployment choices without exposing proprietary design plans, private APIs, or internal operational details.
Understand what MEDMAPPER AI STUDIO is designed to do, who it is for, and how teams typically adopt it.
Create a project, connect a source system, choose a target model, and move into schema discovery and review.
Learn how mappings, joins, evidence, validation, and approval states work together during delivery.
Compare SaaS and customer-deployed data plane options at a high level without exposing internal implementation details.
Getting started
MEDMAPPER AI STUDIO helps healthcare data teams move from source-system schemas to reviewed, validated target-model outputs. It is designed for workflows where speed matters, but so do evidence, auditability, and human review.
The platform is a fit when your team needs to map healthcare data into standard or custom target models such as OMOP, PCORnet, FHIR, or other governed analytical structures. It is also a fit when stakeholders need to inspect how suggestions were made before trusting them downstream.
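To make the idea of a governed source-to-target mapping concrete, here is a minimal sketch of what one mapping record might carry. The field names (source_column, target_field, transform, confidence) are illustrative assumptions, not MEDMAPPER's actual data model; the OMOP field shown is a real target-model example.

```python
# Illustrative only: a minimal shape for a source-to-target mapping record.
# Field names here are hypothetical and do not reflect MEDMAPPER's internal model.
from dataclasses import dataclass

@dataclass
class MappingProposal:
    source_column: str   # e.g. a column in the source system
    target_field: str    # e.g. a field in an OMOP target model
    transform: str       # intent of any conversion applied along the way
    confidence: float    # evidence-backed score in [0.0, 1.0]

proposal = MappingProposal(
    source_column="patients.birth_dt",
    target_field="person.birth_datetime",
    transform="cast to timestamp",
    confidence=0.92,
)
print(proposal.target_field)  # prints "person.birth_datetime"
```

The point of a record like this is that every suggestion carries its evidence score alongside its transform intent, so reviewers can inspect how a proposal was made before trusting it downstream.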
Create your first project
To create a project, connect a source system and choose a target model. After setup, the project moves into schema discovery, domain detection, join inference, and mapping review. The exact timing depends on the source complexity and your review posture, but the product experience stays centered on one governed workflow.
Review workflow
MEDMAPPER is designed around the idea that AI suggestions should be explainable and reviewable. The platform surfaces candidate joins, mapping proposals, and validation findings in workflows built for structured review rather than black-box automation.
Inspect compatibility, relationship plausibility, and supporting signals before accepting joins.
Review source expressions, confidence states, transform intent, and status in one dense surface.
Trace how source-to-target paths work and resolve blocking issues before delivery.
In practice, most teams use MEDMAPPER to focus human review where the risk is highest while allowing stronger evidence-backed suggestions to move faster through the workflow.
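The triage pattern described above can be sketched in a few lines. The threshold value and the suggestion fields are assumptions for illustration, not product behavior: the idea is simply that stronger evidence-backed suggestions advance automatically while weaker ones are routed to human review.

```python
# Hypothetical triage rule: suggestions with strong supporting evidence
# advance automatically; weaker ones are flagged for human review.
# The threshold and field names are illustrative, not product behavior.
def triage(suggestions, threshold=0.85):
    auto, review = [], []
    for s in suggestions:
        (auto if s["confidence"] >= threshold else review).append(s)
    return auto, review

suggestions = [
    {"name": "encounters.patient_id -> person.person_id", "confidence": 0.97},
    {"name": "labs.unit_cd -> measurement.unit_concept_id", "confidence": 0.55},
]
auto, review = triage(suggestions)
print(len(auto), len(review))  # prints "1 1"
```

Tuning a rule like this is how teams focus reviewer attention where the risk is highest without slowing down well-evidenced work.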
Outputs and delivery
MEDMAPPER produces downstream outputs only after the relevant work has been reviewed and validated. The platform is designed so delivery reflects governed project state rather than incomplete drafts.
The practical rule for users is simple: move through review and validation first, then generate and publish outputs once the project is ready for delivery.
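The delivery rule above can be expressed as a simple gate. The state names ("approved", "blocking") are assumptions for illustration; the logic just mirrors the rule that outputs are generated only from a fully reviewed, non-blocked project state.

```python
# Sketch of the delivery rule described above: outputs are generated only
# when every mapping is approved and no validation finding is blocking.
# Status and severity labels are hypothetical, not MEDMAPPER's actual states.
def ready_for_delivery(mappings, findings):
    all_approved = all(m["status"] == "approved" for m in mappings)
    no_blockers = not any(f["severity"] == "blocking" for f in findings)
    return all_approved and no_blockers

mappings = [{"status": "approved"}, {"status": "approved"}]
findings = [{"severity": "warning"}]
print(ready_for_delivery(mappings, findings))  # prints "True"
```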
Choose an LLM
When teams evaluate an LLM for MEDMAPPER workflows, the goal is not to find the most creative model. The goal is to find a model that behaves predictably in schema interpretation, produces structured outputs consistently, and fits the deployment and governance requirements of the environment.
Prefer models that follow instructions consistently, handle structured schema context well, and are less likely to invent unsupported mappings or relationships.
Choose a provider and deployment path that fits your organization’s data-handling requirements, review standards, and enterprise approval process.
In practice, many teams start by selecting the LLM provider that best fits their compliance and deployment posture, then narrow the decision by evaluating how well a short list of models performs on real schema-to-target-model review tasks.
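One way to make the shortlist comparison concrete is to measure how often each candidate model returns parseable structured output on real schema-review prompts. The sketch below is an assumption about how such a harness might look; the replies are canned strings standing in for real model responses.

```python
# Hypothetical shortlist scoring: prefer the model that most consistently
# returns parseable structured (JSON) output on schema-review prompts.
# Model names and replies are illustrative stand-ins, not real benchmarks.
import json

def structured_output_rate(replies):
    ok = 0
    for reply in replies:
        try:
            json.loads(reply)
            ok += 1
        except json.JSONDecodeError:
            pass
    return ok / len(replies)

replies_by_model = {
    "model-a": ['{"join": "a.id = b.a_id"}', '{"join": "x.k = y.k"}'],
    "model-b": ['{"join": "a.id = b.a_id"}', "I think the join might be..."],
}
best = max(replies_by_model, key=lambda m: structured_output_rate(replies_by_model[m]))
print(best)  # prints "model-a"
```

A harness like this rewards instruction-following consistency over creativity, which matches the selection goal described above.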
Deployment options
SaaS: Best when you want faster onboarding, a centralized product experience, and a direct path to evaluation for mapping and review workflows.
Customer-deployed data plane: Best when your organization needs stricter runtime boundaries while keeping a common MEDMAPPER workflow for discovery, review, and delivery.
This public documentation does not expose low-level architectural controls or private implementation specifics. For deployment evaluation, use a product demo or architecture review with your team.
FAQ
Does this documentation expose internal or proprietary details? No. It is limited to safe, public-facing product usage guidance.
Can teams use this page to get started with MEDMAPPER? Yes. This page is meant to help teams understand how to get started and how the review-driven workflow operates.
How should we evaluate deployment for a specific environment? Use the product demo and security review paths rather than relying on public documentation for environment-specific decisions.