
Author
Sena Samur Duysal
Clinical Data Lead | AI-Assisted Ultrasound
Writing about clinical data operations, regulatory-ready AI workflows, and practical lessons from ultrasound-focused product development.
Blog
Long-form writing on clinical AI delivery, ultrasound workflows, and validation strategy in real-world settings.

Featured Article
February 2, 2026 · 14 min read
Many teams begin by collecting as much data as possible and then search for a claim later. In regulated clinical products, that sequence usually creates expensive rework.
A stronger approach is to define intended use, target population, and clinical context first. This gives a stable boundary for protocol design and model evaluation.
When claim architecture is explicit, the team can decide what evidence is required, what subgroup behavior must be analyzed, and which failure conditions must be monitored.
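To make that boundary concrete, the claim architecture can be written down as a structured artifact rather than a slide. Below is a minimal sketch in Python; the field names and the example ultrasound claim are hypothetical, not taken from any real product.

```python
from dataclasses import dataclass, field

@dataclass
class ClaimArchitecture:
    """Explicit boundary for protocol design and model evaluation."""
    intended_use: str                  # the clinical claim the product will make
    target_population: str             # who the claim applies to
    clinical_context: str              # where and how the device is used
    required_evidence: list[str] = field(default_factory=list)
    monitored_subgroups: list[str] = field(default_factory=list)
    failure_conditions: list[str] = field(default_factory=list)

# Illustrative (hypothetical) example of an ultrasound claim
claim = ClaimArchitecture(
    intended_use="Assist measurement of fetal biometry from 2D ultrasound",
    target_population="Singleton pregnancies, 18-40 weeks gestation",
    clinical_context="Routine obstetric exams by trained sonographers",
    required_evidence=["reader study vs. expert consensus", "multi-site validation"],
    monitored_subgroups=["gestational age bands", "device manufacturer"],
    failure_conditions=["non-standard probe orientation", "severe shadowing artifact"],
)
```

Writing the claim down this way makes the downstream questions mechanical: every evidence item, subgroup, and failure condition becomes something a protocol or monitoring plan must cover.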

January 19, 2026 · 13 min read
Even with a clear protocol, site behavior diverges. Device configuration habits, staffing changes, and local training differences all create drift.
Teams often discover this only after model training has begun, when correction is expensive and timelines are committed.
The core issue is not data quantity. The issue is whether captured data remains comparable enough to support reliable evaluation and decision making.
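One way to make "comparable enough" testable is a per-feature distribution check against a reference cohort. A minimal sketch using a two-sample Kolmogorov-Smirnov test; scipy is assumed to be available, and the alpha threshold and synthetic data are illustrative, not values from the article.

```python
# Flag sites whose feature distribution departs from a reference cohort.
import numpy as np
from scipy.stats import ks_2samp

def flag_drifting_sites(site_features: dict[str, np.ndarray],
                        reference: np.ndarray,
                        alpha: float = 0.05) -> list[str]:
    """Return the sites whose KS test against the reference is significant."""
    flagged = []
    for site, values in site_features.items():
        _, p_value = ks_2samp(values, reference)
        if p_value < alpha:
            flagged.append(site)
    return flagged

# Synthetic example: site C's configuration shifts the feature distribution
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 1000)
sites = {
    "A": rng.normal(0.0, 1.0, 500),
    "B": rng.normal(0.0, 1.0, 500),
    "C": rng.normal(0.8, 1.0, 500),   # shifted mean, e.g. a different gain habit
}
print(flag_drifting_sites(sites, reference))  # site C should be flagged
```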

December 5, 2025 · 12 min read
Annotation governance usually starts strong and then erodes as teams scale. Edge cases increase, new reviewers join, and informal interpretations spread.
The result is hidden disagreement that silently degrades training quality and eventually fragments model behavior across cohorts.
Because the drift is gradual, teams often misdiagnose the problem as model architecture weakness instead of label quality instability.
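A lightweight control is to route a shared audit set through multiple reviewers and track agreement over time, so label instability shows up as a number before it shows up as model behavior. A minimal sketch using scikit-learn's cohen_kappa_score; the audit labels and the 0.8 alert threshold are illustrative policy choices.

```python
# Track inter-reviewer agreement on a shared audit set.
from sklearn.metrics import cohen_kappa_score

def audit_agreement(labels_a: list[int], labels_b: list[int],
                    threshold: float = 0.8) -> tuple[float, bool]:
    """Return the kappa for two reviewers and whether it breaches the threshold."""
    kappa = cohen_kappa_score(labels_a, labels_b)
    return kappa, kappa < threshold

# Example: two reviewers label the same 10 audit cases (1 = finding present)
reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
reviewer_b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
kappa, alert = audit_agreement(reviewer_a, reviewer_b)
print(f"kappa={kappa:.2f}, alert={alert}")  # kappa ≈ 0.58 -> alert=True
```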

October 11, 2025 · 11 min read
DICOM metadata records acquisition context that the model implicitly depends on. Ignoring this context creates unstable training behavior.
Minor shifts in acquisition settings can alter feature distributions and reduce comparability across cohorts.
Without metadata controls, teams cannot clearly explain why validation performance changed between cycles.
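A basic control is to read the relevant tags at intake and quarantine studies whose acquisition context falls outside the approved configuration. A sketch using pydicom; the controlled tags and allowed values here are hypothetical examples of such a policy.

```python
# Compare a study's acquisition context against an allowed configuration.
import pydicom

CONTROLLED_TAGS = ["Manufacturer", "ManufacturerModelName", "SoftwareVersions"]

def check_acquisition_context(path: str, allowed: dict[str, set]) -> list[str]:
    """Return the deviations from the allowed acquisition configuration."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    deviations = []
    for tag in CONTROLLED_TAGS:
        value = str(ds.get(tag, "<missing>"))
        if value not in allowed.get(tag, set()):
            deviations.append(f"{tag}={value}")
    return deviations

# Hypothetical policy values
allowed = {
    "Manufacturer": {"ACME Medical"},
    "ManufacturerModelName": {"UltraScan 3000"},
    "SoftwareVersions": {"4.2.1"},
}
# deviations = check_acquisition_context("study/IM0001.dcm", allowed)
```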

August 24, 2025 · 10 min read
Many dashboard programs fail because they optimize for presentation quality rather than operational clarity.
Clinical AI teams need views that expose bottlenecks, quality risks, and ownership gaps quickly.
The right design principle is simple: every chart should map to a decision and an accountable owner.
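That principle can be enforced in code rather than by convention, for instance by making a chart definition invalid unless it names both. A minimal sketch; the ChartSpec fields and the example entries are illustrative.

```python
# Every chart must declare the decision it informs and an accountable owner.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChartSpec:
    title: str
    decision: str   # the decision this chart informs
    owner: str      # the role accountable for acting on it

    def __post_init__(self):
        if not self.decision or not self.owner:
            raise ValueError(f"Chart '{self.title}' lacks a decision or owner")

dashboard = [
    ChartSpec("Annotation backlog by site", "Rebalance reviewer capacity", "Data Ops Lead"),
    ChartSpec("Label rejection rate", "Trigger reviewer retraining", "QA Manager"),
]
```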

June 10, 2025 · 12 min read
Programs often move to training or evaluation with unresolved assumptions because timelines are tight and ownership is fragmented.
A readiness gate protects against this by forcing explicit confirmation of core dependencies.
It also creates accountability because every gate result can be signed off and traced to responsible roles.
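A sketch of what such a gate might look like as a data structure, with explicit confirmation and traceable sign-off; the check names, roles, and fields are hypothetical, and a real gate would persist its sign-offs rather than hold them in memory.

```python
# A readiness gate: explicit checks, each confirmed and signed off by a role.
from dataclasses import dataclass
from datetime import date

@dataclass
class GateCheck:
    name: str
    owner: str                          # role responsible for the dependency
    passed: bool | None = None          # None until explicitly confirmed
    signed_off_by: str | None = None
    signed_off_on: date | None = None

    def sign_off(self, passed: bool, by: str) -> None:
        self.passed, self.signed_off_by, self.signed_off_on = passed, by, date.today()

def gate_open(checks: list[GateCheck]) -> bool:
    """The gate opens only when every check is confirmed and passing."""
    return all(c.passed is True and c.signed_off_by for c in checks)

checks = [
    GateCheck("Claim architecture approved", "Regulatory Lead"),
    GateCheck("Site data comparability verified", "Clinical Data Lead"),
    GateCheck("Annotation agreement within tolerance", "QA Manager"),
]
checks[0].sign_off(passed=True, by="R. Lee")
print(gate_open(checks))   # False: two checks remain unconfirmed
```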
