Adaptive Software as Medical Devices (SaMDs) play an increasingly critical role in clinical settings, assisting physicians with illness detection, diagnosis, and analysis. The use of Artificial Intelligence/Machine Learning (AI/ML) techniques, such as deep learning and neural networks, lends adaptive SaMDs unparalleled analytical power, but not without risks. Adaptive SaMDs are typically “black-box,” meaning that they process data in ways that prevent anyone from determining how outputs were rendered. “Transparency,” in the form of explainability, is frequently raised in policy discussions as a means of tracking when an SaMD has erred in computing its outputs. The FDA, seeking to uphold safety and efficacy, has recently released preliminary proposals that emphasize pre-market transparency, specifically regarding adaptive SaMDs’ internal systems and operations. This Article argues that although some transparency is important to establish initial trust in an SaMD, a regulatory focus on transparency approaching full explainability is nonetheless misplaced. The FDA should instead focus on developing a resilient and secure data-sharing infrastructure that supplies the high-quality data inputs SaMDs need to form complete and accurate medical predictions. Furthermore, the FDA should invest in detailing a robust post-market surveillance system that eases and incentivizes data volunteering.


  • Subject
    • Intellectual Property Law

    • Medical Jurisprudence

    • Science and Technology Law

  • Journal title
    • Boston College Intellectual Property and Technology Forum

  • Volume
    • 2023

  • Pagination
    • 1-21

  • Date submitted
    • 18 May 2023