UCSF ASTP EHR API Survey Statement B Final_for ROCIS


National Survey of Digital Health Companies

  1. Statistical Methods

 

Respondent Universe and Sampling Method 


This data collection effort targets U.S.-based digital health companies that develop products enabling human interaction with application programming interfaces (APIs) for commercial electronic health record (EHR) integration. Eligible products include provider-facing applications that access clinical data from EHRs and patient-facing applications that access clinical or non-clinical health data.


The sampling frame for a previous 2022 survey of digital health companies was constructed through systematic data scraping of public EHR app galleries and was supplemented with companies identified from additional sources. These additional sources included the 2020 CB Insights report “The Digital Health Startups Transforming the Future of Healthcare,” analysis of software reported through the ONC Health IT Certification Program, recommendations from an external Technical Expert Panel (TEP), and self-nominated companies that met eligibility criteria during survey fielding. That effort collected responses from 141 companies and provided the first national dataset on API adoption and EHR interoperability among digital health companies.


For the current survey administration, University of California, San Francisco (UCSF) will survey eligible companies that responded to the 2022 survey, along with an equal number of additional eligible companies. The list of prior respondents was updated to reflect acquisitions, mergers, or company closures. Additional companies will be randomly selected from a pool of prior non‑respondents and newly identified companies meeting the inclusion criteria. To update the sampling frame, UCSF removed inactive companies and products, accounted for changes due to mergers and acquisitions, and added newly relevant companies, including those focused on generative artificial intelligence (AI). New companies were identified by applying defined inclusion and exclusion criteria, using resources such as FDA databases, the 2024 CB Insights report, and Elion, and by leveraging the networks of project partners ScaleHealth and Clinovations.


For each new company, UCSF attempted to identify the most appropriate contact (e.g., Computer and Information Systems Managers) through web searches and direct outreach. Partner networks and members of the TEP from the previous survey were leveraged to establish warm contacts to increase the likelihood of response.


This approach ensures a balanced sample of returning respondents and new participants, enabling longitudinal analyses while expanding representation to additional companies, including those that entered the market since the prior survey.


Procedures for the Collection of Information  


UCSF developed a survey website that provides potential participants with information about the survey research goals and insights from the previous iteration, and allows companies to pre-register to receive the survey. To pre-register, companies complete a Qualtrics pre-registration survey that collects information about their product(s) to confirm that they meet the inclusion criteria, as well as contact information for the individual who will take the survey.


Initial outreach will be sent to all contacts in the sampling frame via Constant Contact 2-4 weeks before survey launch. Warm contacts will be informed in advance and will have the opportunity to share the Constant Contact email with their contacts at companies within the sampling frame. This communication will inform potential participants about the upcoming survey distribution and will provide a link to the project website for pre-registration. Using Constant Contact will also enable the identification and resolution of any invalid email addresses prior to survey dissemination.


The final survey instrument has been programmed into the online survey tool Qualtrics. This tool was used for past surveys and has strong capabilities to support complex survey design (e.g., branching logic) as well as respondent communication and tracking. UCSF tested the tool to ensure accuracy of branching and skip logic, accuracy of piped text, clarity of question display, and adherence to other survey usability guidelines. Testing was also performed by contacts at several companies. The sampling frame contact list will be loaded into Qualtrics to support communication and response tracking (i.e., by generating a unique survey link for each target respondent).


UCSF will follow its standard survey data collection methodology, modified based on any changes requested by ASTP. The survey will then be distributed via Qualtrics. Using Qualtrics functionality, UCSF will track responses and update response status in a separate Microsoft Excel spreadsheet. As surveys are completed, responses will be reviewed for completeness to ensure high‑quality data for analysis.


Methods to Maximize Response Rates and Deal with Nonresponse   

As a first line of maximizing participation, we will leverage our prior survey's TEP to inform their warm contacts of the survey and promote participation. Draft language will be provided to TEP members for them to use to reach out to their warm contacts within the sampling frame. Warm contact outreach by ScaleHealth and Clinovations will proceed in the same manner.

To supplement direct outreach, the survey will also be promoted through professional forums, industry listservs, and digital health communities. Planned distribution channels include Health Tech Nerds (HTN), Elion, and the networks of ScaleHealth and Clinovations. In-person promotion will also be conducted at relevant industry conferences (e.g., Remote Patient Monitoring Summit, Health Datapalooza), including the distribution of printed business cards with QR codes linking directly to the pre-registration website. UCSF previously used this strategy during the 2022 HLTH Conference, which resulted in additional survey completions. Respondents may elect to have their company acknowledged in public-facing reports and peer-reviewed publications. Individuals who complete the survey by a designated date will also be considered for participation in panels at national health technology conferences. The project team will identify appropriate conferences and submit proposals during the project period to provide opportunities for respondents to share experiences and perspectives related to EHR API adoption, integration, and digital health innovation.

Once the survey is disseminated, a schedule of Qualtrics-generated reminders, as described above, will be sent to non-respondents and partial respondents using varied messaging. UCSF will monitor survey activity to identify respondents who started but did not complete the instrument and will follow up individually to determine whether clarification or support is needed to facilitate completion. After four to six Qualtrics-generated nudges, the UCSF study team, ScaleHealth, and Clinovations will initiate additional outreach. The contact database will be reviewed collaboratively and updated regularly to identify additional connections. Broader outreach through industry networks and TEP members will also occur throughout the survey fielding period. Outreach messages will be tailored by recipient type (e.g., previous survey participant, new contact, warm referral) to increase the likelihood of engagement. The data collection period is expected to be six months, consistent with prior survey cycles. Personalized outreach and the use of existing professional networks have previously proven effective, and the study team will continue this approach by identifying direct points of contact and leveraging partner relationships to facilitate engagement with eligible organizations.

To address item-level nonresponse, UCSF will track the frequency of skipped or incomplete responses and flag any items with substantially lower response rates (i.e., ≥10 percentage points below module averages). Where appropriate, follow-up will be conducted to clarify responses or encourage supplemental completion. These strategies have previously enabled UCSF to achieve response rates significantly above industry norms and are expected to be similarly effective in the current effort.
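The flagging rule above can be sketched in code. The following is a minimal illustration, not the study team's actual tooling; item names, module names, and counts are hypothetical, and only the ≥10 percentage-point threshold comes from the text.

```python
# Illustrative sketch of the item-level nonresponse flagging rule:
# flag any item whose response rate is >= 10 percentage points below
# the average response rate of the items in its module.
# All item/module names and counts below are hypothetical.

def flag_low_response_items(items, threshold_pp=10.0):
    """items: list of dicts with keys 'item', 'module', 'answered', 'eligible'."""
    # Response rate per item, as a percentage of eligible respondents.
    for it in items:
        it["rate"] = 100.0 * it["answered"] / it["eligible"]

    # Average item response rate within each module.
    by_module = {}
    for it in items:
        by_module.setdefault(it["module"], []).append(it["rate"])
    module_avg = {m: sum(rates) / len(rates) for m, rates in by_module.items()}

    # Flag items falling threshold_pp or more points below their module average.
    return [
        it["item"]
        for it in items
        if module_avg[it["module"]] - it["rate"] >= threshold_pp
    ]

example_items = [
    {"item": "Q1", "module": "API use", "answered": 95, "eligible": 100},
    {"item": "Q2", "module": "API use", "answered": 90, "eligible": 100},
    {"item": "Q3", "module": "API use", "answered": 70, "eligible": 100},
]
print(flag_low_response_items(example_items))  # → ['Q3']
```

Here the module average is 85%, so Q3 (70%) sits 15 points below and is flagged, while Q1 and Q2 are not.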

Tests of Procedures or Methods to be Undertaken 

UCSF and ASTP pilot tested the survey in collaboration with project partners and subject matter experts. The goals of the pilot testing were to estimate the time required for completion of the survey (for OMB burden estimate), identify any potential concerns that could impede survey completion (e.g., selection of mandatory questions), verify the branching logic, and obtain input on the clarity of the instrument.

A total of seven digital health companies participated in the pilot. These companies were selected to represent a mix of prior survey participants and new entrants to the digital health ecosystem. Participants were identified through the project's TEP and partner networks, including ScaleHealth and Clinovations.

Each pilot participant was provided with the survey instrument and instructions to complete it, record the time required, and note any items that were confusing or burdensome. Following completion, UCSF conducted individual interviews with respondents in cases where additional clarification was needed. The goals of the interview were to ensure: (1) respondents understood the questions as intended, and (2) the questions were written in a manner that respondents could answer. Revised survey items were shared with pilot testers to solicit additional input on proposed changes and ensure clarity was improved. After each round of pilot testing, UCSF met with ASTP and project partners to review findings and discuss recommended modifications to the instrument.

Survey Content


The previous survey in 2022 assessed the early impacts of the 21st Century Cures Act regulations and evaluated the state of API adoption and interoperability among digital health companies integrating with commercial EHRs. It also identified several ongoing challenges, including high fees, limited availability of realistic clinical testing data, and gaps in the availability of key data elements. The current iteration will generate the first longitudinal dataset to track changes in API adoption and interoperability over time. The survey will evaluate the use of standards-based versus proprietary APIs, examine company experiences with specific EHR vendors, and identify enablers and barriers to API-based commercial EHR integration. The resulting data will provide insights to inform federal health IT policy and support efforts to enhance interoperability and improve API implementation across the digital health ecosystem.


The current survey instrument was created following a comprehensive review of the prior survey and consultation with project partners and subject matter experts. Items from the prior survey were evaluated to determine their continued relevance and utility. Questions were removed if they no longer reflected current priorities, did not generate valuable data for analysis or derivative products, had low response rates, or were associated with high respondent drop-off. A preliminary analysis of item-level missingness and drop-off informed these decisions.


New questions were added to capture emerging topics of interest. For example, additional items were developed to assess integrations supporting AI-enabled use cases. Feedback from ASTP and other subject matter experts was incorporated to further refine the survey’s content and structure. This process ensured the revised instrument retained high-value questions, incorporated new areas of relevance, and removed low-value or burdensome items to improve response quality and reduce respondent burden.


Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data 

The information for this study is being collected by the UCSF Division of Clinical Informatics and Digital Transformation (DoC-IT), on behalf of ASTP. With ASTP oversight, UCSF is responsible for the study design, instrument development, data collection, data analysis, and report preparation. The survey instrument and plans for statistical analysis were developed by Dr. Benjamin Rosner and Dr. Julia Adler-Milstein, in collaboration with ASTP and project partners. The staff team is composed of Dr. Benjamin Rosner (Principal Investigator), Paige Welikson (Project Manager/Research Assistant), Aditi Sriram (Project Manager), and Christopher Toretsky (Data Analyst). Primary project contact information is provided below.

Name                  Email                 Number
Dr. Benjamin Rosner   [email protected]   415 502-1371
Paige Welikson        [email protected]

