Since the Research-tested Intervention Programs (RTIPs) website was launched in 2003, over 180 evidence-based programs have been posted. Unfortunately, we currently have no way of knowing whether users are implementing a program in its entirety or using only some of the intervention components to adapt the program to their setting and target population. In particular, this raises the question of how varied levels of fidelity to the original intervention program might affect individual outcomes. We know that implementers encounter contextual factors that affect whether an intervention program can be delivered in its entirety. These factors include the availability of trained staff, resource availability, the cost of sustaining the program, and the demographic characteristics of the target audience.
Given these factors, the effectiveness of an intervention may be quite different when it is delivered to the target population in a real-world setting. A balance is often needed between maintaining fidelity to the intervention as designed and allowing flexibility for adaptation across settings and populations. Thus, it might be helpful for the developer of an intervention program to provide guidance on which components can be omitted while still maintaining the program's fidelity.
For example, imagine an evidence-based program that aims to increase screening rates for colorectal, cervical, breast, and prostate cancers among Asian and Pacific Islander populations. The intervention is implemented by lay patient navigators from the local community, who receive initial training and then participate in quarterly trainings to further improve their skills. The program also requires ten intervention components, one of which is creating a database for patient tracking. An implementer may lack the funds to train the lay patient navigators, or the expertise to create and sustain the patient tracking database; this may force the implementer to adapt the program in order to deliver it at all. Moreover, the original setting where the intervention program was tested may have unique characteristics, leaving open the question of whether it is as effective in other settings or for other target audiences.
It is our hope that the user review feedback feature recently added to each posted program will give us a better understanding from our community of how evidence-based programs are being adapted and implemented within a given setting and for a given target population. This is an area of knowledge within Implementation Science that we believe needs to be expanded, given the many contexts and populations in which evidence-based interventions can be implemented.
We invite those of you who have implemented or adapted an evidence-based program from the RTIPs website to join the discussion and share your thoughts and experiences about the challenges of fidelity and adaptation, and what next steps we should take to support implementation.