
Let's Discuss: Are Learning Collaboratives Effective in Implementing Interventions for Obesity and Tobacco Dependence to Reduce Early Mortality in People with Mental Illness?

Content on this page is provided for reference purposes only. It is no longer maintained and may now be outdated.

For the July Advanced Topics in Implementation Science webinar, we were pleased to be joined by Dr. Stephen Bartels from The Dartmouth Institute. Steve's presentation, "Are Learning Collaboratives Effective in Implementing Interventions for Obesity and Tobacco Dependence to Reduce Early Mortality in People with Mental Illness?", described health promotion interventions for obesity and tobacco dependence in persons with mental illness and touched on the use of health coaches, mHealth, and social media. He also shared results of a statewide learning collaborative, followed by a description of an ongoing multi-state randomized trial comparing learning collaboratives to targeted technical assistance in the implementation of an evidence-based health promotion program, including organizational-, provider-, and patient-level outcomes. We would love to have your thoughts about next steps and opportunities to use learning collaboratives as an implementation strategy, as well as your responses to the following questions:

• What are the most pressing scientific questions about the use of learning collaboratives?

• Are there specific conceptual models for D&I that map onto learning collaboratives as an implementation strategy?

• Have you used learning collaboratives in your own research or within practice? What has been your experience with them?

We look forward to continuing the discussion on R2R!


Posts/Comments


For those who missed the live session, here is the archive. We hope you will view it and then engage in the discussion and share your thoughts and comments here on R2R.

 


Collaboratives and My Transformation

I have worked on quality improvement for a long time and participated in IHI's first implementation of learning collaboratives. I strongly supported the idea from the start because it created an environment where organizations could work together to find better ways to implement change. Our research center then became the National Program Office for the Robert Wood Johnson Foundation's Paths to Recovery program, which was intended to reduce time to treatment and increase admissions and retention in treatment. We developed the NIATx approach, a simplified approach to quality improvement based on decades of research into organizational improvement in other industries [1]. NIATx has six key principles [2][3]:

1. Deeply understand your customer;

2. Choose a project that will help the CEO accomplish her goals for the organization;

3. Appoint a powerful and widely respected change leader;

4. Get ideas for improvement from outside the organization (and preferably outside the industry);

5. Conduct a series of rapid-cycle pilot tests and refinements; and

6. Measure just one or two things, but measure them well (niatx.net).

Over the next several years the NIATx approach was implemented in more than 3,500 addiction treatment agencies, and learning collaboratives were an essential part of the NIATx operation.

Because of its apparent success, the National Institute on Drug Abuse funded a grant to empirically identify the essential ingredients of NIATx. The study was a cluster-randomized trial that assigned 201 addiction treatment agencies in five states to four different components of quality improvement: 1) learning collaboratives, 2) monthly conference calls where quality improvement experts gave advice on how to effectively pursue QI, 3) brief coaching (one site visit followed by monthly phone calls between a consultant and the organization's improvement leadership), and 4) a combination of all of the above. We monitored the implementation over 18 months, encouraging improvement efforts in 6-month blocks (sequentially focusing on time to treatment, admissions, and retention in treatment). Site visits, surveys, interviews, and empirical data were used to evaluate progress.

I was confident that either the learning collaboratives or the combination would produce the best results. I was frankly devastated to find that the learning collaboratives did not do well at all. Brief coaching blew the other three arms of the trial away in both reducing time to treatment and increasing the number of patients served. None of the interventions led to significant changes in retention. The combination of all interventions came in second but was much slower to produce improvements [4].

Why did coaching work, and collaboratives not work nearly as well? After regaining my composure, we looked carefully at what we had learned. First, I was impressed with how little coaching was really given: just one day-long site visit by someone with expertise in the processes to be improved, followed by monthly phone calls built around solving the problems that were getting in the way of success (calls in between the monthly ones were rare). Periodically, because the coaches knew what was going on in the other 49 agencies, a coach would link up agencies by phone to talk about how to solve a common problem. Agencies were motivated to show progress because the coach would be calling and would share progress notes during the coaches' own weekly calls.

Conversely, the get-togethers often demanded by collaboratives required travel and days away from work. The people who went were often a small subset of the folks working on improvement, so the "haves and have-nots" were split further apart. And the meetings were held far enough apart (every six months) that problems tended to arise before or after them, and were rarely addressed at the time of need.

This was only one trial, and our definitions of coaching and collaboratives could be very different from those practiced elsewhere. That is a weakness of any randomized trial. At the same time, I have heard a lot of excitement about collaboratives and seen some quasi-experimental research, but nothing of this scope. In a sense I want to be proven wrong, but now that the study is over and I have thought about it deeply, I am less impressed with the collaborative idea than I used to be, and a lot more impressed with the idea of simply using expert coaches in a very efficient, low-cost way.

 

Dave Gustafson, PhD

 

[1] Gustafson, D.H., & Hundt, A.S. (1995). Findings of innovation research applied to quality management principles for health care. Health Care Management Review, 20(2), 16-33.

[2] McCarty, D., Gustafson, D.H., Wisdom, J.P., Ford, J., Choi, D., Molfenter, T., Capoccia, V., & Cotter, F. (2007). The Network for the Improvement of Addiction Treatment (NIATx): Enhancing access and retention. Drug and Alcohol Dependence, 88(2-3), 138-145. PMCID: PMC1896099

[3] Gustafson, D.H., Johnson, K.A., Capoccia, V., Cotter, F., Ford II, J.H., Holloway, D., Lea, D., McCarty, D., Molfenter, T., & Owens, B. (2011). The NIATx Model: Process improvement in behavioral health. Madison, WI: University of Wisconsin-Madison.

[4] Gustafson, D.H., Quanbeck, A.R., Robinson, J.M., Ford II, J.H., Pulvermacher, A., French, M.T., McConnell, K.J., Batalden, P.B., Hoffman, K.A., & McCarty, D. (2013). Which elements of improvement collaboratives are most effective? A cluster-randomized trial. Addiction, 108(6), 1145-1157. PMCID: PMC3651751 

Thank you for your thoughtful

Thank you for your thoughtful post, Dave! I have to mull this over further, but I wonder if your collaborative could have functioned better if the coaches' roles were different. I do not know enough about the project to make that...

Communities of practice, particularly in the virtual space, are increasingly being used by government agencies to share knowledge, tackle problems, and interact with partners, grantees, and the public across geographic locations. NCI developed this Research to Reality community of practice (R2R) with specific goals in mind:

  • to engage practitioners and researchers in an ongoing dialog,
  • to build capacity for evidence-based program planning, and
  • to foster collaborations that address the problem of dissemination and implementation.

Indeed, as we sat around the table envisioning the future, NCI imagined that a virtual community of practice could facilitate the authentic engagement of researchers and practitioners necessary to move evidence-based programs into action.

I contend that R2R has been successful in many ways:
  • the community has attracted a robust membership (2500+) from many disciplines.
  • practitioners and researchers regularly join monthly cyber-seminars and are (very) eager to showcase their work in that forum.
  • anecdotal stories and discussions posted on R2R demonstrate its potential to attract the right members and conduct capacity-building activities despite dwindling funding.

And yet I struggle (programmatically) to better understand how to leverage the current site traffic to drive community engagement. I was intrigued by the original webinar that started this conversation, but suspect that we are all, to one degree or another, saying the same thing: learning communities and collaboratives seem intuitively well-positioned to bridge the research-to-practice gap, but I do not think any of us has the best sense of how to optimize that opportunity.

I would be very interested in hearing the thoughts of others.

Margaret


And yet I struggle (programmatically) to better understand how to leverage the current site traffic to drive community engagement. I was intrigued by the original webinar that started this conversation, but suspect that we are all, to one degree or another, saying the same thing: learning communities and collaboratives seem intuitively well-positioned to bridge the research-to-practice gap, but I do not think any of us has the best sense of how to optimize that opportunity.

As a specialist in online communities and as someone who is currently working with several communities of practice, I can say that this tends to be a common problem. There are a number of factors at play.

Converting Visitors to Community Members

Many of the communities of practice I work with have coupled a private, online community of practice with some kind of expert content marketing site. The site should visibly surface clear calls to action for a visitor to join the community: not simply a generic appeal to "join" or "start a conversation," which implies little value, but an invitation to contribute expertise or ask a question on a specific topic. The more specific, the better. Ideally, the call to action (CTA) itself should be accompanied by a preview of an interesting in-community conversation about the topic of your evidence-based article. The user journey starts with a social media promotion (a tweet, etc.) that leads a visitor to your evidence-based article. The article ends with an invitation to continue the discussion by joining the online community, previewing the titles of relevant discussion threads. Clicking on a discussion thread or a CTA button takes the visitor to the discussion or to your community's registration page.

Naturally, you should be tracking this process as closely as possible using Google Analytics and/or your platform's reporting tools.
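One concrete way to support that tracking (a minimal, hypothetical sketch, not something described in the original post) is to tag each CTA link with standard UTM parameters so that Google Analytics can attribute community registrations back to the article and promotion that drove them. The URL and campaign names below are placeholders.

```python
# Hypothetical sketch: tag community call-to-action links with UTM parameters
# so analytics can attribute sign-ups to the promotion and article that drove them.
from urllib.parse import urlencode

def tag_cta_link(base_url: str, source: str, medium: str, campaign: str) -> str:
    """Append standard UTM parameters to a call-to-action (CTA) URL."""
    params = urlencode({
        "utm_source": source,      # where the click came from, e.g. "twitter"
        "utm_medium": medium,      # channel type, e.g. "social" or "article_cta"
        "utm_campaign": campaign,  # the article or discussion topic being promoted
    })
    return f"{base_url}?{params}"

# Example: the registration CTA at the end of an evidence-based article
# (the registration URL below is a placeholder, not a real community site)
print(tag_cta_link(
    "https://example.org/community/register",
    source="twitter",
    medium="social",
    campaign="learning-collaboratives-article",
))
```

Each step of the journey (promotion, article, CTA click, registration, first post) can then be read as a simple funnel in your analytics or platform reports.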

Showing Immediate Value

Getting someone into the community is only half the battle. Once they arrive, there should be extremely clear signposts about what they can do within the first few minutes of their visit. Avoid the temptation to use the large banner graphics at the top of the page that so many marketing agencies will try to sell you. Those are for content marketing sites. Communities are for driving meaningful discussion and user-generated content.

Your new member should be able to see a couple of things right away. Where is the first place to post? Is there a welcome thread where new members go to introduce themselves and talk about what they are working on? Are there materials clearly linked and visible ("above the fold," as they used to say in the days of newspapers; in web terms, links that are immediately visible without scrolling) that show highly relevant information or an easy way to browse topics critical to your field? Are the most active discussions bubbled to the top? (They ought to be.) Finally, are there resource links to Frequently Asked Questions and background materials for people who may be new to the profession?

Building Trust and Relationships

Many organizations running online communities of practice get nervous about letting the community "admin" have any presence at all. After all, the community manager often isn't a specialist or practitioner, so why should that person be contributing? In our firm's experience, this is contrary to best practice. There is a real need for someone in the community who is a full-time resource dedicated to fostering and facilitating communication among members.

This individual opens new threads for discussion as appropriate and provides an organizing and stabilizing influence on the online experience. What is important is for this individual to demonstrate respect for the expertise of practitioners and to establish credibility, if not personal expertise, in the subject matter: that means calling on other members of the community to answer questions or provide help and guidance rather than trying to answer every question themselves.

Without a dedicated and visible community manager to curate, moderate, and build trust, an organization often finds itself disconnected from its own community. Left to grow on their own, communities of practice often founder and struggle to provide value for those who join. Simply put, online communities almost never get off the ground without the help of a dedicated and visible community manager.

Creating a Sense of Community

Finally, the community of practice needs to establish its own sense of community so that members feel that they belong, that they are making unique contributions, and that they are recognized and appreciated for their expertise. This is a process, and it requires some repetition and regularity: community member spotlights, community news, guest posts by notable and influential community members, and special activities and events. For communities of practice in particular, having a shared project (an e-book, a research project, etc.) can provide a focus, drawing members together into a common endeavor that produces an artifact of lasting value.

We talk about many of these challenges on my firm's FeverBee Experts forum. It's free to join, and thousands of professional community managers share their experiences and resources there each week. I invite you to share your questions there as well for additional perspective.


Thanks to all of you for these tremendously interesting comments. I am especially intrigued by Dave's comments about learning collaboratives and the findings of the RCT. As far as I know, it is one of very few trials of its kind in behavioral health or substance use, perhaps the only one.

My colleagues and I have both reviewed the literature on QI collaboratives in health and in mental health and completed a pilot study of an LC model tailored to the public child mental health service context in our state (focused on implementation of EBPs). The LC model was derived from the IHI Breakthrough Series (BTS) and the theoretical literature on QI collaboratives. We streamlined activities as much as possible to address the challenges many sites face in facilitating full participation from staff members, especially those on the front line.

A few things have struck me as we have done this work.

First, in conducting the literature reviews themselves, it was no surprise to find many different LC models, varying in their dose, content, and target outcomes. While there are positive findings in many domains, it has been difficult to discern which components have contributed to those outcomes. In addition, relatively few studies have included randomization.

Second, in the behavioral health context, there is a great need for additional studies with control groups. The work by Gustafson and colleagues described above is critical, and we need more studies like it. In addition to being able to distill the active intervention components, we need these studies because the context, innovations, data infrastructure, staffing structures, and outcomes are different from those in healthcare more broadly.

Third, while our pilot study did provide some important signals that LC sites performed better than control sites on implementation outcomes, our study was not randomized. We also made several observations that are in line with Dave's comments. We were quite struck by the need for occasional targeted, tailored individual site coaching and problem-solving. While we did not study this specifically, our experience of the process suggested this was quite important and that, indeed, a little went a long way. The in-person meetings were very well received and were a wonderful forum for collaboration and learning. However, it was a challenge for frontline team members to participate, and I would imagine that, if this were scaled up, it would favor the better-resourced agencies.

Lastly, I would love to see more cost analyses of some of the competing intervention frameworks. We developed a growing sense that this was a costly endeavor that may not yield robust enough differences compared with less costly individual expert coaching and phone consultation approaches. Perhaps the cross-site learning could take place through other networking opportunities or formats.

In sum, our experience also led us to temper our enthusiasm for the use of LC models in the public sector, and we continue to need more research that tests the LC and alternate coaching models.  I would love to hear others' experiences.

Erum Nadeem, Ph.D.


Thanks so much to Dave, Margaret, Todd and Erum for sharing your experiences and your wisdom on learning collaboratives, which we know remain popular and relatively understudied.  As I digest what you've written so far, I'm encouraged by your insights in interpreting what may have led to positive results and where the design and execution of learning collaboratives have fallen short.

In a sense, I see a similar chord struck across a number of our implementation strategies: we hypothesize that a cohesive set of components should promote implementation and will outperform an alternative, but we may not go the further step of thinking more mechanistically about how the strategy would work.

In the clinical trial space, there has been a fair amount of discussion about mechanisms of action, particularly for complex interventions (which describes many implementation strategies), and I wonder to what degree those experienced in testing learning collaboratives have mapped out what those mechanisms might be. To make matters more complicated, I would guess that a learning collaborative may have multiple potential mechanisms, each of which may be active or absent for a given member. So we may be challenged to answer not "did it work, and how, overall?" but "did it work, and how, for each of a range of diverse participants?"

Any thoughts about how we should design our tests of learning collaboratives to get a finer read on how and why they contribute to implementation outcomes, and how the experiences of individuals participating in collaboratives may be quite different from one another? 

We so much appreciate all of your contributions to our thinking on this.

-David


The issue of learning collaboratives is very important.  

In response to Erum's, Margaret's, and David's notes: yes, more research is needed - sounds like a familiar line ;-)

One of the common problems with any RCT is that people will say that the way X ran an intervention is not the way we ran ours.  And that is often correct. 

In an ideal world, a summative evaluation would cover all possible variants of a learning collaborative. While that is not possible, we might be able to get part of the way there. To do so, we need to develop a way to describe different versions of learning collaboratives so that people can see where their approach fits in. Then we might be able to run studies comparing different approaches to collaboratives.

I am not sure what dimensions to use, but the following might be a start (one way to capture them is sketched after the list):

1) Number and type of agencies involved - ours was 50 in each arm, but there were big differences in size and governance.

2) Frequency of meetings - we had only 3 over 18 months.

3) Number of people from each agency who come to the meetings - typically it was just 2 or 3 per agency.

4) What product was expected? - we had specific charges (e.g., reduce time from first request to first treatment).

5) Length of the collaborative - ours was 18 months.

6) What happens after the collaborative ends? - we just said thank you.

7) What data were provided to and by the collaborative?

8) What was the background of the coaches? - some were experts in the topic; some were experts in QI.

9) What topic is being addressed? - ours was addiction treatment.

10) What is the involvement of coaches during and between meetings? - in the collaborative-only arm, the coaches had no role after the meetings.

11) What else?
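To make that descriptive framework concrete, here is a rough, hypothetical sketch (in Python; not part of the original discussion) of how these dimensions could be captured as a structured record so that different collaborative variants can be compared side by side. The class and field names are illustrative; the example values are taken from the study details described above, and the data dimension is left unspecified because the post does not give a value for it.

```python
# Hypothetical sketch: a structured descriptor for learning-collaborative (LC)
# variants, based on the dimensions listed above. Names are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class LearningCollaborativeProfile:
    n_agencies: int                   # 1) number of agencies involved
    agency_mix: str                   #    type/size/governance differences
    n_meetings: int                   # 2) frequency of meetings
    attendees_per_agency: str         # 3) who attends the meetings
    expected_product: str             # 4) what product was expected
    duration_months: int              # 5) length of the collaborative
    after_collaborative: str          # 6) what happens after it ends
    data_exchanged: str               # 7) data provided to and by the collaborative
    coach_background: str             # 8) topic experts vs. QI/facilitation experts
    topic: str                        # 9) topic being addressed
    coach_role_between_meetings: str  # 10) coach involvement between meetings

# Example populated with values reported in this thread for the LC arm
niatx_lc_arm = LearningCollaborativeProfile(
    n_agencies=50,
    agency_mix="big differences in size and governance",
    n_meetings=3,
    attendees_per_agency="typically 2 or 3 per agency",
    expected_product="reduce time from first request to first treatment",
    duration_months=18,
    after_collaborative="none; 'we just said thank you'",
    data_exchanged="not specified in the post",
    coach_background="some topic experts, some QI/facilitation experts",
    topic="addiction treatment",
    coach_role_between_meetings="no role in the collaborative-only arm",
)

print(asdict(niatx_lc_arm))
```

Profiles like this could be tabulated across studies so reviewers can see which configurations of a collaborative have actually been tested.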

 

We did keep track of costs (Mike French guided that work). The effectiveness-to-cost ratio was far better for coaching than for any other option.

 

I should note that, before running our study, we interviewed people who were in other learning collaboratives. There was a lot of enthusiasm when the collaboratives started, but it diminished over time. The most common complaint was that the people and agencies were simply dropped after the collaborative ended: "They just dumped us." There was little if any follow-up, and no long-term expectations or commitment. Unfortunately, we did the same thing.

 

Another point: in our study we had a lot of coaches. I don't remember how many, but it might have been 25 or so. You could describe them in a lot of ways, but one distinction is that coaches were either topic specialists or facilitation specialists. An informal comparison led us to hypothesize that coaches who knew the topic (e.g., had actually worked in a treatment agency and dealt with the problem) were better coaches than ones who were great at process improvement or facilitation. We did not reach a definitive conclusion, however.

 

Our mechanism of action was based on a literature search that Ann Hundt (a PhD student at the time) and I did.* We did not look at health in particular. Rather, we went into decades of innovation literature and sought empirical studies that went across industries and that, at minimum, compared the presence and absence of factors in successful vs. less successful innovating organizations. (A lot of studies just did regressions to identify the characteristics of successful organizations, even though unsuccessful organizations could have been good at the same characteristics.) Just six factors kept cropping up:

1) Deeply understand what it is like to be the customer of the organization. (More important than all the other factors together.)

2) Have an irresistibly influential change leader who is deeply respected by management and staff alike.

3) Be sure the innovation helps senior leaders accomplish their key goals for the year.

4) Get ideas for innovation from outside the industry (not from healthcare if you are from healthcare).

5) Try out and improve the innovation in cycles of rapid-fire pilot tests (try it Monday; improve; try it again next Monday).

6) Measure only one or two things, never more (otherwise everything will be about data and not change).

 

So NIATx was a pretty simple strategy, but one that 3,500 addiction treatment agencies claim they adopted. Probably a gross exaggeration.

"That’s all folks".

* Gustafson, D.H., & Hundt, A.S. (1995). Findings of innovation research applied to quality management principles for health care. Health Care Management Review, 20(2), 16-33.