I just got back from a great meeting with my friends at the Canadian Best Practices Initiative (CBPI) at the Public Health Agency of Canada (PHAC). PHAC is in the process of transforming its Centre for Chronic Disease Prevention and Control (CCDPC), and this meeting was an opportunity to see how the CBPI fits within the new structure. Look for good things to come out of this centre in the near future.
Like clockwork, the question was raised of whether the Canadian Best Practices Portal (CBPP) should open itself up to other, less rigorously evaluated practice examples, and I strenuously encouraged their inclusion. I have been a strong supporter of the development of the CBPP for the last five years or so, but the concept of “Best Practices” in public health has always seemed a bit slippery.
The idea of using the best available evidence is unimpeachable, of course, but, tested against questions like “for whom”, “where” and “when”, Best Practices are often exposed as fairly limited opportunities. Best Practices probably do exist, but they are probably not the broad-spectrum panacea they are often portrayed as.
The internet allows ready access to a host of Best Practice repositories and this plenitude exposes another challenge with the concept: “Best” very much depends on the person or organization in the judge’s seat. Were a given piece of practice truly the “Best”, one would expect it to propagate like a virus across multiple sites. This is rarely the case. This isn’t terribly surprising, though. Different sites have different funders and each funder has its own focus and drivers. It is interesting to consider that Best Practices might be kept from going viral precisely because the various repositories are unintentionally quarantined from one another. It seems to me that we need to start practicing a little unsafe Knowledge Exchange to get things going!
As with all contagious entities and ideas, however, one needs to be careful. We want some ideas to thrive, and we want to drop the bomb on others. The evidence-based practice movement grew in response to the tainted wells of ad hoc and/or outdated practice. While we would all agree that we want to propagate the best possible ideas, what if ad hoc and instinctive evidence is all that is available to support a particular piece of practice in a particular context? What then? By definition, we would be in possession of the best available evidence, but would we feel confident enough to inoculate our children with it?
Maybe yes and maybe no. You clearly wouldn’t want to stake your $1M public health budget on an untested idea, but it probably wouldn’t hurt to run a couple of small pilot projects to test the water. It is only through performing these natural experiments that we can truly generate the volume of evidence needed to figure out what works, for whom and in which context. But who is going to perform these experiments? Who will collect and analyze the data? The CAPTURE Project believes that, by building a web-based evaluation platform, on-the-ground practitioners will be able to gain insights about what works (and, more importantly, what probably won’t work) for them in their context. CAPTURE hopes to help engender a culture of testing and recording the adaptation of practice on the ground, so that practitioners can learn from their own work and share their findings with the broader public health community. For my money, this is just the kind of “unsafe” knowledge exchange that would help the field move to a more modern, collaborative and practitioner-focused form of knowledge engagement.