Systematic Screening and Assessment Method

As part of our work on the Disseminating CCLI Educational Innovations project, Flora McMartin suggested I read about the Systematic Screening and Assessment (SSA) Method as a theoretical framework for deciding which NSF CCLI projects might be ready for dissemination.

(A bit of background: this project’s goal is to answer the question, “How can NSF and successful CCLI grantees foster better dissemination of CCLI-developed educational innovations?” We’ve conducted a survey and held a workshop to gather information about the issues involved. We originally set out to develop a guide, but the survey and workshop have led us down a different path.)

Here are my thoughts about the method.

  • The method strikes me as a multi-stage triage process wrapped in a fancy name.
  • Translating the method to our work, the question becomes, “Can we identify all possible criteria for deciding whether an educational innovation might be effective and therefore should be disseminated?”
  • Aside: Perhaps we could adopt a couple of stages of the Evaluability Assessment Method here as well.
  • As an estimate of scale, the researchers suggest that 15–25% of the initially recommended programs (those entering the initial stages of the SSA Method) are likely to be promising (with all the caveats about what that might mean). That’s an interesting heuristic to keep in mind.
  • The method developers contend that SSA is an effective means to “translate research into practice” and that it is more cost-effective than conducting full evaluation studies of the projects.

Reference

Leviton, L. C., & Gutman, M. A. (2010). Overview and rationale for the Systematic Screening and Assessment Method. In L. C. Leviton, L. Kettel Khan, & N. Dawkins (Eds.), The Systematic Screening and Assessment Method: Finding innovations worth evaluating. New Directions for Evaluation, 125, 7–31.

1 reply
    Laura Leviton says:

    Could not resist “peeking” at what you had to say about this work. I am pleased that you believe the SSA method might offer some value. It would be good to clarify a few points, however.

    Regarding “the fancy way of setting up a multi-stage triage method. All wrapped in a fancy name.” You might reflect on why we felt the need to offer a fancy name to what we were careful to acknowledge was “merely a new combination of existing strategies.” We felt called upon to name this method, because it is so extraordinarily rare in evaluation for any such process to be undertaken. For this reason we felt compelled to encourage name recognition and stimulate discussion of this issue (see American Evaluation Association thought leaders discussion of late 2010 by Mel Mark, former AEA president, on why so many evaluands are just not worth evaluating.)

    You mention “triage methods” for evaluation and for identifying potentially effective programs, as though these methods were commonplace or well understood. Let me assure you that they are few and far between, and not systematic. After extensive querying of colleagues such as Tom Cook, Robert Boruch, and Eleanor Chelimsky (former director of evaluation at GAO), we present the only other two that we are aware of from the evaluation literature: the school improvement literature and the SWAT method developed by CDC. If you are aware of other triage methods for this purpose, it would be very interesting to know of them. It would certainly benefit my evaluation colleagues, who are not aware of them.

    Regarding our estimate of 15 to 25% of initially nominated and screened projects: this may not have much relevance for CCLI projects, because the populations of projects are so very different. Please bear in mind that our estimate derived from a brand new, poorly understood field of work with an extraordinarily limited science base. In contrast, we presume CCLI to have a longer history of development and to be better understood. This might imply a higher rate of return from a nominations process, although the CCLI field would still have the potential problems of poor implementation and unwillingness to undergo evaluation. Nevertheless, it would be an interesting empirical exercise to determine likely promise in a sample of projects from the CCLI field. Some of our stated criteria for promise would apply in that field, but others (such as reach into the targeted populations) would not.

    In another post you mention evaluability assessment as a way of triaging such projects. It is not merely “a” way; it predates all other systematic pre-evaluation methods, dating from the mid-1970s. Its merits and limitations are therefore particularly well understood. For further information, see Shadish, Cook, and Leviton, Foundations of Program Evaluation (Sage, 1991) or our review article in the Annual Review of Public Health, 2010. Trevisan is OK, but somewhat limited in presentation.

    Again, thanks for your interest in this work. I hope you will stay in touch, as it will be good to know which of these triage methods your team eventually selects, and the eventual payoff for CCLI, an important area of work.

    Laura C. Leviton, PhD
    Special Adviser for Evaluation
    The Robert Wood Johnson Foundation
    Princeton, New Jersey
