Solving reproducibility

Imagine if we could codify the world’s evidence base and learn what works and what doesn’t. We could have greater confidence in what works by solving the reproducibility problem. The issue isn’t really about reproducing results; we only want to reproduce them so that we can be confident a particular solution will work more than once.

But we can solve that problem in a different way. Instead of trying to prove that a program worked and can work again, we can tease out and understand the underlying DNA of the program and of programs like it. In this way we’re not reproducing a program; we’re isolating the variables that drive program results so that we can reproduce those variables.

Democratizing evaluation

Empowering all practitioners to use evaluation tools and access evidence, rather than hiring consultants or waiting for experts to tell them whether they’re effective, and leveling the playing field so that all programs can be evaluated on equal footing.

Not just because they’re big or were funded in the past.

Rationalizing the allocation of resources

Helping funders understand which programs produce the outcomes they desire and, at the same time, helping practitioners clarify the total cost of ownership of an outcome, i.e., what it really costs to produce results.

Lowering fundraising costs and reducing yield loss for funders and investors, who all too often don’t get the results they want, a shortfall that limits future investment.

Systematic learning

Discovering gaps in the evidence base so that we can invest in research where it’s really needed, not just because someone wants to do another study.

Learning across the evidence base also helps us solve the elusive external validity problem, so that we can begin to generalize results and create actionable information.

Advancing the evidence-based movement

Today, many use the term evidence-based in a binary way: a program is either evidence-based or not. But that’s problematic for three reasons.

First, just because a program couldn’t afford to hire a fancy evaluator doesn’t mean that its model is not supported by evidence, or that it doesn’t work. Second, the reality is that most programs are to some degree evidence-based, meaning certain program components may be effective at producing certain outcomes for certain beneficiaries in certain contexts, but not others.

Third, by defining programs in a binary way (this works, this doesn’t), and by analyzing programs as black boxes rather than understanding their underlying structure and design, we limit our ability to use past evidence.

And finally, the current framework of evidence-based thinking can stifle innovation. Implying that a program can’t work because it hasn’t worked before, and limiting funders’ and policymakers’ choices to proven models, is highly problematic.

We want to find ways to use knowledge and evidence to inform innovation, not stifle it.

Find out how Mission Measurement can help your organization: contact us today at mminfo@missionmeasurement.com.
