
Are We Measuring Impact that Matters? A New Conversation About Impact for Non-Profits and Their Partners

[Measuring What Matters series]

Most non-profits in Asia know how to count. They count participants, workshops, loans disbursed, and children reached. What is harder — and rarer — is measuring whether any of it is actually changing something. This series is an invitation to corporate partners, CSR teams, non-profit leaders, and board members to think about that gap together, and do something about it.


Maya has been running the same program for three years. Her organisation trains women in digital financial literacy across three provinces in the Philippines — a real need, a thoughtful curriculum, a dedicated team. Every December she produces a donor report. It shows the numbers clearly: 847 women trained, 23 workshops delivered, an attendance rate of 94%.


Her CSR contact at the corporate sponsor reads the report, marks the grant as successful, and schedules the next funding conversation. Both parties feel good about it. And then, in a team debrief in January, someone asks a quiet question: “Are those women actually doing anything differently with their finances six months later?”


The room goes still. Nobody knows. It was not something anyone measured.


Maya’s situation is not unique. The tension it reveals — between measuring activity and understanding change — is one of the most consistently documented challenges in the non-profit sector globally. Research from Bridgespan, the Stanford Social Innovation Review, and the Harvard Business School’s work on non-profit effectiveness all point to the same structural pattern: organisations become skilled at producing activity reports, and funding relationships — often without anyone intending this — reward that skill more than they reward genuine insight into whether change is happening. There is good reason to think this dynamic is at least as pronounced in Asia, where measurement infrastructure is more constrained and the non-profit sector is younger in many markets.


This post is the beginning of a conversation about why that pattern exists, what it costs, and what a different approach might look like — one that is practical for resource-constrained organisations, and honest about the tensions involved. We are not presenting a finished framework. We are asking whether this is a problem you recognise, and whether you would like to work through it with us.


The question is not whether your organisation is measuring. Almost certainly it is. The question is whether what you are measuring is connected to the change you are actually trying to create.


The Activity Trap: Why We Count What We Can Count

There is a structural reason that non-profits measure activities rather than outcomes, and it has very little to do with competence or intention. It is about incentives.


The incentive at the centre of this pattern is the funding cycle. A 2012 analysis in the Stanford Social Innovation Review found that ‘most grants are simply too small and short-term’ for organisations to build serious impact measurement capacity, and that this short-termism actively undermines grantees’ ability to demonstrate what they have actually changed. Corporate CSR grants, foundation grants, and government-funded programs across Asia commonly follow annual cycles: a grant is made, a report is due, a renewal decision follows. Within that window, an organisation can credibly show that it ran workshops, enrolled participants, and delivered services. What it cannot credibly show, within twelve months, is whether it changed the trajectory of a woman’s economic life — because that kind of change takes years to show up, and requires follow-up and measurement infrastructure that annual grant cycles rarely budget for.


So organisations measure what the cycle rewards. They count inputs (money spent, staff hours) and outputs (sessions delivered, people reached). They learn to present these numbers fluently and confidently. And because everyone in the room — funder and grantee alike — has agreed to evaluate the relationship using these measures, the activity report becomes not just a funding requirement but the shared language of impact.

The problem is that activity and impact are not the same thing, and over time the gap between them becomes costly in ways that are hard to see from inside a twelve-month reporting cycle.


What The Activity Trap Actually Costs

For non-profits: you get very good at doing things and less certain about whether those things are working. Resources continue flowing to programs that may have stopped producing meaningful change. Teams lose the habit of asking uncomfortable questions about effectiveness because the funder is not asking them either.


For CSR departments and corporate funders: reporting may look strong on paper, but boards and stakeholders increasingly expect to see what an investment actually changed. ESG reporting requirements are tightening across Singapore, Hong Kong, and Japan, and investor and regulatory pressure is building in other markets. Activity counts are no longer enough when stakeholders ask for evidence of social outcomes, not just a list of activities funded.


For the communities being served: programs that are not designed to measure whether they work are also less likely to improve when they are not working. The women in Maya’s financial literacy program may be leaving workshops without lasting behaviour change — and without a measurement system, no one will know until it is far too late to adjust.


We want to be honest about the gray area here. Measuring outcomes rigorously is genuinely difficult and genuinely expensive. Not every non-profit has the capacity to run longitudinal studies or collect complex behavioural data. There is a real tension between what evaluation science recommends and what a five-person organisation running programs in three provinces can actually do.


We are not arguing for perfect measurement. We are arguing for better measurement — and specifically for a shift in what questions get asked first, before a program is designed, rather than after a report is due.


Borrowing from the Boardroom: What OKRs and KPIs Can Offer (If We Translate Them Properly)

If you work in a corporate CSR department or sit on a non-profit board with a business background, you may already be thinking in frameworks like Objectives and Key Results (OKRs) or Key Performance Indicators (KPIs). OKR adoption has grown rapidly in corporate settings globally — Asia Pacific is currently the fastest-growing region for adoption of OKR concepts and software, with particular uptake in technology, finance, and multinational teams across Singapore, India, and Southeast Asia. These tools exist precisely to solve a problem the non-profit sector also faces: how do you know whether you are moving toward the thing that actually matters, rather than just doing a lot of things?


The challenge is that these tools were designed for contexts where the outcomes are financial, the measurement cycles are quarterly, and the definition of success is relatively unambiguous. Social change is harder to define, slower to show up, and often contested in ways that quarterly revenue figures are not. Transplanting OKRs directly from a tech company to a community development organisation without translation tends to produce either useful rigour or demoralising bureaucracy — and it is not always obvious in advance which one you will get.


But the underlying logic of these frameworks is genuinely useful, and worth understanding clearly before deciding how to adapt it.


What the framework logic actually is

An OKR, at its simplest, is a structured way of answering two questions: ‘What do we most want to achieve?’ (the Objective) and ‘How will we know whether we are getting there?’ (the Key Results). The second question is the important one. It forces an organisation to name, in advance, the specific signals that would tell them change is happening — not just that they have been active.


Applied to social sector work, this reframe is powerful. Instead of a program objective that reads ‘deliver financial literacy training to 800 women,’ an OKR-style framing asks: what would actually be different in those women’s financial lives if the training worked? If the answer is ‘they would be saving consistently, reducing high-interest debt, and making more financial decisions independently,’ then those become the Key Results — and measuring them, however imperfectly, is what the program is actually for.
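For readers who think in code, that reframe can be sketched as a tiny data structure. This is a hypothetical illustration only: the `KeyResult` and `Objective` classes and every number below are invented for this post, not drawn from Maya’s actual program.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    """A specific, observable signal that change is happening."""
    description: str
    baseline: float  # measured value before the program
    target: float    # value that would indicate success
    current: float   # latest measured value

    def progress(self) -> float:
        """Fraction of the distance from baseline to target covered so far."""
        span = self.target - self.baseline
        return 0.0 if span == 0 else (self.current - self.baseline) / span

@dataclass
class Objective:
    """The change the program exists to create, with its evidence signals."""
    statement: str
    key_results: list[KeyResult] = field(default_factory=list)

    def overall_progress(self) -> float:
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

# Hypothetical outcome framing of a financial literacy program:
# the objective names the change, the key results name the evidence.
objective = Objective(
    statement="Women make independent, informed financial decisions",
    key_results=[
        KeyResult("Saving consistently at 6-month follow-up", 0.20, 0.60, 0.35),
        KeyResult("Reduced reliance on high-interest debt", 0.10, 0.40, 0.25),
        KeyResult("Report deciding on household finances independently", 0.30, 0.70, 0.50),
    ],
)
print(f"Overall progress: {objective.overall_progress():.0%}")  # → Overall progress: 46%
```

Note that ‘deliver 23 workshops’ appears nowhere in the structure: workshops are how the organisation pursues the key results, not evidence that the objective is being met.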


What most non-profits currently track → what outcome-focused measurement tracks instead:


  • Number of participants enrolled → proportion of participants who demonstrate changed behaviour 3–6 months post-program

  • Number of workshops delivered → whether program content matches the specific barriers participants face in their context

  • Attendance and completion rates → whether completers are applying skills — and who is not completing, and why

  • Funds disbursed or loans made → whether recipients have more financial control and decision-making power than before

  • Testimonials and case stories → patterns across participants that reveal what is working, for whom, and under what conditions

The shift is not about discarding what you currently measure. Attendance rates still matter. Participant numbers still matter for planning and resource allocation. The shift is about adding a second tier of questions — ones that connect the activity to the change — and making that tier the one that drives program decisions, not just the one that appears in the appendix of a donor report.
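The first shift in the comparison above, from counting enrolment to measuring behaviour change at follow-up, can be sketched in a few lines. The records and field names below are hypothetical; the design point they illustrate is that an outcome rate should always be reported alongside its follow-up coverage, so a strong rate on a tiny follow-up sample cannot masquerade as strong evidence.

```python
# Hypothetical participant records; field names are invented for illustration.
participants = [
    {"id": 1, "completed": True,  "followed_up": True,  "behaviour_changed": True},
    {"id": 2, "completed": True,  "followed_up": True,  "behaviour_changed": False},
    {"id": 3, "completed": True,  "followed_up": False, "behaviour_changed": None},
    {"id": 4, "completed": False, "followed_up": False, "behaviour_changed": None},
]

# The activity metric: easy to produce, and silent about change.
enrolled = len(participants)

# The outcome metric: measured only among those actually followed up.
followed_up = [p for p in participants if p["followed_up"]]
changed = [p for p in followed_up if p["behaviour_changed"]]

outcome_rate = len(changed) / len(followed_up) if followed_up else None
coverage = len(followed_up) / enrolled  # how much of the cohort the rate covers

print(f"Enrolled: {enrolled}")
if outcome_rate is not None:
    print(f"Behaviour change at follow-up: {outcome_rate:.0%} (coverage: {coverage:.0%})")
```

Even this toy version surfaces the honest caveat: a 50% change rate measured on 50% of the cohort is a very different claim from the same rate measured on 95%.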


The quarterly review cadence that corporate teams practise is also directly transferable, and perhaps the single most underused governance tool in the non-profit sector across Asia. We will come back to that in a moment.


The OKR logic, stripped of its corporate vocabulary, asks one question that every non-profit and every funder should be able to answer: if this program works, what specifically will be different for the people it serves? That answer is the measurement target. Everything else is logistics.


The Governance Gap: Why Collecting Data Is Not the Same as Learning From It

The James Irvine Foundation documented this precisely after a multi-year effort to help California non-profits build performance measurement systems. The project’s main finding, published in its evaluation, was striking: ‘Establishing these systems alone was not good enough. In the end, the project’s success had less to do with whether measurement systems were developed and more to do with whether the organisations were able to create a culture that valued the process of self-evaluation.’ In other words, the data was often already there. What was missing was the practice of examining it honestly together.


This is likely familiar territory for many organisations across Asia.


Most organisations, if asked, can point to data they already collect: attendance records, feedback forms, beneficiary profiles, financial tracking, quarterly narrative updates to funders. Some of it is quite detailed. What tends to be much rarer is a regular, structured moment when that data is actually examined together, discussed honestly, and used to make decisions. Data accumulates in filing systems, spreadsheets, and annual reports. It rarely surfaces in conversations where it could actually change something.


This is the governance gap. And based on the sector research, it matters more than the measurement methodology.


In a well-functioning corporate context, performance data feeds into a review cadence — weekly check-ins, monthly reviews, quarterly business reviews — that creates the discipline of asking: are we on track, and if not, why not? The review is not primarily a reporting exercise. It is a decision-making exercise. Someone looks at what the data says and decides whether to stay the course, adjust the approach, or stop and redirect resources.


BoardSource’s longitudinal research on non-profit governance in the US consistently finds that strategic oversight — specifically asking whether the work is actually working — is among the areas boards rate themselves lowest on. Compliance, financial review, and program updates dominate meeting time; the harder question of program effectiveness rarely gets the agenda space it deserves. While we cannot point to equivalent published data specific to Asian non-profit boards yet, there is no obvious reason to expect a different pattern, and considerable reason to expect the pressure toward compliance-focused agendas is stronger in contexts where regulatory and funder accountability demands are high.


What a Learning-Oriented Review Could Look Like

Consider a small organisation running skills training programs in three provinces. Rather than waiting for the annual donor report, it holds a 90-minute ‘learning meeting’ each quarter — separate from its board meeting, attended by program staff, one board member, and one rotating external stakeholder. The agenda is fixed at three questions:


  1. What were we trying to achieve this quarter, and what did we actually observe?

  2. What surprised us — positively or negatively — and what does that suggest?

  3. What is the one thing we would do differently next quarter?


The data used is whatever already exists: attendance patterns, facilitator notes, community feedback, financial variance. No new measurement system is required.


This kind of structured reflection is what the evaluation literature calls ‘double-loop learning’ — questioning not just whether activities were delivered, but whether the assumptions behind the program hold. Researchers at the James Irvine Foundation found it was precisely this practice — not the measurement system — that made the difference: organisations that created a culture of honest self-evaluation improved; those that only installed measurement tools often did not. It costs almost nothing beyond the discipline of making the time.


For board members reading this: the most valuable thing you can do is not to ask for more data. It is to ask for the conversation about what the existing data means. That question — asked consistently, in the right spirit — is what shifts an organisation from record-keeping to learning.


For CSR managers reading this: the same principle applies to your grant relationships. If your reporting requirements ask for numbers and case studies but not for an honest account of what is not working and why, you are inadvertently selecting for organisations that are good at presenting rather than organisations that are good at learning. The funder has more power to change this norm than most non-profits do.


Where to Start: Three Questions That Redirect the Work

We want to be practical here. Transforming how an organisation thinks about impact measurement is not a single-quarter initiative. But there are specific, bounded starting points that tend to shift things — for non-profits and for their partners — without requiring a complete overhaul of systems or a consultant-led process.


We offer these as questions rather than instructions, because the right answer depends on your organisation’s context, your team’s capacity, and the relationships you have with your funders. We would genuinely like to hear how they land in your situation.


Question 1: If this program works, what would be different for the people it serves?


This sounds simple. It is harder than it looks. It requires naming a specific, observable change — not a vague aspiration. ‘Women will be more financially empowered’ is an aspiration. ‘Women will be making more financial decisions independently within their households, and will be able to articulate why’ is a change you can look for evidence of.


Try this with your existing program portfolio. For each major program, write one sentence that answers this question specifically. If you cannot write that sentence, the program’s theory of change needs to be revisited before the measurement conversation can be productive.


Question 2: Who is not showing up, and why?


Dropout and non-participation data is the most underused information most non-profits hold. Who enrolled but stopped attending? Who was invited but did not come? Who completed the program but is not demonstrating the changes it was designed to produce?

The Stanford Social Innovation Review’s work on performance measurement traps documents this pattern: dropout and non-uptake in non-profit programs consistently reflect design failures rather than beneficiary failures. Schedules that conflict with caregiving hours, application processes too administratively complex for lower-literacy participants, program content that assumes infrastructure participants do not have — these are recurring design problems that are only fixable once someone asks about them.
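One minimal way to start asking is to segment dropout by a design-relevant attribute and look for skew. The sketch below uses an assumed session-schedule field to illustrate the caregiving-hours conflict; all records and field names are invented for this post.

```python
from collections import Counter

# Hypothetical enrolment records: whether each participant stopped attending,
# and the session schedule they were assigned to (a design choice, not a trait).
records = [
    {"schedule": "weekday_morning", "dropped_out": True},
    {"schedule": "weekday_morning", "dropped_out": True},
    {"schedule": "weekday_morning", "dropped_out": False},
    {"schedule": "weekend",         "dropped_out": False},
    {"schedule": "weekend",         "dropped_out": True},
    {"schedule": "weekend",         "dropped_out": False},
]

enrolled_by_schedule = Counter(r["schedule"] for r in records)
dropped_by_schedule = Counter(r["schedule"] for r in records if r["dropped_out"])

# A large gap between segments points at the design, not the participants.
for schedule, total in enrolled_by_schedule.items():
    rate = dropped_by_schedule[schedule] / total
    print(f"{schedule}: {rate:.0%} dropout ({dropped_by_schedule[schedule]}/{total})")
```

In this invented data, weekday-morning sessions lose twice the share of participants that weekend sessions do, which is exactly the kind of signal that turns ‘who is not showing up?’ into a concrete redesign decision.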


Question 3: What would have to be true for this program to stop?


This is the governance question, and it is the one that most non-profit boards and most CSR review processes never explicitly ask. Under what conditions would we conclude that this program is not working and should be redesigned or stopped? What evidence would tell us that?


The value of this question is not that you will stop most programs — you probably will not. It is that naming the stopping conditions in advance creates an honest frame for the data you collect. If there are no conditions under which a program would be stopped or significantly changed, then the measurement process is not really evaluation. It is affirmation.


For CSR Teams: Four Shifts That Change the Funder-Grantee Dynamic

  • Ask for learning reports, not just results reports. Require grantees to document what did not go as expected and what they adjusted. This normalises honest reflection and gives you much more useful information.


  • Extend time horizons where possible. Even extending a grant from 12 to 18 months creates space for a meaningful mid-point review and adjustment cycle. Multi-year grants, even at modest levels, allow organisations to design for outcomes rather than outputs.


  • Fund the measurement infrastructure. If you want organisations to track behaviour change rather than attendance, build the cost of doing that into the grant. It is a legitimate program cost, not overhead.


  • Ask your grantees the three questions above at the start of the relationship, not at the end. The answers will shape what you both measure and what you both learn.


What We Are Not Saying — and Where We Are Still Working This Out

We want to close the substantive section of this post with some honest acknowledgment of what is genuinely complicated about this conversation.


Outcome measurement done badly can be as distorting as no outcome measurement at all. A narrow focus on measurable outcomes can squeeze out programs that address the most intractable problems — the ones where change is slow, multidimensional, and resistant to simple metrics. Advocacy work, norm change, and leadership development are examples of interventions that are genuinely hard to measure in the short term but critically important for long-term social change in Asia. We are not suggesting that only measurable things matter.


There is also a real power dynamic in the funder-grantee relationship that complicates all of this. Research on performance measurement in the non-profit sector has noted that pressure to measure comes ‘primarily from external sources’ — funders — rather than from an internal culture of learning. Asking a non-profit to be honest about what is not working requires that the funder will genuinely not penalise that honesty. In many funding relationships — particularly where grantees are smaller, newer, or operating in contexts where funding alternatives are limited — that trust has yet to be built. The structural power imbalance can make candid reporting feel risky even when funders sincerely welcome it.


Building that trust takes time, consistent funder behaviour, and explicit signals that learning from failure is valued. This is not a measurement problem. It is a relationship problem.

We are also aware that the OKR and KPI language we have borrowed in this post carries cultural associations that may not land well in every context. In some organisational cultures across Asia, structured performance frameworks feel imported and hierarchical rather than enabling. The underlying logic is sound; the implementation needs to be sensitive to context. We are curious about what adaptations have worked in your settings.


An Invitation to Think About This Together

Maya’s question — are those women actually doing anything differently? — is not a difficult question to ask. What makes it hard is asking it in an environment where the incentive structure rewards you for not asking it.


Changing that environment is a shared project. Non-profits need to build the internal culture and capacity to ask harder questions about their own effectiveness. Corporate CSR departments need to redesign their reporting relationships to reward honest learning rather than polished performance. Board members need to reshape the agenda of the rooms they sit in. Funders need to be willing to pay for the measurement infrastructure they say they want.


None of this happens through a single framework or a single conversation. But it starts with naming the gap honestly, which is what we have tried to do here.


We’d Like to Hear From You

If you are a non-profit leader or program team: where does the activity trap show up most clearly in your work? What makes it hard to ask the question ‘is this actually working?’ in your organisation? What has helped?


If you are in a CSR department or foundation: how do your current reporting requirements shape what grantees measure? Have you seen examples of grantee relationships where honest learning has been possible? What made that possible?


If you are a board member: when did your board last have a genuine conversation about whether the organisation’s programs are producing the changes they are designed for? What would it take to make that a regular part of your agenda?


We are sharing these questions publicly because we think the most useful thinking on this topic is distributed across the people doing the work — and we genuinely want to hear what you are experiencing and what you have learned.



About This Series: Measuring What Matters

This is the first post in a four-part series exploring how non-profits and their corporate partners in Asia can build better impact measurement practice — practical, proportionate, and honest about the real constraints organisations face. Each post will include frameworks you can use, questions worth asking in your own context, and examples from the region.



About Huse Infinity

Huse Infinity works with non-profit organisations, social enterprises, and corporate partners across Asia on strategy, governance, and capability building. We believe that better measurement practice is not a technical problem — it is a leadership and culture problem. And it is one that non-profits and their partners need to work through together.
