A fit-for-purpose framework for embedded evaluation:


The January featured story on the O4C Facebook page gives detailed insight into how we continually reflect on, adjust and refine our evaluation framework to best capture important insights and learnings:

The first cycle of activity in the Open4Citizens project was completed in December 2016. Based on our design-anthropology-inspired evaluation framework, Antropologerne and our O4C project partners have gathered rich information about activities across pilot locations and have started extracting lessons from this.

We’ve been engaging with local stakeholders to support the organisation of hackathons in all five pilot locations/cities of the project: Opendatalab Copenhagen, Opendatalab Milano, Opendatalab Rotterdam, Opendatalab Barcelona and Open4Citizens – Karlstad. During these hackathons, participants across Europe have used open data to develop appropriate solutions to specific challenges.

We are eagerly analysing our first year of activity and building lessons learned into our second project year (January to December 2017).

A design-supported evaluation framework facilitates ‘embedded evaluation’:

All evaluation data gathering and analysis is done by embedded evaluators, i.e. it is carried out internally in the project, with pilot team members involved in planning, gathering data and evaluating hackathons and related activities. A potential drawback of this approach is insufficient distance from the activities, leading to biased data gathering and analysis. To mitigate this risk, we’ve designed a fit-for-purpose data gathering template, which is used by all project pilots.

Antropologerne’s data gathering design prioritises photos of activities and graphical interpretation of findings. This allows pilot teams other than those who gathered the information to offer alternative interpretations of activities, supports clearer communication across pilots for analysis and learning, and complements written material. Shared visual templates also facilitate communication with stakeholders beyond the project team.

Formative evaluation helps us to learn lessons across the project:

Initial analysis of our rich and extensive evaluation material allows us to map out where we are achieving our aims and where we need to adjust our approach. So far, we have learned how important the local context and the stakeholders involved are in shaping activities and ecosystems/OpenDataLabs.

Based on early learnings, we have developed design principles to improve the innovation tools used in our hackathons. To better support citizens in working with open data, all tools need to:

  1. Be data-focused,
  2. Be simple,
  3. Be flexible and
  4. Add value to the solution creation process.

In addition, our open data project partner, Dataproces, has incorporated lessons from the use of open data and the OpenDataLab Platform in the hackathons to continue designing the online platform underpinning all pilots (see last month’s featured story for more).

Continuing to refine both formative & summative evaluation elements – Understanding how best to support citizens’ use of open data:

Across our five pilots, we will continue to test our hypotheses about how best to achieve the project vision by capturing and reflecting on ways in which we create social value. We are evaluating the value created through

  1. structured co-design activities (i.e. a second round of hackathons) supported by a specific set of tools to facilitate participation and an understanding of open data,
  2. the infrastructure needed to support the co-creation of open data-driven solutions to challenges in urban services, and
  3. the governance level set-up that will ensure the sustainability of the O4C approach.

If you’re keen to learn more about the O4C evaluation approach, get in touch with Janice at Antropologerne (http://www.antropologerne.com/?lang=en, see ‘Team’).


Look out for next month’s #Open4CitizensFeaturedStory for more about the O4C project!