Growing and watering plants: “Assessment for learning” complex skills with video-enhanced rubrics

July 1, 2016

Looking at the difference between formative and summative assessment, the Ministry of Education in New Zealand uses a striking ‘garden’ analogy, with children cast as ‘plants’. In this analogy, summative assessment is described as the process of measuring and comparing performances with the aim of reaching a decision or judgement, e.g. passing an exam or selecting suitable candidates for an educational programme. Although these processes are interesting in themselves, they do not affect the growth of the plant. Formative assessment, on the contrary, is compared with growing and watering the plants according to their needs, thus directly affecting their growth. In the latter case, the information generated by comparing a person’s current performance with some representation of a goal state (e.g. a gold standard, modelling examples, role models) is used to further guide the teaching and learning process. According to Hattie and Timperley’s model (2007), this information ideally helps a learner to answer three questions: 1) Where am I going? (feed up), 2) How am I going? (feedback) and 3) Where to next? (feedforward), thus supporting a very effective learning process. Research showed that formative assessment had a greater effect on learners’ performances than class size or teacher professionalisation, at lower cost (Black & Wiliam, 1998; Wiliam & Thompson, 2007). Additionally, learners in groups with formative assessment (by means of short or medium feedback cycles) made about double the progress in a year compared to other groups, and showed increased commitment (Wiliam, Lee, Harrison & Black, 2004). Thus, ‘watering plants’ by means of formative assessment is very effective when learners’ personal growth is at stake. But what ‘waterer’ (instrument) could be used to support learning processes in this way?

In the Viewbrics project (NWO/NRO funded), a team of researchers, teachers and students combines two instruments designed to support the learning process towards the mastery of complex skills by offering a standard for comparison: ‘rubrics’ and ‘video-modelling examples’. A rubric describes performance levels of a complex skill by means of a set of performance indicators for its sub-skills (Sluijsmans, Joosten-ten Brinke & Van der Vleuten, 2013) and may be used to compare a person’s performance against the set indicators in various assessment settings (e.g. self-, 360-degree, peer or summative assessment). A video-modelling example demonstrates (aspects of) a skill or procedure to learners through a human(-like) model in a video (Van Gog & Rummel, 2010). Good and bad examples of exercising a skill in context can be shown in a video-modelling example, thus enabling learners to learn from observing others. This Dutch video shows a pitch by prof. Tamara van Gog explaining, from a cognitive-psychological and motivational perspective, why learning from video-modelling examples works.
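To make the idea of a rubric as a ‘standard for comparison’ concrete, the sketch below models a rubric as a simple data structure and uses it to answer Hattie and Timperley’s three questions for a learner. This is purely illustrative: the rubric content, sub-skills and function names are invented for this example and are not the Viewbrics instruments themselves.

```python
# Illustrative sketch (not project code): a rubric as a data structure,
# used to compare a learner's current level against a target level.

# Hypothetical rubric for a 'presenting' skill: each sub-skill lists
# performance indicators ordered from novice (index 0) upwards.
RUBRIC = {
    "structure": ["no clear structure", "some structure", "clear intro, body and close"],
    "eye contact": ["reads from notes", "occasional eye contact", "engages whole audience"],
}

def formative_feedback(rubric, current_levels, target_level):
    """Answer Hattie & Timperley's (2007) three questions per sub-skill."""
    report = {}
    for sub_skill, indicators in rubric.items():
        current = current_levels[sub_skill]
        report[sub_skill] = {
            "feed_up": indicators[target_level],    # Where am I going?
            "feedback": indicators[current],        # How am I going?
            "feed_forward": (indicators[current + 1]  # Where to next?
                             if current < target_level else "target reached"),
        }
    return report

report = formative_feedback(RUBRIC, {"structure": 1, "eye contact": 0}, target_level=2)
print(report["eye contact"]["feed_forward"])  # the next indicator to work towards
```

The point of the sketch is only that a rubric makes the goal state explicit per sub-skill, so feed up, feedback and feedforward can all be read off from the same set of indicators.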

Although both instruments have been used and studied separately, the effect of combining them in one (formative assessment) instrument has, as far as we know, not yet been researched. In the coming period we will develop the rubrics with video-modelling examples for three skills (presenting, collaborating and information literacy), validate them with various stakeholders and fine-tune them, leading to a ‘proven’ version. In the last year of the project we will look at the effect of the ‘video-enhanced rubrics’ on feedback quality and learners’ final mastery of a complex skill in several secondary schools. So, more in the coming months!



Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.

Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77 (1), 81-112. doi: 10.3102/003465430298487

Sluijsmans, D. M. A., Joosten-ten Brinke, D., & Van der Vleuten, C. P. M. (2013). Toetsen met leerwaarde. Een reviewstudie naar effectieve kenmerken van formatief toetsen. Den Haag: NWO.

Van Gog, T., & Rummel, N. (2010). Example-Based Learning: Integrating Cognitive and Social-Cognitive Research Perspectives. Educational Psychology Review, 22(2), 155-174. doi: 10.1007/s10648-010-9134-7

Westera, W. (2011). On the Changing Nature of Learning Context: Anticipating the Virtual Extensions of the World. Educational Technology & Society, 14 (2), 201–212.

Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education, 11(1), 49–65.

Wiliam, D., & Thompson, M. (2007). Integrating assessment with instruction: What will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning (pp. 53–82). Mahwah, NJ: Erlbaum.

Author Ellen Rusman

I am an assistant professor and researcher at the Welten Institute of the Open University (OU) of the Netherlands. As an educational designer I have participated in (inter)national (European) projects since 1998 (e.g. Cooper, E-Len, CEFcult, LTfLL, Elena), and have also been involved in many OU-internal innovation projects (e.g. LMS-ID specification, virtual classroom project). I am a fellow of the ICO and SIKS research schools and an expert in the EMPOWER network. Research-wise I am interested in supporting collaborative learning and working experiences and in facilitating the acquisition and (formative) assessment of complex skills.

More posts by Ellen Rusman