By Edward Beswick, Research Coordinator, Generations For Peace Institute

You are at a training. It is the end of a long day in an even longer week; you are tired, and you have just sat through a two-hour session full of theory, practice, and group work. Increasingly, your thoughts turn to dinner and your plans for the evening; the day finally seems to be over. But then you are told you must evaluate the session you just sat through, and someone reminds you, once again, how important your feedback is… sound familiar? We have all been there, no matter what training we are attending – evaluations always seem to ask us to engage just when we are ready to stop and move on to other things.

At Generations For Peace (GFP), we carry out Trainings quite often. These include local Trainings, which take place in the countries our volunteers come from, and international Trainings, for which we bring volunteers to Amman or other destinations. International Trainings, like the Samsung Advanced Training 2016 (AT16), held in October, give GFP a great chance to find out whether the sessions we deliver to our volunteers actually work well. Is the content useful? Was it delivered clearly? Are participants learning? Knowing what works and what does not helps us to adapt our Trainings: our aim is to design and implement successful Trainings by learning from past experience and from the feedback of those who participate in them.

So, with this in mind (both the importance of doing evaluations and the fact that they can be a little tiring), the Generations For Peace Institute (GFPI) decided to spice things up a little at AT16 by using different methods of evaluating individual sessions. For our international Trainings, we do a holistic evaluation: we carry out Learning Needs Assessments before a Training, we collect session-based feedback during a Training, and we complete Post-Training Surveys right after a Training finishes. Staff members also participate in After Action Reviews once the Training finishes. Finally, we ask participants to complete Impact Surveys up to six months after each Training. This blog post, however, focuses only on session-based feedback – the way we collect feedback during a Training. It zooms in on the methods we used and the lessons we learnt at AT16. Session feedback is underexplored in the evaluation field, and this post aims, in some small way, to address that gap; the idea is to offer useful approaches for our volunteers – and, hopefully, other practitioners in the development field – to use to assess individual training sessions.

Generations For Peace volunteer at the Samsung Advanced Training 2016

This post evaluates four methods of session feedback: firstly, the method GFP used at previous international Trainings (from 2013 to 2015), and then the three methods we trialled at the most recent Advanced Training, AT16. Previously, we used a system called the ‘Traffic Lights.’ Participants would rate each session for content (what the session was about) and delivery (how that content was communicated to them) using coloured cards: red = not useful, yellow = useful, and green = very useful. They would then place these cards in two boxes, one labelled Content and the other labelled Delivery. They could also add in any written feedback. So, what were the strengths and weaknesses of this method?


Strengths:
. It is simple to understand and relatively intuitive, building on people’s prior associations with the colours of traffic lights;
. It uses visual cues rather than just words;
. And, materials can be reused at different sessions and Trainings.


Weaknesses:
. It is labour intensive (evaluators must stand with boxes making sure everyone evaluates after each session; they also need to collect, count, and redistribute the coloured cards and boxes once each session is complete);
. It involves lots of specific materials that take time to make (correctly-coloured cards and boxes);
. And, it is not entirely anonymous, as the evaluators and other participants can see people in the act of placing coloured cards in boxes.

Summary: this is a simple, accessible, and interactive method; however, it requires specific materials, it is labour intensive for the evaluators, and it is not entirely anonymous, which may discourage honesty among participants.

Building on our experience using this method, we felt it was time to try something new at AT16.

Using an adapted scale with three choices (‘Room for improvement’, ‘Average’, and ‘Good’, for both the content and the delivery of the sessions), at AT16 we tried three different methods: the ‘Show of Hands,’ the ‘Daily List,’ and the ‘Feedback Table.’

Firstly, the ‘Show of Hands.’ For this, at the end of a particular Training session, an evaluator reads out the scaled answer choices for both content and delivery, and people raise their hands when the stated option reflects their opinion. We had the facilitators leave the Training room and asked participants to sit on the floor in a circle facing outwards with their eyes closed. One evaluation facilitator would read out the options and the other would count the raised hands. Afterwards, people who wished to comment were handed paper cards. Here’s what we found out about the strengths and weaknesses of the method:


Strengths:
. It is quick and easy to understand;
. It is interactive and engaging;
. And, it is almost anonymous (if participants close their eyes!), as only the evaluators see people’s answers.


Weaknesses:
. People change their minds a lot (raising and lowering their hands), which makes counting hard, especially with larger groups;
. The evaluation has to be carried out by people who did not facilitate the session, which means it requires more human resources than other methods;
. And, it depends on people’s willingness to preserve anonymity (they need to keep their eyes closed, and it is very tempting to open them!).

Summary: This is a quick and easy way to evaluate, but it is important people keep their eyes closed. Counting can be difficult with indecisive people, especially in larger groups.

The second method we tried is the ‘Daily List.’ For this method, each participant is given a sheet of paper. The paper contains a table with rows and columns; the rows list each session to be held that day, along with boxes participants tick to rate the session’s content and delivery. The boxes correspond to the scaled answer choices mentioned above. The table includes space to provide any qualitative comments about each session. Participants are asked to keep hold of this list and are reminded to fill it out after each session, over the course of the day. They hand the list in at the end of each Training day.


Strengths:
. It is anonymous (we actively discouraged people from writing their names on the list);
. The sheet stays with participants all day, so they can compare their ratings of different sessions;
. And, it involves very little work from the facilitators – just a reminder after each session.


Weaknesses:
. People need to hold on to their papers (and it is optimistic to think everyone will!);
. It is not interactive or engaging; it is completed independently;
. And, people forget to fill it in after each session, and it is difficult to ensure that everyone evaluates.

Summary: This is a simple, self-contained evaluation method that involves little input from facilitators, but it is not engaging and depends on participants both keeping hold of their sheets and filling them out consistently.  

The third and final method we used was the ‘Feedback Table.’ This involved big A3 sheets of paper being stuck to each table in the hall where sessions were held. Each paper contained a grid with a row for content and a row for delivery; the answer choices (‘Room for improvement,’ ‘Average,’ and ‘Good’) formed separate columns. After each session, participants were asked to place sticky pieces of paper in the part of the grid that reflected their opinion. They could also write comments on the paper if they wanted to. At the end of the day the feedback would be consolidated and put up on a wall for all to see. We found that the strengths and weaknesses of this method were as follows:


Strengths:
. People can see their comments and feedback once they are compiled at the end of the day;
. It is interactive and transparent;
. And, each big sheet is stuck to a table and can be reused for each session.


Weaknesses:
. It becomes messy, with lots of sticky pieces of paper on one sheet (and tables are often full of other items);
. It is time consuming for the evaluators to count the sticky papers after every session, compile the feedback, and present it at the end of the day;
. And, as comments are shared with everyone, people may not feel comfortable writing their honest opinions.

Summary: This method stands out for its transparency, with people being able to see what others say, but it is time consuming for those doing the evaluation, and, unless space is set aside for the ‘Feedback Tables’, it becomes messy.

Generations For Peace volunteers at the Samsung Advanced Training 2016

So, what can we conclude from all of this? We found that all four methods have their strengths and weaknesses, and which to pick depends on a few key factors:

. The group of participants you are working with;
. The space/materials you have available;
. The nature of the session (are participants seated or moving around the room? Does it take place in one venue or several?);
. The number of people you have available to help administer the evaluation;
. Any language barriers that might exist (you may want to think about which methods are easier to communicate and understand across languages);
. And, how safe and comfortable people will feel being honest with their feedback.

It is good to be inventive and, if appropriate, vary your methods (remember how tiresome evaluations can be after a long day of training). This keeps people engaged and the evaluation novel, even if explaining new methods takes time. But it is important that the results are comparable across the different methods. If you are using different ways of collecting session-based feedback within the same Training, stick to the same scale so that you can compare feedback from each session. Keeping all that in mind, feel free to adapt the methods above – and of course, be creative!
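To illustrate the point about comparability, here is a minimal sketch (in Python, with entirely made-up session tallies – GFP does not prescribe any particular tooling) of how feedback gathered through different methods can be compared, provided every method uses the same three-point scale:

```python
from collections import Counter

# The shared three-point scale from AT16, mapped to numbers so that
# feedback collected via different methods can be averaged and compared.
SCALE = {"Room for improvement": 1, "Average": 2, "Good": 3}

def average_rating(responses):
    """Mean score for one aspect (content or delivery) of one session."""
    return sum(SCALE[r] for r in responses) / len(responses)

# Hypothetical tallies: one session rated by Show of Hands, another by
# Daily List. Because both use the same scale, averages are comparable.
show_of_hands = ["Good"] * 12 + ["Average"] * 5 + ["Room for improvement"] * 3
daily_list = ["Good"] * 9 + ["Average"] * 8 + ["Room for improvement"] * 3

print(Counter(show_of_hands))
print(f"Show of Hands session average: {average_rating(show_of_hands):.2f}")
print(f"Daily List session average:    {average_rating(daily_list):.2f}")
```

Had one session used a different scale (say, five points), the averages would no longer be directly comparable, which is exactly why sticking to one scale matters.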


Sign up to our e-newsletter to learn more about the impact of our programmes in the Middle East, Africa, Asia and Europe.