Visualising Intervention – Pre & Post Assessment

How can we show that non-teacher intervention is working in schools?

Note: this article is based on working practice during 2005

Part of the Visualising Education Blog Series

It is clear that current and future developments within the education system are focused on using data to prove that change has taken place.  The open question is which change, or changes, should inform judgments about support staff. Personally, I believe that support staff should not be judged solely on learners' academic performance.  Support staff deliver a range of interventions aimed at different areas of a learner's holistic development, e.g. behavioural, social and educational development (BSED).  A support worker's performance or impact should therefore be judged against the stated needs of the learner, creating three categories of performance or assessment which imply three core skillsets wrapped in a clear understanding of pedagogical practice.

The biggest issue facing government, schools and support workers is how to show that the taxpayer is getting value for money from the pupil premium fund.  All too often this falls back on the academic progress of the learners who have been supported by a support worker funded from this pot of money. However, research from across the developed world shows that this is not a valid method of proving support workers' impact.

There is a shift back to pre- and post-baseline assessment of learners to show that intervention has made a difference to the individual child or young person.  The approach is a simple idea and can show progress in the learner's development, which in turn can be used to argue that these developments have contributed to any academic improvements of the learner and of those learners who share the learning environment, especially if the principal issue is behaviour or disruption.

Figure one shows the mean of all intervention over one half term (about six weeks) for 80 students, using the Jane McSherry Coping in School Survey (CISS). The graph has five sections, one per skillset, and each section has five bars (columns): the first three columns are indicator columns and the last two show the mean scores for pre-intervention and post-intervention.

Figure 1) Average Intervention Gains – Mean – CISS Assessment

It can be seen that all sections or skillsets achieved a mean increase after the half-term intervention, except for the skillset 'self and others.' This was a consistent trend, and it showed only a marginal increase over the year, even after focused changes to the intervention methods used to target this skillset. I would also draw your attention to the 'self-management and behaviour' skillset, where the gains were consistently low.  This suggests that intervention in these two skillsets needs to be specific and enduring, which in turn needs to be reflected in any performance assessment of support workers.
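The pre/post comparison above can be reproduced with very little tooling. The sketch below, which assumes invented skillset names and scores purely for illustration (the real CISS sections and data are not reproduced here), shows the basic calculation behind a chart like Figure 1: the mean pre- and post-intervention score per skillset, and the gain between them.

```python
# Illustrative sketch only: computing mean pre/post gains per skillset.
# Skillset names and paired (pre, post) scores are invented for the example.
from statistics import mean

# one (pre, post) score pair per learner, keyed by skillset
scores = {
    "Self and others": [(12, 13), (10, 10), (14, 15)],
    "Self-management and behaviour": [(9, 10), (11, 11), (8, 9)],
}

for skillset, pairs in scores.items():
    pre = mean(p for p, _ in pairs)
    post = mean(q for _, q in pairs)
    print(f"{skillset}: pre {pre:.1f}, post {post:.1f}, gain {post - pre:+.1f}")
```

Plotting the two means side by side per skillset, as in the figure, is then a single grouped bar chart.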

This overarching data can inform areas like:

  • The deployment of support workers (based on their skillsets) in relation to the learner's specific needs
  • The training and CPD needs of the support staff to target weak areas
  • The overall impact or difference that the support team has had over that half term

WARNING: Finally, take a moment to consider which assessment you are using to show the impact of the support team.  There are many different assessments, which test many different aspects of the learner and equally have different copyright conditions. Be very clear about what you want the assessment to show.  For example, if you want the assessment to show the learner's attitude to school and learning, there is no point completing a self-esteem test.

Once you have selected an assessment you need to consider the ethical and moral issues around implementing it. If you have chosen an assessment used by health professionals to support mental health assessment and your data shows that the learner is severely depressed, what will the school put in place to support that child, to guard against liability or, even worse, self-harm? Also, do you have the (ethical) right to implement the assessment without informed consent from the parent and/or the child?


Visualising Intervention – Intervention Credits


As a team providing personalised intervention for more than eighty students across Key Stage Three (ages 11-13) we faced the same problem as most researchers who explore this sector: quantitative evidence of student improvement.  Qualitatively, the intervention was making a difference that was observable within the learning experience.  However, this left the department (learning support) and its staff open to the usual comments regarding performance and ability.

Then my brother (Tim Campbell) showed me a children's pocket (cheque) book which had a different activity on each page.  The idea was that the child would request the activity and, when the activity took place, hand over the page or cheque.  The epiphany: I quickly realised that this idea could provide the quantitative evidence demonstrating that the intervention staff were making a difference to children and young people across the school.

Figure 1) The mentoring cheque


How Did We Use Them

The intervention worker (mentor, learning support worker or teaching assistant) was responsible for making sure that the student had access to their (intervention) credits.  This was fairly easy, as each target was printed in batches of five: we could fit five cheques on one side of an A4 sheet, which allowed two chequebooks to be created on each print run.

The student received their personalised chequebook, which contained about nine cheques, roughly three for each of the student's intervention targets taken from their intervention plan (IBP, IEP, SEN Statement etc…).  This was a deliberate approach as it allowed the school/department to compare students' success against pre-defined targets, which introduced data transparency.

The number of cheques for a given target depended on how successful the student was at achieving it.  For example, a target which the student found hard to achieve would have three to five cheques, but we also left in cheques which the student could easily achieve, to ensure a continuing feeling of success.

Given the nature of the caseload students (BESD or SEN Statemented), it was decided that it was the student's responsibility, not the teachers' or teaching assistants', to get the cheque signed.  To promote this idea and increase engagement with the chequebook we introduced prizes.  It was quickly realised that the prizes had to be well defined, as students would use any gaps to get more credits or more rewards.  To overcome this, a standardised list was created with the required credit points and the times at which each item could be accessed (break, dinner, after school).  The standardised list also included items like pens and pencils for only a few credits, which worked well for students who had poor organisational skills.
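The standardised list amounts to a simple lookup: each reward has a credit cost and an access window, and a claim is valid only when both are satisfied. The sketch below is hypothetical (the item names, costs and the `can_claim` helper are invented for illustration; the school's actual list and prices are not reproduced here), but it captures how the list closed the "gaps" students would otherwise exploit.

```python
# Hypothetical sketch of the standardised rewards list: each item has a
# credit cost and the times of day it can be claimed. All values invented.
rewards = {
    "pencil": {"credits": 2, "when": {"break", "dinner", "after school"}},
    "pen": {"credits": 3, "when": {"break", "dinner", "after school"}},
    "cinema trip": {"credits": 200, "when": {"after school"}},
}

def can_claim(item, balance, time_of_day):
    """A claim is valid only if the student has enough credits AND the
    item is accessible at that time of day -- no gaps to negotiate."""
    entry = rewards[item]
    return balance >= entry["credits"] and time_of_day in entry["when"]

print(can_claim("pencil", 5, "break"))            # → True
print(can_claim("cinema trip", 150, "after school"))  # → False: not enough credits
```

Because both conditions are written down per item, staff had nothing to adjudicate on the spot.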

Based on previous observations and experience, it was discovered that systems which mirrored the behavioural system led to very low engagement levels.   As a result, the team went to great lengths to separate this credit scheme from the behavioural system.  When the credit scheme was presented at whole-school staff training, teachers were directed not to request or remind students about their chequebook, to be honest and fair about the student's performance during the lesson and, most importantly, not to treat the credit scheme as a punishment.

The Bank (data collection)

When the credits were returned by the student, their name, target, subject and date were entered into a spreadsheet whose formulas and functions automatically updated the student's scores and the overall success graph.   This allowed the student to experience instant gratification.  At the end of each day a new vertical bar chart, listing all the targets and their frequency, was printed and displayed on the wall immediately outside the intervention team office and the staffroom.  This had a major positive impact on students' motivation levels.  Some students could see that they were having a positive experience within the school and contributing to the school as a whole; others used it as a competition between friends.  For a few, it acted as a catalyst for a change in behaviour.
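The "bank" itself is just a ledger plus a tally. The original used a spreadsheet; the sketch below shows the same bookkeeping in a few lines of Python, with invented student names, targets and dates standing in for real records.

```python
# Illustrative sketch of the "bank": each returned cheque is logged as
# (name, target, subject, date), then tallied per target for the daily
# wall chart. All records below are invented examples.
from collections import Counter

ledger = [
    ("A. Pupil", "Arrive on time", "Maths",   "2005-03-01"),
    ("A. Pupil", "Stay on task",   "English", "2005-03-01"),
    ("B. Pupil", "Arrive on time", "Science", "2005-03-01"),
]

# frequency of each target achieved -- the basis of the printed bar chart
target_counts = Counter(target for _, target, _, _ in ledger)
for target, count in target_counts.most_common():
    print(f"{target}: {count}")
```

Grouping the same ledger by name or by subject gives the per-student scores and the per-department impact figures mentioned below.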

This approach to quantitative data gathering allowed the inclusion department to highlight how much impact the provided intervention was having within each department and across the school.  More importantly, it allowed the teaching staff to visualise whole-school intervention based on students' targets.  Recording the data also allowed the intervention team to track and change targets when the student was continually successful.

How Successful Was It

In my considered opinion, this approach to tracking and assessing intervention success was the most efficient and effective approach that we tried over the five years I was at the school.  To highlight this: of the eighty students on the intervention caseload, we took 28 (35%) of them to the cinema, a reward worth 200 credits.  This means that each student had achieved their target(s) 200 times, or about 6.6 targets per day, one each lesson plus a spare.  Multiply this by 28 students and the intervention produced 5,600 instances of success over that half-term period.
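The arithmetic behind those headline figures is easy to check, assuming a six-week half term of five school days per week (the post's "about six weeks" figure; 30 school days is an assumption on my part):

```python
# Checking the figures quoted above, under the stated assumptions.
credits_per_student = 200   # cost of the cinema trip
school_days = 6 * 5         # assumed: six weeks of five school days
students = 28               # students who earned the trip

per_day = credits_per_student / school_days   # ≈ 6.7 targets per school day
total = credits_per_student * students        # total instances of success

print(f"{per_day:.1f} targets per day")
print(f"{total} instances of success")        # prints 5600
```

Which lines up with the post's figures: roughly one target achieved per lesson plus a spare, and 5,600 recorded successes in all.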

Now read this post:  Visualising Intervention – Action Plan.

This blog is based on extracts from this book: A Practical Guide To Inclusion: A Manual For Implementation and Delivery

