I came across a nice visual on a blog a while back which showed each pupil's GCSE results ranked from highest to lowest grade, with pupils sorted from highest to lowest average GCSE grade, and conditional formatting applied to give a colour gradation from bright green (9/A*) to dark red (1/U); letter grades are converted to numbers to make this work. Once completed, the effect looks like this:
I really liked both the completeness and the simplicity of the visual in showing the distribution of exam grades for a year group. It shifts the conversation from individual department results to our successes and development areas as a school. For example, perhaps we do really well at pushing the brightest students to straight 9-7s and A*/As, but there is a drop-off for the pupils in the middle. We could easily question this and set it as a target going forward.
Taking this a step further, we can compare one cohort's results with another. At a glance it is easy to see the changes in distribution without getting bogged down in percentages. At the top level, all we need to ask is whether there are more greens than yellows, or more yellows than reds, compared with last year. On the report page of our Power BI app, I have two of these charts side by side to make the comparison easy.
I managed to create this in Excel, but it was a bit of a faff. It required lots of steps, including a macro I found that enabled an independent horizontal sort, and you'd need to go through the same process each year and again when remarks come in. Far too inefficient.
Instead, I have recreated the visual in Power BI. In summary, it uses a matrix visual (the Power BI equivalent of an Excel pivot table) with pupils on the rows, grade rank on the columns and the exam grade as the value. The student name comes from the student info table, the exam grade from the exam results table, and the grade rank is a calculated column in the exam results table.
RANKX was used with the EARLIER function to generate the grade rank calculated column. It is important that RANKX ranks each student's grades within an exam group. For example, if a pupil who has just left after completing IB courses in year 13 also did IGCSEs at our school, they will have both IGCSE and IB grades in the same table. If we rank by student alone, both the IGCSE and the IB grades are mixed into a single ranking. If we instead rank on the combination of StudentID and exam type (IB/IGCSE), we get two separate rankings for that student, keeping their IGCSE grades apart from their IB grades. To do this I created a custom column concatenating the StudentID and exam type fields, although I'm sure with better DAX skills this step isn't necessary.
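As a rough sketch, the concatenated column might look like the following. The column names `StudentID` and `Exam Type` are assumptions; use whatever your exam results table actually calls them:

```
RankID =
// Delimiter reduces the chance of two different ID/type pairs
// concatenating to the same string.
'Exam results'[StudentID] & "|" & 'Exam results'[Exam Type]
```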
Here is a breakdown of the steps taken to create the rank in the exam results table:
- Create a random number calculated column (RANDBETWEEN returns an integer, so dividing by 1,000,000 turns it into a small decimal tie-breaker):

  ```
  RandomNumber = RANDBETWEEN ( -20000, 20000 ) / 1000000
  ```

- Create a new column that adds the random number to the exam grade, making each exam grade unique:

  ```
  Exam grade (RN) = [Exam Grade (standardised)] + [RandomNumber]
  ```

- Create a RankID column that combines the StudentID and exam type (IGCSE/IB).

- Create the grade rank calculated column:

  ```
  Rank of exam grades per student =
  RANKX (
      FILTER (
          'Exam results',
          'Exam results'[RankID] = EARLIER ( 'Exam results'[RankID] )
      ),
      [Exam grade (RN)],
      ,
      DESC,
      DENSE
  )
  ```
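Incidentally, the RankID step can probably be skipped by filtering on both columns directly inside RANKX. Here is a sketch, assuming the same hypothetical `StudentID` and `Exam Type` column names:

```
Rank of exam grades per student (no RankID) =
RANKX (
    FILTER (
        'Exam results',
        // Keep only this student's rows for this exam type,
        // so IGCSE and IB grades are ranked separately.
        'Exam results'[StudentID] = EARLIER ( 'Exam results'[StudentID] )
            && 'Exam results'[Exam Type] = EARLIER ( 'Exam results'[Exam Type] )
    ),
    [Exam grade (RN)],
    ,
    DESC,
    DENSE
)
```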
Once this is set up, all you need to do is add the matrix visual and adjust the conditional formatting settings to suit.