Following the difficulties experienced by Cardiff Law School’s “Quality Assurance for Advocates” pilot programme, what are the options available for future monitoring schemes, asks David Wurtzel.
One of the aspects of being self-employed is that there is no one to appraise or quality assure you. The prospect that this might change for the Bar arose three and a half years ago when Lord Carter recommended, “A proportionate system of quality monitoring based on the principles of peer review and a rounded appraisal should be developed for all advocates working in the criminal, civil and family courts”—and in the first instance, for publicly funded criminal advocates. This was sometimes referred to as the “Carter trade-off”: practitioners would receive more money and in return would institute quality assurance (“QA”). The money was indeed forthcoming but QA was not. Ironically, just when the government is likely to renege on most of the rise in fees, criminal court advocates finally do face the development of a QA programme. The details are very far from decided.
Action so far includes the formation of a Joint Advocacy Group, backed by the Legal Services Board (“LSB”), and made up of the three legal regulators: the Bar Standards Board (“BSB”), the Solicitors Regulation Authority (“SRA”) and ILEX. In December 2009 they issued a joint consultation paper for a set of proposed standards for criminal advocacy, which they identified as the starting point in the significant task of ensuring that standards “and their associated competences are applicable in a wide range of contexts”. Crime is “where there has been greatest interest in the consistency of advocacy competence and performance”. Responses should be sent to the BSB by 22 March.
The Cardiff research project
There has also been a pilot project—also referred to as a research project—which was commissioned by the Legal Services Commission (“LSC”), who engaged the Centre for Professional Legal Studies at Cardiff University Law School. The Cardiff “team” consisted of academics and an advisory panel including criminal practitioners, a retired Circuit judge and a chief Crown prosecutor. The final report has not been met with obvious approval.
Those of us who attended the meeting on 27 February 2009 at Inner London Crown Court—which was intended to generate volunteers for the pilot—will remember it for the distinctly unenthusiastic atmosphere. It was organised by the South Eastern Circuit and addressed by the then Chairman of the Bar, who quoted a Circuit leader’s admonition, “you’ve got to be in it to win it”. There was one questioner (who by the looks of him was born after I was Called) asking, “why can’t we go back to the old days?” A defence-only advocate was concerned that he might get marked down in “judicial evaluation” (“JE”) even though he was only following the instructions of a difficult client, instructions which legal professional privilege prevented him from explaining to the judge. Some of the local judges were hostile to taking part, feeling that JE was invidious to their relationship with counsel, despite being described as “the consumers of advocacy” and thus best placed to feed back on the advocates who appear in front of them. What was forgotten was that until very recently, JE was an accepted and embedded part of the system. It was then colloquially known as the Lord Chancellor’s “secret soundings”, and judges were clearly willing to comment on the advocates in their courts, although their views could potentially undermine someone’s chances of becoming a QC or a Recorder.
Over time, the attitude of the Bar and Bench softened considerably, but Cardiff encountered difficulties in producing a statistically significant sample. The pool of participants was projected at 280 “to test the validity, reliability and effectiveness of the assessment process”, but only 98 advocates completed the assessments in time for the evaluation (see top table, p 12). Similarly, JEs were few. The problem may have been the decision that the individual assessment results were to be kept anonymous—neither the candidates nor anyone at the LSB or the respective regulators was told them. This meant that candidates were not allowed simply to hand a form to the judge at the start of a trial, and administrative alternatives proved to be too difficult. As a result, of the 148 advocates who were due to receive judicial feedback, only 22 got it, and only three candidates got more than one JE. The cohorts themselves included solicitors with higher rights of advocacy and Crown advocates (see bottom table, p 12). The whole scheme was based on volunteers, so there was no attempt at defining a representative sample.
Volunteers were divided into four levels.
Nearly two-thirds were assigned to the one level they laid claim to. The Cardiff pilot devised five competencies—analysis, organisation, interaction, presentation and leading cases. Within those were many specific matters, such as “gives lay and professional client clear advice” and “deals effectively with uncooperative witnesses”.
A quality “experiment”
With so few taking part, it is best to approach what took place as an experiment in how one can assess quality. It is fair to point out that in their recommendations, the Cardiff team endorsed the future use of the methods they employed and set out how this could happen. It is, of course, up to the regulators to decide whether or not to accept any or all of those recommendations.
Other than the use of JEs, Cardiff rejected the option of assessing candidates on their actual performances in court. This was thought to be uneconomic (assessors might arrive at court to find that the trial was ineffective) and incapable “of comparison or verification for appeal”. In addition, there was no control over how challenging a candidate’s trial would be.
Instead, Cardiff used live, simulated exercises based on case studies. This overcame the problem of legal professional privilege set out above and meant that everyone would be dealing with the same materials. Being a case study, anything could be put into it; on the other hand, the people who devised it needed to select a particular scenario, which meant that a candidate could complain, “why am I being assessed on a type of case I never do?” Not everyone does every type of crime. Level 3 candidates were challenged by being asked to cross-examine an expert witness for 40 minutes: here, a costs draftsman giving evidence in a prosecution of a solicitor for fraud by overcharging. Papers were supplied in advance and candidates were expected to show sufficient competence to request further relevant papers. On the day of the assessment, those who had not made such requests were given some but not all of the papers they would have received had they asked—the skill of identifying what they required was one of the matters assessed. Some 77 per cent of the level 3 candidates passed—much higher than the 51 per cent of level 2 candidates who passed cross-examination and the 59 per cent who passed examination-in-chief—and 31 per cent scored so highly that they were deemed to be operating at level 4. For all the simulated exercises there were two practitioner assessors, who met to reconcile their scores.
The more junior the candidate, the more exercises they had to do. Level 1 candidates presented a portfolio of their cases and an anonymised piece of written advocacy, performed a cross-examination, an interview and a submission, and sat a multiple choice test (“MCT”). Level 2 candidates produced the portfolio and written advocacy, performed examination-in-chief and cross-examination, and sat the MCT. Levels 3 and 4 also produced a portfolio and written advocacy; level 3 did the cross-examination, while level 4 did no oral advocacy. The MCT was unpopular—candidates pointed out that in real life they could always look things up in Archbold. It tested not law but only “must know” points of procedure and evidence; only 51 per cent passed it. Few at level 1 failed overall, but 60 per cent failed at least one assessment. There has been criticism of the strong element of “paper assessments”—though presumably most barristers would want someone to have regard to their track record as practitioners.
The CPS project
It is worth noting here that the Crown Prosecution Service (“CPS”), who provided some volunteers for the Cardiff pilot, are in the midst of their own quality assurance project. It relies upon assessors observing advocates in a real trial. The assessors are in-house Crown advocates who in turn are quality assessed by a “double observation” by external assessors who are self-employed barristers, all of whom are Inns-accredited advocacy trainers and most of whom are very experienced in training young barristers. All assessors had to undergo consistency training, both to understand what was expected of them and what standards were required. Their assessments (a rating backed up by “headlines” flagging up the major points needing improvement, with examples) will form part of the advocate’s appraisal with their line manager who can recommend further training or development.
The choices ahead
How does the BSB evaluate quality? If they can bring the judges on board, that will provide an observation on how an advocate performs in his or her actual practice. The success of this depends on achieving several evaluations for each candidate. Should there be a further assessment by assessors in court, in effect double-checking the judges, or should one use the simulated advocacy option? If the latter is chosen, then which scenario should be used? Is it better to use a “real” expert witness or an actor simulating a child victim of sexual abuse?
The other great question is what one does with those found to be under-performing. Barristers have no line manager with whom to discuss training and development. What remedies should be available to those who need help? Few believe that barristers can be taught advocacy after new practitioner level. Apart from some specialised seminars, there are almost no participative advocacy courses available where a practitioner gets on his or her feet and gets feedback. The regulators are rightly concerned about threshold competencies in the magistrates’ court and Crown Court, and everyone would like to see some common standards agreed by all sides of the legal profession. It is obvious that witness handling, and cross-examination in particular, is the skill which is most valued but performed least well: the judges noted it in their JEs, HM CPS Inspectorate noted it in its “Report of the thematic review of the quality of prosecution advocacy and case presentation” (the CPS has already begun to address the problem), and the level 2 results above speak for themselves.
On the positive side, Cardiff held out the prospect for advocates who feel stuck at a particular level of work and who could demonstrate by the QA process that their abilities are greater than previously assumed. Quality assurance is nothing to be afraid of.