The LSC was the only body trying to devise a scheme, as part of its “priority to ensure that it is securing value for money whilst maintaining a high standard of advocacy for its clients”. Ownership by the professions, though, “has always been the preferred outcome”, and the LSC, which will become a “Senior User” in the JAG process, approves of a single body taking responsibility for the operation of the final scheme, which will apply equally to all advocates. The LSC hopes that practice rights will be limited where competence is not achieved. The final scheme developed by the JAG should be “used as the contractual basis upon which advocates wanting to practise in publicly funded defence work demonstrate their competence”.

The LSC discussion paper rejects the argument that really weak advocates can be dealt with by a combination of professional rules and “market forces” (the assumption that solicitors will only brief the best). Despite both safeguards, “there remain advocates at all levels who appear in cases (from the simplest to the most complex) that are beyond their competence.” The paper cites anecdotal evidence that advocacy standards are declining across the board, and the results of a MORI poll commissioned by the Bar Standards Board (“BSB”) which found that roughly half of barristers believed the current system is ineffective at dealing with barristers who are below standard, incompetent or unethical. “What is required is a recognisable standard that gives consumers and procurers assurance that an advocate at the requisite level, and with the appropriate competence, is instructed on a case by case basis”. 

The criteria

The paper sets out 15 minimum requirements, including:

  • Simple to apply and outcomes are available to consumers
  • Competency-based, objectively measurable and complete
  • Independent and consistent assessments with sufficient assessors of the requisite calibre to develop and conduct the assessments. At the higher levels, practitioners would wish for assessments to be conducted by very senior practising advocates or judges. Those undergoing assessment must have confidence in the process of grading
  • The assessment process should be proportionate, and the minimum necessary to assess adequately the overall competence of an advocate
  • The scheme should cover all criminal advocacy funded by legal aid and, in due course, publicly funded family and civil advocacy

How to do it

The JAG will have to find some method of measuring quality which is accepted as valid by the advocates, by those who “consume” advocacy, and by the “Senior Users”. Broadly speaking, the choice is between watching an advocate on his or her feet in court and a simulated assessment. One can of course use both, as Cardiff did: the former took the form of judicial evaluations (“JE”), while the latter was conducted at a series of assessment days using carefully devised case studies, the text of which can be found in the annexes to the discussion paper. The CPS recently opted for something in between, namely assessors observing advocates conducting trial and non-trial matters in court.

There are advantages and disadvantages whichever system is chosen. One keeps an eye here on the LSC’s recommendation that the scheme should be “competency-based, objectively measurable and complete.” In the pilot, the assessors were able to conclude not only whether candidates passed or failed at the level on which they were being assessed but also whether they were performing at a higher level, as several did on [the penultimate] level 3 in the cross-examination. This is an obvious advantage to the barrister whose abilities exceed the level of work which he or she is currently receiving. Assessors can meet and discuss their approaches, and their results can be moderated, so that only one assessment session is required. Judges may be less likely to meet and moderate; being judges, that is a matter for them. The discussion paper recommended at least three JEs where JE is used. Simulated advocacy can be devised any way one likes. On the one hand, a candidate may complain: “Why am I doing a case study about this? I specialise in another area of crime.” On the other hand, a simulated assessment overcomes the problem that a real judge will not know how long the advocate has been given to prepare the brief and cannot know what the defendant’s instructions are. Real trials test barristers on the job, but the discussion paper recognised the risk of a disgruntled and convicted defendant asking under the Freedom of Information Act 2000 to see how the judge evaluated his counsel during the trial.

There are other issues involving the use of multiple choice tests (“MCTs”) and portfolios of work, both of which were used in the pilot project. The disadvantage of MCTs is that they provide only a limited indication of how well the candidate can apply and manipulate the knowledge being tested. Portfolios, as well as allowing candidates to present themselves in their best light, also gave them a chance to reflect on and relate the assessment to their own practice. The recommendation at the end of the pilot was that candidates at every level should produce a portfolio and perform a cross-examination; those at levels 1 and 2 should also undergo a multiple choice test, and those at level 4 should have at least three pieces of JE.

How much will it cost?

If simulated advocacy is used, someone will have to pay for the cost of the venue, the assessors, the actors, the expert witnesses and the like. The LSC estimates candidate fees at Level 1, £450-500; Level 2, £450-500; Level 3, £575-625; and Level 4, £600-650. Whatever the actual cost, it is difficult to see who would pay for this apart from the candidates themselves.

If you build it, will they come?

Some 227 candidates put themselves forward for assessment, but only 98 took part in the simulated advocacy. Twenty-three live assessment days were arranged at various centres, of which three were cancelled for lack of take-up. A number of letters of invitation went unanswered. Twenty of the original candidates who were not assessed were invited three times, 27 had two unaccepted invitations and 21 formally withdrew, though many more withdrew in effect by not replying or by declining the invitation. The attrition rate was highest for level 3. The submission of portfolios was also lower than hoped. The discussion paper accepts that this was a pilot and that barristers might well have prioritised preparing for real trials, but it is another indication of the practical problems in conducting assessments for some 5,000 criminal advocates.

What do the results tell us?

Relatively few level 1 candidates failed their level 1 assessments. It is thought that they showed some confidence in taking part at all, and by aiming at the lowest level they were likely to achieve success. Only 51 per cent of those who took the cross-examination at level 2 passed, the main reason being a lack of thoroughness in selecting the issues that were ripe for cross-examination; they failed properly to address the case theory and to test a number of assumptions made by the witness. Those at level 3 did well in cross-examination: 13 sat it and 10 passed, four of whom reached level 4 in ability. Those at level 4 who failed the portfolio did so primarily through misapprehension as to the detail required.

While emphasising that the numbers did not reach statistical significance, the report does set out the results by professional grouping. At level 1, solicitors did best in all areas apart from the “must-know” MCTs. At level 2, barristers had the lowest failure rate in witness handling and MCTs.

Deciding on the right standards of advocacy for everyone may have been a challenge for the JAG, but the really hard work now begins: how to assess and monitor, how to organise, and how to pay for a quality assurance scheme for several thousand criminal advocates.