Select Committee on Education and Employment Minutes of Evidence



MEMORANDUM FROM THE QUALITY ASSURANCE AGENCY FOR HIGHER EDUCATION (HE39)

INTRODUCTION

  1.  It is some 15 years since there was last a major Select Committee inquiry into higher education. The intervening period has seen a doubling in the number of higher education students. Higher education has changed fundamentally from a system catering for a relatively small elite to one based on mass participation. In this note the Agency reflects on the consequences of this fundamental change for assurance of quality and standards. Before turning to questions posed by the Committee about how quality should be assessed, it sets the context by considering why there is now a need for external assurance of quality and standards.

PART 1: WHY ASSURE QUALITY AND STANDARDS?

The Context

  2.  Academic standards are not a private matter. A substantial proportion of the population is now touched by higher education, as students, parents, employers and teachers. The transition of higher education from elite and exclusive, to mass and inclusive provision has transformed its relationship with the society that it serves. There are new stakeholders with expectations to be met and information needs to be satisfied: the greatly increased number of young people who are the first generation of their family to go to university, employers recruiting in the graduate labour market for the first time, and mature students looking to higher education to equip them with the skills to cope with uncertain and rapidly changing job prospects.

  3.  The public cares about academic standards. Employers, parents and young people committing three years of their lives to study need to have confidence that high standards are set by universities and colleges, and are achieved by their students. And all stakeholders wish to know how those standards relate to their needs for skilled staff, for successful careers, and for personal fulfilment.

  4.  In a small, elite university system, academic standards and values were implicit. Those who recruited graduates to blue-chip companies, to the professions and to public service were themselves graduates. Teachers in selective schools who advised their pupils where to study were a part of the same establishment. The value added by a higher education was well understood.

  5.  In an egalitarian, mass participation system, all that changes. Standards and values must be made explicit to those investing their time and money in study, and above all to those employers who will not know from personal experience of the value that higher education can add. Understanding of the benefits cannot be shared informally through a narrow social network, it must be widely available to all with an interest. As Lord Dearing put it in his report on higher education: "there is much to be gained by greater explicitness and clarity about standards and the levels of achievement required for different awards."

  6.  The transition to mass higher education is a global phenomenon. In both developed and developing countries higher education is expanding rapidly as governments identify high level technical and intellectual skills as being the key to success in knowledge based economies.

  7.  In most countries, universities find themselves subject to three pressures. First, there is the pressure to increase numbers of students. Second, governments find themselves unable to support financially a mass participation system at the rate per student that was affordable in a smaller, elite system. Third, universities are called upon to demonstrate that standards are being maintained and enhanced.

  8.  The response to the pressures is similar in most countries.

  9.  First, there is a greater emphasis on the student as an active, and to an extent autonomous, learner, rather than a passive recipient of teaching. New learning strategies, including distance learning and the use of electronic materials, are developing from this change of emphasis.

  10.  Second, there is substantially increased participation of private finance in higher education. In some countries this manifests itself as a growth of private colleges operating as profit making enterprises. (Many UK universities franchise programmes to such institutions, notably in countries such as Malaysia). In the United Kingdom private finance plays a part through the introduction of fees paid by students, and of public/private partnership approaches to some capital projects.

  11.  Third, many countries have established national organisations to provide an independent evaluation of quality and standards in higher education institutions. Initiatives to establish such bodies have come from governments, from within the higher education sector, or both. The International Network for Quality Assurance Agencies in Higher Education now has affiliates from 47 countries throughout the world.

The Quality Assurance Agency

  12.  The Quality Assurance Agency for Higher Education was established in 1997 to provide an integrated quality assurance service for higher education institutions throughout the United Kingdom. Its establishment was recommended by a Joint Planning Group that was set up, with the approval of Government, by the higher education funding councils and the representative bodies of the institutions of higher education.

  13.  Higher education had become subject to three different forms of external scrutiny of academic provision. First, the universities themselves had established the Academic Audit Unit (later incorporated into the Higher Education Quality Council) in the late 1980s to report on the overall management of quality and standards by universities. Second, the Further and Higher Education Act 1992, which dissolved the Council for National Academic Awards, placed upon the Funding Councils a statutory responsibility to assess the quality of the provision that they funded. Each Funding Council established a quality assessment division for this purpose. Third, there was accreditation of certain programmes by professional and statutory bodies, for whom the programmes formed a part of the process leading to acquisition of a professional title. The Joint Planning Group recommended the establishment of a single Agency to integrate, as far as possible, these systems so as to achieve a greater efficiency and to minimise the burden of scrutiny on institutions.

  14.  In 1997 the National Committee of Inquiry into Higher Education, under the Chairmanship of Lord Dearing, made a number of specific recommendations about quality and standards. These have played a major part in setting the agenda of work for the Agency.

  15.  The Agency is an independent body, established as a company limited by guarantee and having charitable status. The members of the Company are the bodies representing higher education institutions, but the Board is structured so as to guarantee the independence of the Agency. Four members of the Board are nominated by the representative bodies, four are nominated by the Funding Councils, and six (of whom one must be the Chairman) are independent members appointed by the Board itself. The independent members are chosen so as to be broadly representative of employers of graduates. Two observers attend Board meetings, to represent the interests of students and of government education departments. The Agency inherited the staff and functions of the former Higher Education Quality Council, and of the Quality Assessment Division of the Higher Education Funding Council for England.

  16.  The Agency's main business is to review and report upon the performance of institutions of higher education in respect of quality and standards. Further details of this work are given below. In addition the Agency advises Government on the grant of degree awarding powers and university title, it manages the scheme for recognition of Access to Higher Education courses and it audits academic partnerships between UK institutions and overseas colleges that offer teaching leading to the degrees of the UK institutions.

  17.  The Agency has two main funding streams. One comprises subscriptions paid by all institutions of higher education in the United Kingdom. The other is income derived from contracts with Higher Education Funding Councils to carry out, on their behalf, reviews of provision at subject level to enable the Funding Councils to discharge their statutory responsibilities. In future, the Agency expects to contract also with the NHS Executive for the review of higher education programmes funded by the NHS.

The mission of the Agency

  18.  The Agency's mission is to promote public confidence that quality of provision and standards of awards in higher education are being safeguarded and enhanced.

  19.  The promotion of public confidence involves the provision of public information. To this end, all of the Agency's reports on institutions and their subject provision are published.

  20.  The provision of public information involves more than reporting on the performance of individual institutions. The key reference points against which the judgements in reports are made must also be understandable and understood. The Agency works with the higher education sector in defining expectations about standards in an accessible manner.

  21.  For each academic discipline subject benchmark statements are being produced. These are statements that represent general expectations about standards for the award of qualifications at a given level in a particular subject area. Benchmarking is not about listing specific knowledge: that is a matter for institutions in designing individual programmes. It is about the conceptual framework that gives a discipline its coherence and identity; about the intellectual capability and understanding that should be developed through the study of the discipline to the level in question; the techniques and skills which are associated with developing understanding in the discipline; and the level of intellectual demand and challenge which is appropriate to study of the discipline to the level in question.

  22.  Benchmark statements identify the generic, or transferable skills developed by the study of each discipline. Study in any academic discipline develops high level intellectual skills. Students learn to analyse and interpret data, to formulate and test hypotheses, and to apply critical assessment and judgement. They apply these skills in circumstances where the subject matter is not only complex, but information may well be uncertain, ambiguous or incomplete. They learn to express themselves through the use of lucid, coherent and concise arguments. Application of these skills is not limited to the academic field in which they were first developed. They are transferable to many contexts, not least employment. They are the foundation of the problem solving and communication skills that employers are buying when they recruit graduates. A challenge for the academic community is to ensure that students develop awareness of how their skills may be used more widely, and that those skills are explained in terms that are meaningful to a non-specialist audience.

  23.  Benchmark statements provide a broad indication of the transferable skills developed through study in each discipline. These will be set out more specifically in the specifications of individual programmes of study.

  24.  The Agency is developing also the framework of higher education qualifications proposed in the Dearing Report. For review purposes this provides reference points to be used to determine whether the intended outcomes for programmes, and actual student achievement, are appropriate to the level of the qualification awarded. The framework helps provide public assurance that qualifications bearing similar titles represent similar levels of achievement.

Attitudes to External Quality Assurance

  25.  It would be naive to expect that external scrutiny of academic activities would be welcomed universally by those subject to it. Nevertheless, there is a general acceptance that it is a necessary process and that by identifying and disseminating good practice it plays a valuable role in enhancing the quality of provision.

  26.  There is also a recognition that higher education consumes a significant amount of public funds and that it is proper that there should be accountability for this. However, value for money, be it public or private, is not the only thing that drives quality assurance. It is about ensuring that standards are properly set and achieved. That is why the Agency looks at the ways in which higher education institutions define the outcomes they expect of their graduates, whether teaching is designed to deliver those outcomes, and whether achievement of them is properly assessed. In that way public assurance is provided that standards are being maintained.

  27.  In general, institutions welcome the opportunity to demonstrate that they are achieving high standards, and that they are providing high quality learning opportunities to their students. Considerable pride is taken in good results achieved in the Agency's reviews, and such results are frequently used by institutions in their publicity.

  28.  Institutions, and individual departments, will often acknowledge that external scrutiny provides a helpful stimulus to reflection on performance and identification of improvements.

  29.  The new method of quality assurance that is now being introduced by the Agency has been welcomed for the emphasis that it places on points of reference determined through a process that involves the academic community. Overall, these are the subject benchmark statements referred to above, and within institutions, the programme specifications developed by course teams.

  30.  Nevertheless, there are some critical voices. Writing in the Independent on 13 January 2000 Alan Ryan, the Warden of New College Oxford said: "If the Committee of Vice Chancellors and Principals had any gumption, the QAA would be closed tomorrow."

  31.  It is significant that this criticism was not of the principle of external scrutiny. The article praised the external scrutiny provided previously by the former Council for National Academic Awards. The criticism was of bureaucracy, what was described as "truck loads of supporting bumph". That is a valid criticism. The systems that the Agency inherited from its predecessors, and which, for the time being, it has continued to operate, involve the assembly of large quantities of documentary evidence which is placed in the "base room" used by a review team during the course of its visit.

  32.  That approach is driven by a review method that takes a snapshot of academic provision, or of overall academic management, over a short period of three or four days. Information has to be assembled artificially, for no purpose other than the review, and kept at hand in case it is required. Inevitably, much of it is not required, thus giving rise to complaints of wasted effort.

  33.  The new method being introduced by the Agency avoids this problem. The time spent on review will no longer be concentrated into a single week, but spread intermittently over a longer period. Reviewers will deal with naturally arising rather than artificially assembled evidence. They will time their visits to an institution to coincide with internal events for which the institution has to assemble evidence for its own purposes (for example its own internal review of provision). At the heart of the process will be the institution's own self-evaluation of its provision. Reviewers will seek to test, and where possible verify, a self-evaluation. The Agency is confident that this approach will enable external scrutiny to operate with a lighter but nonetheless effective touch, and that it will reduce the burden that is now perceived to fall on academic departments whose work is subject to review.

  34.  There are other criticisms that are less valid. These may be couched in the language of concern about bureaucracy, but in reality they are objections to the principle of any external scrutiny. Critics will sometimes claim that the process fails to add value, whilst ignoring the very real enhancement benefits that flow from disseminating good practice, and promoting critical reflection on what academic programmes are seeking to achieve. Frequently, they assert that all would be well if only academics were left to get on with their work, and market mechanisms were allowed to act as the sole regulator.

  35.  The market is an imperfect mechanism for assuring quality and standards in higher education. Many potential students, and many potential employers, will not have available to them the information that will allow them to make valid and well informed choices between different providers. An independent means of providing that information is needed. Most people make the choice of a degree course only once; and a mistake in the selection of the most appropriate institution may be both difficult and expensive to put right. Equally, many students, especially mature, lifelong learners, may not have the mobility of a young school leaver. Such people need reliable information about what may be the only institution to which they can apply.

  36.  Students and employers have reasonable expectations that there will be available reliable, independently verified information about programmes of study. Within a global, knowledge based economy it is vital that there is credible assurance of the standing of the UK higher education brand.

  37.  The minority of academics who reject the principle of external quality assurance seem sometimes to be expressing a nostalgia for the departed days of a small and elite higher education system. In a small elite system the external examiner alone may have been a sufficient assurance of standards. In a large complex system, catering for mass participation, that is no longer the case. Just as systems of student financial support have had to adjust to reflect the realities of a mass participation system, so quality assurance has had to adapt.

  38.  The relationship between students and their teachers has altered as the student body has become larger, has grown to include more mature students and has become more representative of society at large. In common with all other professionals, university teachers are learning that deference to title or position has been replaced with respect for ability and quality of service. Public assurance of quality allows students and employers to make informed choices between institutions and enables academics to earn that respect. Explaining what standards mean, and confirming that they are achieved by students, does no more than meet the proper expectations of transparency and accountability that a modern democracy has of those who provide it with professional services.

PART 2: HOW QUALITY AND STANDARDS ARE ASSURED

  39.  The Agency has been developing and trialling a new quality assurance method. It will be used for the first time in Scotland in the academic year 2000-01, and throughout the United Kingdom from 2001-02. The following description of how quality and standards are assured is based upon that new method.

Defining quality

  40.  There are two dimensions to the quality of higher education. The first is the appropriateness of the standards set by the institution. The second is the effectiveness of teaching and learning support in providing opportunities for students to achieve those standards.

  41.  The new review method developed by the Agency seeks to assure quality by addressing three inter-dependent areas:

    —  reporting on programme outcome standards is concerned with the appropriateness of the intended learning outcomes set by the institution (in relation to relevant subject benchmark statements, qualification levels and the overall aims of the provision), the effectiveness of curricular content and assessment arrangements (in relation to the intended learning outcomes), and the achievements of students;

    —  reporting on the quality of learning opportunities in a subject is concerned with the effectiveness of teaching and of the learning opportunities provided; with the effectiveness of the use of learning resources (including human resources); and with the effectiveness of the academic support provided to students to enable them to progress within the programme;

    —  reporting on institutional management of standards and quality is concerned with the robustness and security of institutional systems relating to the awarding function. This involves, in particular, arrangements for dealing with approval and review of programmes, the management of credit and qualification arrangements and the management of assessment procedures.

Assessing quality

  42.  There is a proper expectation that any system of external quality assurance will be as efficient as possible, will consume no more overall resource than is necessary, and will evolve from one of universal intensity to one in which intervention is in inverse proportion to success.

  43.  To this end, the method used by the Agency will:

    —  provide transparency of process through the use of qualifications frameworks, subject benchmark statements, programme specifications and a Code of Practice addressing good practice in academic management;

    —  involve exchange of information between review of individual subjects and review of whole institutions, thereby reducing duplication to a minimum;

    —  allow institutions to negotiate the timing and aggregation of subject reviews. This enables external review to be aligned with internal review, or with accreditation by professional or statutory bodies, should an institution so wish;

    —  facilitate alignment of subject review with internal processes by spreading reviews over a period rather than imposing a "snapshot" style review visit. Thus evidence from internal processes can be made available to reviewers, so that the need for the preparation and assembly of large amounts of documentation in advance of a visit is removed;

    —  ensure that the amount of time taken to conduct a subject level review is the minimum necessary to enable reliable judgements to be made.

  44.  Self-evaluation is central to, and is the starting point for, the process of review. It encourages the institution to evaluate the quality of the learning opportunities offered to students, the standards achieved by them, and the effectiveness of arrangements to manage quality and standards. It provides an opportunity for the institution to reflect on "what do we do?", "why do we do it?" and "why do we do it in the way that we do?"

  45.  The self-evaluation document provides a framework for a process of review based on the testing and verification of statements made by the institution. The document should reflect on and evaluate both strengths and weaknesses, indicate the changes that have taken place since earlier external reviews, and consider what it may be necessary to change in the future.

  46.  The reviewers who assess quality and standards are the peers of those whose work is under review. Subject level review is carried out by teams of subject specialists, drawn mainly from the higher education sector. In some subject areas, where there are specific occupational pathways followed by significant numbers of students, reviewers come also from industry, commerce and the professions. At the level of the whole institution, reviewers are persons holding senior posts, such as Pro Vice Chancellor or Dean.

  47.  Peer review enables judgements to be made by those who understand the subject, the teaching and learning processes, or the academic management systems under scrutiny. It enables judgements to be credible to, and to command the respect of, the academic community. It acts as a means of disseminating good practice. However, for a peer review process to have credibility with external stakeholders, such as employers and potential students, judgements must be made in a transparent manner and reported publicly; and the process itself must be seen to be accountable to a Board having a demonstrably independent membership.

Judgements on standards

  48.  In each institution, and for each subject area, the Agency will make a single, threshold judgement about academic standards. Having regard to all of the matters listed below, reviewers will decide whether they have confidence in the academic standards of the provision under review. A "confidence" judgement will be made if reviewers are satisfied both with current standards, and with the prospect of those standards being maintained into the future. If standards are acceptable, but there is doubt about the ability of the institution to maintain them into the future, reviewers will make a judgement of "limited confidence". If, in relation to any of the matters listed below, reviewers feel that standards are not being achieved, then their overall judgement will be that they do not have confidence in the academic standards of the provision under review.

  49.  Reviewers will assess, for each programme, whether there are clear learning outcomes which appropriately reflect applicable subject benchmark statements and the level of the award. Subject benchmark statements represent general expectations about standards in an academic discipline, particularly in relation to intellectual demand and challenge. The qualifications framework sets expectations for awards at a given level more generally. Reference points are thereby provided to assist reviewers in determining whether provision is meeting the standards expected by the academic community generally, for awards of a particular type and level. If the intended learning outcomes were found not to match those expectations, it is unlikely that reviewers could have confidence in the standards of the provision. An example of potential failure would be if a postgraduate programme had learning outcomes that were set at undergraduate level only.

  50.  Making consistent judgements about the appropriateness of the intended outcomes of academic programmes does not mean that reviewers will look for a dull uniformity at the expense of intellectual curiosity. Differing institutional aims within a plural sector will promote diversity. The Code of Practice will have a section on programme approval that will facilitate the design of innovative and inter-disciplinary provision.

  51.  Reviewers will assess whether the content and design of the curriculum are effective in achieving the intended programme outcomes. It is the curriculum that ensures that students are able to meet the intended outcomes of the programme. Providers should be able to demonstrate how each outcome is supported by the curriculum. "Curriculum" for this purpose includes both the content necessary to develop understanding and the acquisition of knowledge, and the opportunities to develop practical skills and abilities where these are stated as intended outcomes. If significant learning outcomes were found to be unsupported by the curriculum, it is unlikely that reviewers could have confidence in the standards of the provision.

  52.  Reviewers will assess whether the curriculum content is appropriate to each stage of the programme, and to the level of the award. Providers should be able to demonstrate how the design of the curriculum secures academic and intellectual progression by imposing increasing demands on the learner, over time, in terms of the acquisition of knowledge and skills, the capacity for conceptualisation, and increasing autonomy in learning.

  53.  Reviewers will assess whether assessment is designed appropriately to measure achievement of the intended outcomes. Providers should be able to demonstrate that achievement of intended outcomes is assessed, and that, in each case, the assessment method selected is appropriate to the nature of the intended outcome. There must also be confidence in the security and integrity of the assessment process, with appropriate involvement of external examiners. An assessment strategy should also have a formative function, providing students with prompt feedback, and assisting them in the development of their intellectual skills. There should be clear and appropriate criteria for different classes of performance, which have been communicated effectively to students. If significant learning outcomes appear not to be assessed, or if there are serious doubts about the integrity of the assessment procedures, it is unlikely that reviewers could have confidence in the standards of the provision.

  54.  Reviewers will assess whether student achievement matches the intended outcomes and level of the award. Reviewers will consider external examiners' reports from the three years prior to the review, and will themselves sample student work.

  55.  Where a review covers a number of subjects, separate judgements on standards will be made in respect of each subject. Where programmes are offered at more than one level, separate judgements will be made in respect of each level, if there are significant differences between them. In all cases, reports will contain a narrative commentary on strengths and weaknesses in relation to each aspect of the standards judgement.

Judgements on the quality of learning opportunities

  56.  In each institution, and for each subject area, the Agency's judgements about the quality of the learning opportunities offered to students will be made against the broad aims of the provision and the intended learning outcomes of the programmes.

  57.  Reviewers will assess the effectiveness of teaching and learning, in relation to curriculum content and programme aims. They will consider large and small group teaching, practical sessions, directed individual learning, the integration of skills within curricula, and distance learning. Reviewers will evaluate the breadth, depth, pace and challenge of teaching; whether there is a suitable variety of teaching methods; and the effectiveness of the teaching of subject knowledge and of subject specific, transferable and practical skills.

  58.  Reviewers will evaluate student progression by considering recruitment, academic support, and progression within the programme. They will assess whether there is appropriate matching of the abilities of students recruited to the demands of programmes; and whether there are appropriate arrangements for induction and the identification of any special learning needs. They will assess the effectiveness of academic support to individuals, including tutorial arrangements and feedback on progress. They will consider general progression within programmes, and wastage rates.

  59.  In making judgements about learning resources, reviewers will consider how effectively these are utilised in support of the intended learning outcomes of the programmes under review. Consideration will be given to the use of equipment (including IT), accommodation (including laboratories) and the library (including electronic resources). Reviewers will look for a strategic approach to the linkage of resources to programme objectives. Effective utilisation of academic, technical and administrative staff will be considered, as will the matching of the qualifications, experience and expertise of teaching staff to the requirements of the programmes.

  60.  Reporting on the quality of learning opportunities will place each of the three aspects of provision into one of three categories, failing, approved or commendable, and will be made on the following basis:

    —  provision is failing because it makes a less than adequate contribution to the achievement of the intended outcomes. Significant improvement is required urgently if the provision is to become at least adequate. In the summary report, this judgement will be referred to as "failing";

    —  provision enables the intended outcomes to be achieved, but improvement is needed to overcome weaknesses. In the summary report, this judgement will be referred to as "approved". The summary will normally include a statement containing the phrase "approved, but . . .", which will set out the areas where improvement is needed;

    —  provision contributes substantially to the achievement of the intended outcomes, with most elements demonstrating good practice. In the summary report, this judgement will be referred to as "commendable".

  61.  Within the "commendable" category, reviewers will identify any specific features of the aspect of provision that are exemplary. To be deemed "exemplary" a feature must:

    —  represent sector-leading best practice; and

    —  be worthy of dissemination to, and emulation by, other providers of comparable programmes; and

    —  make a significant contribution to the success of the provision being assessed. Incidental or marginal features do not qualify for designation.

  62.  The characteristics of exemplary features will, by their nature, vary between institutions and programmes. The criteria listed above will ensure that features identified as "exemplary" will be broadly comparable in weight and significance.

  63.  If provision is found to be failing in any aspect of quality, or if reviewers have no confidence in the standards achieved, the provision will be regarded, overall, as failing. It follows that all provision that is not failing is approved. The report of the review will state whether or not provision is approved.
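
  The rule described in the preceding paragraph can be expressed compactly. The sketch below is purely illustrative and does not form part of the Agency's method or software: the function name, the category labels and the example values are assumptions made only for the purpose of illustration.

    # Illustrative sketch only: how an overall judgement of "failing" or
    # "approved" follows from the aspect judgements and the standards judgement.
    def overall_judgement(aspect_judgements, standards_confidence):
        # standards_confidence: "confidence", "limited confidence" or "no confidence"
        if standards_confidence == "no confidence":
            return "failing"
        # Provision failing in any aspect of quality is failing overall.
        if any(judgement == "failing" for judgement in aspect_judgements):
            return "failing"
        # All provision that is not failing is approved.
        return "approved"

    # Example: sound standards, but one failing aspect of learning opportunities.
    print(overall_judgement(["commendable", "approved", "failing"], "confidence"))
    # -> failing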

Judgements on institutional management of quality and standards

  64.  Review by the Agency at the level of the whole institution is concerned particularly with the exercise by an institution of its powers as a body able to grant degrees and other awards. It results in reports on the degree of confidence that may reasonably be placed in an institution's effectiveness in managing the academic standards of its awards and the quality of its programmes.

  65.  Review will address the robustness and security of the systems supporting an institution's awarding function. In most cases, these will relate to the exercise of the institution's own powers. Where an institution does not have direct awarding powers, the review will consider the exercise of any powers delegated under a validation or other collaborative agreement. Review will be concerned with:

    —  Procedures for approval, monitoring and review of academic programmes.

    —  Procedures for acting on the findings of external examiners, subject reviews, and other external scrutinies.

    —  The overall management of assessment processes.

    —  The overall management of any credit systems.

    —  The management of collaborative arrangements with other institutions.

  66.  If an institution has extensive partnerships, for example with further education colleges or overseas colleges, there may be a separate review of such collaborative activity to establish the extent to which an institution:

    —  is assuring the quality of programmes offered by a partner organisation for the institution's own awards; and

    —  is ensuring that the academic standards of its awards gained through study in partner organisations are the same as those applied within the institution itself.

  67.  Reports on whole institutions will be concerned with the effectiveness of an institution's systems for managing the quality of its provision, the standards of its awards and the security of its awarding function. The report will identify both good practice and matters where the Agency believes that improvement action should be taken. Action points will be categorised as essential, advisable or desirable on the following basis:

    —  Essential—matters which are currently putting academic standards and/or quality at risk, and which require urgent corrective action.

    —  Advisable—matters which have the potential to put academic standards and/or quality at risk, and which require either preventive, or less urgent, corrective action.

    —  Desirable—matters which have the potential to enhance quality and/or further secure academic standards.

  68.  Reports will conclude with a statement of the degree of confidence that the Agency considers may reasonably be placed in the continuing effectiveness of the institution's quality assurance arrangements.

  69.  A statement that confidence could not be placed in institutional quality assurance arrangements should be a rare occurrence. Such a statement would be likely to result from a number of matters requiring "essential" action, the combined effect of which was to render ineffective the quality assurance arrangements as a whole.

  70.  A statement that limited confidence could be placed in institutional quality assurance systems would normally be made if there was one, or a small number of matters requiring "essential" action, and it was clear that the failings could readily be put right. Such a statement might result also if there were no "essential" action points, but a large number of matters where action was "advisable". The judgement would depend on the number, nature and weight of the "advisable" action points.

  71.  In all other cases a statement will be made that broad confidence can be placed in institutional quality assurance systems. Use of the term "broad confidence" ensures that an institution is not placed in a lower category on account of minor weaknesses only. The narrative of the report will discuss strengths and weaknesses, and may identify also exemplary features of the arrangements.

PART 3: RESULTS FROM THE EXISTING METHOD OF QUALITY ASSURANCE

  72.  All of the review activities of the Agency result in published reports. The published results, and especially the numerical graded profiles from the existing method of subject review, are used by journalists to produce league tables. There are limits to the validity of comparisons made on this basis, as reporting is primarily against the objectives set by each institution for its own provision, and is not against a universal standard. Appended is a note published by the Agency in June 1999 on interpretation of the numerical graded profiles.

  73.  Many institutions have secured high scores in all aspects of the graded profile, leading to some claims that institutions have "learned to play the game". In this case the "game" is about improving the quality of the learning opportunities available to students, and ensuring the maintenance of standards. If universities and colleges are getting better at that game by taking seriously their responsibilities for quality and standards, then that is to be welcomed. It is also confirmation that the enhancement role of quality assurance is alive and well, as institutions develop mechanisms to promulgate good practice internally.

  74.  Despite the generally good results of subject review, there remain areas of weakness in a small minority of provision. The more serious weaknesses concern standards. There are failures of curriculum design, where the content or level fails to match intended outcomes of programmes. There are some weaknesses in assessment, characterised by a failure to ensure that assessment adequately measures achievement of intended learning outcomes. The new quality assurance method, with its emphasis on standards, will give a sharp focus to these issues, and should provide a major stimulus to improvement where that is needed.

  75.  Regrettably, these failings are found disproportionately in higher education programmes delivered through further education colleges. In some cases this must give rise to a question of whether the college has the capacity to deliver such programmes. The question of institutional capacity to deliver higher education programmes successfully will need to be considered in any strategy for expanding the role of the further education sector in this area.

  76.  This is not to say that the further education sector has no role to play in the delivery of higher education. Further education colleges can provide an important gateway to higher education for many who do not have ready access to a university. And the best further education colleges do very well indeed: one of the few institutions to secure the highest marks in every aspect of review in art and design was from the further education sector. The models for successful delivery of higher education in further education colleges are there; those whose performance is now disappointing must take urgent steps to emulate them.



QUALITY ASSURANCE AGENCY FOR HIGHER EDUCATION

GRADED PROFILES: INTERPRETING THE NUMBERS

  The Agency carries out assessments of higher education provision in England and Northern Ireland on a subject by subject basis. The programme of subject review, commenced originally by the Higher Education Funding Council for England, will cover all subjects taught in higher education institutions in a cycle lasting from 1993 to 2001. After this cycle is completed, subject review is due to be replaced by a new quality assurance method covering all provision throughout the United Kingdom.

  The results of these reviews, known also as Teaching Quality Assessments, are published and each assessment is summarised by a "graded profile", sometimes referred to as a "TQA score". Published TQA scores are used by some national newspapers in constructing "league tables" of higher education institutions. Profiles are not designed for translation into league tables, so those wishing to use them in this way should be aware of their limitations.

  This note is intended to assist those wishing to make use of the graded profiles, either as a guide to the quality of an individual programme, or in making comparative judgements between programmes and the institutions that provide them. The note does not deal with the results of Scottish and Welsh subject reviews, as these are expressed largely descriptively, rather than numerically.

  Anyone wishing to build up a complete picture of the quality of teaching and learning in a higher education institution should be aware that TQA reports are not the only source of information. The Agency carries out audits, which result in published reports on the overall academic management of institutions, and on collaborative links with partner organisations overseas. Institutions offering programmes of initial teacher education are reported upon by OFSTED on behalf of the Teacher Training Agency. Many programmes are accredited by professional bodies, which in some cases publish reports.

What is TQA for?

  Subject review is carried out for three main reasons:

To meet a statutory requirement.

  The Funding Council is obliged, by the Further and Higher Education Act 1992, to secure that provision is made for assessing the quality of education provided in institutions for whose activities it provides financial support. This enables the Funding Council to ensure that public money is not wasted on unsatisfactory provision.

To provide public information.

  Information about individual programmes is helpful to potential students, and those who advise them, when applying to enter higher education. Information about programmes is also helpful to employers who recruit graduates, and to professional bodies who recognise some higher education qualifications that are relevant to their field of activity.

To help institutions enhance the quality of their provision.

  An independent evaluation of the strengths and weaknesses of programmes assists institutions in learning from good practice and addressing points of relative weakness.

What does TQA measure?

  TQA does not make judgements against a single standard that is of universal application in each subject. The aims and objectives of programmes having the same or similar subject titles, but offered in different institutions, will vary. Programmes will reflect the particular research interests of individual institutions, and some may have more explicitly vocational aims than others.

  In common with most qualifications, degrees represent a range of attainment, not a single absolute level. Whilst a broad comparability of degree standards is maintained through the use of external examiners, within the range of achievement that may be represented by a degree there can be some variation in demand between programmes. In the future, the comparability of standards now achieved through the external examiner system will be reinforced by the publication by the Agency of subject benchmark information.

  The graded profile relates to the aims and objectives set by the subject provider, and so should be read in conjunction with those aims and objectives. Scores do not tell the reader how a programme is performing in relation to an external standard, they tell the reader how well the institution is doing in terms of meeting the objectives it has set for itself. Within the legitimate range of achievement represented by a degree, it will be harder to achieve high scores in the profile against very demanding objectives than it would be against less demanding objectives.

  When TQA scores are used in league tables to make comparisons between different institutions, it is important to remember that what is being measured is performance against each institution's own objectives.

  Measurement is made by considering six aspects of provision. The aspects are:

    —  Curriculum design, content and organisation.

    —  Teaching, learning and assessment.

    —  Student progression and achievement.

    —  Student support and guidance.

    —  Learning resources.

    —  Quality assurance and enhancement.

  A full statement of the factors that are taken into account in assessing each aspect of provision can be found in the "Subject Review Handbook" published by the Agency. Each aspect is graded on a scale of 1 to 4. The scores relate specifically to the aims and objectives of the programmes.

  Aims are usually set at a higher level of generality than objectives. For example, the aims of a suite of degree programmes in general engineering in one institution are to provide:

    "A broader engineering education than is offered in single subject departments, and to provide sufficient depth of knowledge to satisfy the accreditation requirements of the professional engineering institutions."

  A more specific objective of the same group of programmes is that on graduation a student will have:

    "Developed the ability to apply knowledge and understanding in the process of engineering design."

  Aims will vary with the level of the programme. For example, an MSc programme in media and communications taught in the Department of Social Psychology of a major university aims:

    "To provide a high quality postgraduate education which introduces students to major social scientific approaches to media and communications;" and "to provide a research training, recognised by the ESRC."

  By contrast the aims of a further education college offering a range of HND programmes in communication and media studies include:

    "To provide the training, education and skills experience that will enable students to work effectively and cohesively as part of a technical crew."

  Within first degree programmes there can be differing objectives depending upon the intended career path of the student. For example, one four-year accredited MEng programme has as an objective:

    "To provide sufficient breadth and depth of study to satisfy the requirements of the professional institutions, leading to chartered status."

  In the same department, a three-year non-accredited BSc programme has the objective:

    "To provide a shorter route for students who require a sound engineering education, but do not immediately seek chartered status."

  It is against such aims and objectives that judgements are made in each aspect of provision. The following tests are applied:

    —  "To what extent do the student learning experience and student achievement, within this aspect of provision, contribute to meeting the objectives set by subject provider?

    —  Do the objectives set, and the level of attainment of those objectives, allow the aims set by the subject provider to be met?"

  The scores allocated reflect the following judgements:

    1.  The aims and/or objectives set by the subject provider are not met; there are major shortcomings that must be rectified.

    2.  This aspect makes an acceptable contribution to the attainment of the stated objectives, but significant improvement could be made. The aims set by the subject provider are broadly met.

    3.  This aspect makes a substantial contribution to the attainment of the stated objectives, however, there is scope for improvement. The aims set by the subject provider are met.

    4.  This aspect makes a full contribution to the attainment of the stated objectives. The aims set by the subject provider are met.

Does a number tell the whole story?

  Not necessarily! Scores of 4 or 1 are pretty unequivocal. Scores of 2 or 3 need to be looked at with greater care. It will be necessary to read the narrative of the report.

  A score of 3 means that the aspect in question is making a substantial contribution to the aims and objectives but there is scope for improvement. A student contemplating enrolling on a course might like to check on just what it is that was identified as needing improvement.

  For example, in one programme teaching, learning and assessment scored 3. Teaching quality was reported to be high and workshop sessions were used effectively to give students the opportunity to develop and apply skills. However, there were problems with assessment, particularly in relation to providing feedback to students. The assessors criticised this and reported that external examiners had also commented adversely upon it. In this case the grade 3 tells the reader that something is not as good as it could be. The narrative discloses that there was nothing wrong with the quality of teaching or the practical learning opportunities, but there was quite a serious problem with providing feedback to students from their assessments.

  In another example, a mathematics department scored only 2 in the curriculum aspect. The department provided three types of programme. The first was an access programme that assessors praised. The second was service teaching provided to science and engineering departments. Again, the assessors found this to be of quite good quality. However, the curriculum for the degree programme was weak, particularly in relation to the third year where assessors felt that it did not match up to the normal expectations of the final year of an honours degree programme. A student contemplating a single honours programme in mathematics would be best advised to look elsewhere. But a student needing to acquire the mathematical skills necessary to tackle a science or engineering programme could be well served by that institution.

  Each aspect, but particularly teaching, learning and assessment, is made up of a number of elements. The numerical summary is bound to reflect a balance of strengths and weaknesses. Similarly, a subject review may cover a range of programmes, some of which may be better than others. Again, the score will reflect the balance of strengths and weaknesses. A department with a weak degree programme could be saved from an unsatisfactory marking by strong HND provision, resulting in an overall grade of 2. A small, but poorly planned, master's programme could pull a department down from a 4 to a 3 on the curriculum aspect, despite excellent undergraduate provision.

  Any system of numerical reporting on diverse and complex provision is bound to contain an element of compromise and averaging. It is important for users of the information to read the narrative to find out just where any weakness actually lies.

USING THE GRADED PROFILE TO COMPARE INSTITUTIONS

  The graded profile is not designed for the purpose of making inter-institution comparisons. Nevertheless, it is inevitable that the figures will be used in this way; by institutions proclaiming good scores as evidence of their excellence, and by journalists constructing league tables for publication.

  Any league table based upon TQA scores should come with a health warning that explains that like is not being compared, strictly, with like. At best, at equivalent levels, broadly similar is being compared with broadly similar.

  The validity of comparisons can be enhanced in several ways.

Scope of comparison

  Comparisons by subject have a greater validity than comparisons between whole institutions. Universities are large and complex organisations; all will have areas of relative strength and relative weakness. Potential students will be as much interested in the subject to be studied as in the institution as a whole. Combining subject based TQA scores across an entire institution carries with it all the risks inherent in averaging averages.

  More recent information is of greater validity than older information. A subject review reports on the state of provision at the time at which the review took place. If this is several years ago, it is likely that there will have been changes. Weak provision may have been improved as a result of inadequacies having been identified and corrected. There will be some turnover of staff; those who achieved a result reported five years ago may have retired or moved on, and could have been replaced by either stronger or weaker staff. There is a strong case for using only information of comparable age and for regarding older information as being of primarily historical interest.

  When TQA was first introduced outcomes were expressed as "excellent", "satisfactory" or "unsatisfactory". Institutions carried out a self-assessment to determine the category into which each part of their provision would fall. HEFCE did not visit provision that was self-assessed as "satisfactory". The method was changed to one of universal visiting, with all provision being assessed directly by external assessors. The graded profile was first introduced in 1995 and has remained in use since then. There is no reliable means of equating a particular aggregate TQA score with an earlier classification of "excellent".

Size of provision

  There is no standard unit of assessment. Programmes may be looked at in groups of differing size, or even individually. Much will depend upon the way in which programmes are organised within an institution, and on relative student numbers. For example, an institution having five hundred modern language students could have five separate assessments in Russian, German, French, Italian and Spanish, thus resulting in five TQA scores. Alternatively, similar provision in another institution might be assessed as a whole as modern languages, thus producing only one TQA score. Some league tables attribute equal weight to each TQA score, regardless of the total student numbers involved. In such a case, an institution that was weak in modern languages would benefit by having it assessed as a single unit; whilst one that was strong would benefit by having it assessed as separate units. If TQA scores within a subject area are aggregated for the purposes of comparison, it would be appropriate to weight them by student numbers.
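
  The suggestion about weighting can be illustrated with a short sketch. The subjects, scores and student numbers below are invented for the purpose of the example; the calculation simply compares a simple average of TQA scores with an average weighted by student numbers.

    # Invented figures: (unit of assessment, aggregate TQA score out of 24, students)
    scores = [
        ("French", 22, 60),
        ("German", 20, 40),
        ("Russian", 18, 15),
    ]

    # A simple average treats each TQA score equally, regardless of student numbers.
    unweighted = sum(score for _, score, _ in scores) / len(scores)

    # A weighted average gives each score an influence proportional to student numbers.
    total_students = sum(students for _, _, students in scores)
    weighted = sum(score * students for _, score, students in scores) / total_students

    print(f"Unweighted mean: {unweighted:.1f}")  # 20.0
    print(f"Weighted mean: {weighted:.1f}")      # about 20.8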

A single number

  There is no satisfactory way of reducing the multi-faceted judgements represented by a graded profile into a single number that can then be used to construct a league table.

  A score of 21 made up of three 4s and three 3s might be regarded as quite a good result. However a score of 21 made up of five 4s and a 1 is unsatisfactory, as any score of 1 results in quality not being approved. This is not altogether a hypothetical example: there is a recent case of a profile of 4, 4, 4, 4, 3, 1. Despite the overall total of 20 the provision was deemed unsatisfactory. A simple addition can conceal significant weaknesses.
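
  The arithmetic above can be made concrete. The profiles in the sketch below are hypothetical (the second mirrors the "five 4s and a 1" case described in the text), and the check applies only the rule already stated: any aspect graded 1 means that quality is not approved, whatever the total.

    # Two hypothetical profiles with the same total of 21, but different meanings.
    profile_a = [4, 4, 4, 3, 3, 3]  # three 4s and three 3s: quite a good result
    profile_b = [4, 4, 4, 4, 4, 1]  # five 4s and a 1: quality not approved

    def quality_approved(profile):
        # Any aspect graded 1 means quality is not approved.
        return all(grade >= 2 for grade in profile)

    for name, profile in [("A", profile_a), ("B", profile_b)]:
        verdict = "approved" if quality_approved(profile) else "not approved"
        print(name, "total", sum(profile), verdict)
    # A total 21 approved
    # B total 21 not approved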

  Simple addition of scores across a profile assumes that equal weight should be attached to each aspect. When a graded profile is read as such there is no need for the aspects to be weighted, because each is considered individually. However, if scores are combined across the profile, issues of relative weight can arise.

  Each aspect is important in its own right. However, aspects may have differing levels of importance in relation to each other, depending upon the circumstances of individual programmes and students.

  If the curriculum is not designed so as to achieve the intended outcomes of a programme, or if assessment is incapable of measuring the attainment of those outcomes, then no amount of sympathetic student guidance is going to put that right. In that sense, the first two aspects of the profile are of fundamental importance to any programme.

  However, for a student choosing between two well designed programmes, other aspects could be critical. A confident student with good learning skills might attach high importance to learning resources that could be used independently. A student with less well developed study skills might regard student support as being of paramount importance.

  Whilst it is relatively easy to identify aspects where failings could be very damaging to a programme, the relative significance of other aspects will vary according to institutional context and individual student needs. Any aggregation of numbers across the profile, weighted or not, cannot reflect this.

  A further consideration is the impact of a bad score. A 1 in the profile means that quality is not approved. Effectively, for purposes of comparison, a 1 in the profile reduces the aggregate score to zero. However, what is the adverse weight that should be attached to a score of 2? This indicates that significant improvement could be made and, due to the effect of averaging within an aspect, could indicate that some part of an aspect is actually unsatisfactory.

  All of this illustrates the degradation of data that is bound to occur when a profile representing a complex series of judgements is reduced to a single number. Newspaper league tables are a fact of life, but those reading them should be aware of the over-simplification that results from converting profiles to a single number.

USING GRADED PROFILES TO DEVELOP AN INSTITUTIONAL QUALITY ENHANCEMENT STRATEGY

  Subject review results for an institution across a range of subjects form a matrix. Attention should be given not only to the profiles for each subject, but also to the vertical columns which show how well the institution is delivering within each aspect across its range of provision. This information is particularly useful when looking at performance in those aspects that depend to a large extent upon institution wide services, that is the second half of the profile.

  It is reasonable for a good institution to aspire to a high proportion of 4s in all columns. Every institution will have its strengths and weaknesses but, overall, strengths should predominate.

  For example, if across more than twenty graded profiles, an institution achieved grades of 4 in the quality aspect in only five profiles, whilst scoring only 2 in four profiles and 1 in one, this would suggest a fairly significant weakness in institutional quality assurance. The same institution might have slightly disappointing scores in the curriculum aspect (ten 4s but three 2s) and in teaching, learning and assessment (only eight 4s and one 2). Nevertheless, resources (sixteen 4s and no 2s) and student support (fifteen 4s and no 2s) might be good, with students progressing well (fifteen 4s and no 2s). This could paint a picture of an institution that is well resourced, and which is able to attract good students, perhaps due to a well-established reputation. Nevertheless, taken together these scores could suggest a potential problem, with relatively poor ratings in overall quality systems working through to some apparent under-achievement in curriculum design and teaching quality.

  Similar patterns may be seen in other institutions. One having twelve graded profiles over a three year period gained only five 4s in each of the curriculum and teaching/learning/assessment aspects. The quality column had only four 4s in it but two 2s.

  Even the best institutions can use the data from the graded profiles to identify areas for improvement. One institution, with eight graded profiles over the last three years, has dropped only eleven points from the maximum total available. However, six of these are in quality management and enhancement, suggesting that this is an area where some university wide attention might be needed. Similarly, another institution with nine graded profiles in the last three years dropped only fifteen points from the maximum available. Six of these were lost in the teaching, learning and assessment aspect. A reading of the reports shows that there were no problems with teaching and learning, but it is assessment that has some scope for improvement. Again, a university wide focus on this could pay dividends.
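
  A column-by-column reading of the kind described above can be set out as a simple tally. The matrix below is invented for illustration and is not drawn from any institution's actual results; the aspect order follows the six aspects listed earlier in this note.

    # Hypothetical graded profiles: rows are subjects, columns are the six aspects.
    profiles = {
        "History":     [4, 4, 3, 4, 4, 2],
        "Chemistry":   [3, 4, 4, 4, 4, 3],
        "Law":         [4, 3, 4, 3, 4, 2],
        "Engineering": [2, 3, 4, 4, 3, 1],
    }

    aspects = ["Curriculum", "Teaching, learning and assessment", "Progression",
               "Support and guidance", "Learning resources", "Quality assurance"]

    # Tally each column to see how the institution performs on each aspect
    # across its range of provision.
    for column, aspect in enumerate(aspects):
        grades = [profile[column] for profile in profiles.values()]
        fours = grades.count(4)
        low = sum(1 for grade in grades if grade <= 2)
        print(f"{aspect}: {fours} grade 4s, {low} grades of 2 or below")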

CONCLUSION

  The graded profiles are a rich mine of information. They have most to yield when they are used in the manner for which they were originally intended, looking separately at each aspect of provision to identify strengths and weaknesses. They can be a particularly powerful aid to enhancement of quality when used to identify areas of improvement that are common to a number of subjects and which could be addressed by an institution wide enhancement strategy. They enable students and employers to identify those institutions that have a consistently good record in delivering programmes that meet their intended outcomes.

  The profiles were not designed for use in the construction of league tables comparing whole institutions, and those using them for that purpose would be well advised to make clear the limitations and simplifications inherent in using the data in that way.

Quality Assurance Agency for Higher Education
February 2000


 
