COVID-19 Related Research Projects
1a. LC Calculated Grades: Teachers' Reflections on the Process and on Assessment
Project Directors: Audrey Doyle, Zita Lysaght and Michael O'Leary.
This project is focused on post primary teachers’ reflections on their experiences of estimating marks/ranks for their students as part of the LC2020 Calculated Grades process in schools. The project also explores teachers’ reflections on the role they play in assessment. A survey of a voluntary sample of teachers took place during the Autumn of 2020.
1b. LC2021 Accredited Grades
Project Directors: Audrey Doyle, Zita Lysaght and Michael O'Leary.
A survey of teachers' reflections on the process and on assessment took place in November 2021.
2. Remote Proctoring
Project Directors: Gemma Cherry (CARPE), Oksana Noumenko (Prometric) and Michael O'Leary (CARPE).
Remote or online proctoring refers to the process of using technology in lieu of face-to-face proctoring when examinations are administered online. Using classical test theory (CTT) and item response theory (IRT) methods, a study was conducted by CARPE and Prometric personnel to investigate the psychometric equivalence of performance score results, achieved by candidates taking a professional licensure examination in the US via remote and in-person proctoring modes. The data used for analyses come from administrations of the examinations pre and post Covid-19.
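A classical test theory comparison of this kind can be sketched in miniature. This is an illustrative example only, not the CARPE/Prometric analysis; the function names and scored responses below are invented for demonstration.

```python
# Illustrative sketch (not the actual study): a minimal CTT-style check of
# whether item difficulty (proportion correct) is comparable across remote
# and in-person proctoring modes. All data below are fabricated.

def item_difficulty(responses):
    """Classical test theory item difficulty: proportion of correct answers."""
    return sum(responses) / len(responses)

def difficulty_gap(remote, in_person):
    """Absolute difference in item difficulty between the two proctoring modes."""
    return abs(item_difficulty(remote) - item_difficulty(in_person))

# Hypothetical scored responses (1 = correct, 0 = incorrect) for one item.
remote_mode = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
in_person_mode = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

gap = difficulty_gap(remote_mode, in_person_mode)
# A small gap on this index would be consistent with psychometric equivalence;
# a full study would also examine item discrimination and IRT model fit.
```

A real equivalence study would run such comparisons across all items and candidates, alongside IRT-based analyses.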
3. High Stakes Assessment in the Era of COVID-19: Interruption, Transformation or Regression?
Project Directors: Louise Hayward (University of Glasgow) and Michael O’Leary (CARPE).
This special issue of Assessment in Education: Principles, Policy and Practice will be edited by Louise and Michael and seeks to contribute to the debate about what we have learned from the COVID experience and how that learning might inform the future of high stakes assessment both for individual nations and internationally.
4. A Conceptual Framework for Exploring Change in Teacher Assessment Agency
Project Directors: Louise Hayward (University of Glasgow), Jana Groß Ophoff (University of Tuebingen), Sotiria Kanavidou (University of Southampton), Michael O’Leary (CARPE), Dennis Opposs (Ofqual).
This research is focused on how teacher agency in assessment played out in the high stakes context of terminal examinations at the end of secondary education in England, Germany, Greece, Scotland and Ireland through an analysis of key policy documents. The work was conducted under the auspices of the International Congress for School Effectiveness and Improvement (ICSEI).
5. High Stakes Examinations in the Era of Covid-19
Project directors: Vasiliki Pitsia & Michael O’Leary (CARPE). Marguerite Clarke, Diego Armando, Luna Bazaldua, Julia Liberman and Victoria Levin (World Bank).
This research sets out to capture the diversity of responses across countries when the outbreak of Covid-19 in early 2020 placed plans to hold high stakes examinations at the end of post primary school in jeopardy. Countries from different parts of the world and with different economic profiles are organised into two main categories: (a) those that continued with their usual examinations and (b) those that did not. The resulting report describes the different approaches adopted by each of these countries, reviews the evidence on what worked well and what did not, and highlights lessons learned. The project is a joint venture between CARPE and the World Bank.
_________________________________________________________________________________________________________
Remote Proctoring in Credentialing Examination Contexts
Research Memos on Remote Proctoring
Project Directors: Paula Lehane & Conor Scully (CARPE)
Two memos prepared for Prometric updated the research brief on remote proctoring (RP) submitted to Prometric by Karakolidis, O’Leary and Scully in 2017. Focusing on research published over the past five years, the first memo examined literature on the psychometric properties of RP examinations, the candidate experience, and test security. The second memo considered policies, procedures and regulations (including legal regulations) governing the use of remote proctoring for online licensure and certification tests. Both memos provide recommendations for how Prometric can develop and administer RP assessments in line with best practice. The reports are not currently available to the public.
_________________________________________________________________________________________________________
Twenty Five Years of Research on Leaving Certificate Assessment
Project Directors: Michael O'Leary (CARPE) and Gillian O'Connor
The project is focused on developing a structured/searchable database of all academic papers and research reports that refer to LC assessment published between 1995 and 2020. Each entry contains the full citation, an abstract, type of publication, key themes explored and details pertaining to methodology, sample size and key informants for empirical studies. The database currently has 100 entries, and all are hyperlinked to a digital copy of the paper/report.
_________________________________________________________________________________________________________
Assessment of Bullying in the Workplace Project
Project Directors: Zita Lysaght, Angela Mazzone (Anti Bullying Centre), Michael O'Leary & Conor Scully, with Anastasios Karakolidis (ERC), Paula Lehane, Larry Ludlow (Boston College), Sebastian Moncaleano (Boston College) & Vasiliki Pitsia
This project is focused on creating a measurement instrument that can be used to assess people's ability to identify bullying in the workplace. For the purposes of this study, workplace bullying is conceptualised as behaviours that involve an imbalance of power, that are repeated over time, that are intentional and that make the target feel threatened, humiliated, stressed, or unsafe at work. The research is a collaborative venture involving CARPE and the Anti Bullying Centre (ABC) at DCU.
_________________________________________________________________________________________________________
Assessment of Learning about Well-Being Project
Project Directors: Darina Scully (School of Human Development), Nisha Crosbie/Deirdre O'Brien (School of Psychology) & Michael O’Leary (CARPE)
The significance of children's and young people's wellbeing for developmental and educational outcomes is unequivocal. There is an abundance of instruments in existence that purport to measure various aspects of wellbeing, or an individual's subjective state of wellbeing. However, a heretofore understudied area is how young people's knowledge and understanding of the concept can be assessed. Wellbeing has been identified as a key curricular area in the reformed Junior Cycle programme, and the NCCA's Guidelines for Wellbeing in Junior Cycle (2017) call for the use of a wide variety of approaches in assessing students' learning in this area. Consequently, the development of tools that can aid student and teacher judgement making about students' progress in knowing about and understanding wellbeing may prove very useful. With this in mind, this study seeks to examine the potential use of scenarios/vignettes to achieve this.
_________________________________________________________________________________________________________
Student Experience of Feedback
Project leaders: Michael O'Leary, Zita Lysaght & Sean McGrath (Glanmire College)
This study was conducted jointly by CARPE and Glanmire Community College, Cork and was designed to gather data on how second year students in school experience feedback from their teachers. Using an online questionnaire, the study aimed to gather data from students on variables such as how often they receive feedback and what types of feedback they find most useful.
_________________________________________________________________________________________________________
Assessment for Learning and Teaching (ALT) Project
Project Director: Zita Lysaght
The Assessment for Learning and Teaching Project (ALT) project has its roots in assessment challenges identified from research conducted in the Irish context. This research highlighted: (a) The dearth of assessment instruments nationally and internationally to capture changes in children’s learning arising from exposure to, and engagement with, AfL pedagogy; (b) The nature and extent of the professional challenges that teachers face when trying to implement AfL with fidelity and; (c) The urgent need for a programme of continuous professional development to be designed to support teachers, at scale, to learn about AfL and integrate it into their day-to-day practice.
Since the initiation of the ALT project, significant progress has been made in all three areas: The Assessment for Learning Audit instrument (AfLAi) has been used across a range of Irish primary schools and in educational systems in Australia, Norway, Malaysia, Chile and South Africa. Work is currently underway to adapt the AfLAi for use in secondary schools and by students in both primary and secondary settings. The research-focused Assessment for Learning Measurement instrument (AfLMi), first developed in 2013, is being updated with data from almost 600 Irish primary teachers. Programmes of professional development continue to be implemented in pre-service undergraduate teacher education, in postgraduate teacher education and as part of site-based in-service teacher education.
_________________________________________________________________________________________________________
Minecraft in Irish Primary and Post-Primary Schools
Project Directors: Paula Lehane (CARPE), & Deirdre Butler (Institute of Education)
Minecraft is a ‘sandbox’ video game first released to the public in 2009, where players control a virtual avatar in a Lego-like world made up of blocks that can be moved to construct buildings and used to create items and structures. It is currently the second most popular video game of all time, with more than 100,000,000 copies sold worldwide. Schools in many countries, including the United States of America and Sweden, have decided to integrate the education version of the game (MinecraftEdu) into their curricula. MinecraftEdu is a platform that allows students in schools to freely explore, imagine and create in virtual environments and collaborative worlds that have special features specifically designed for classroom use. In DCU, the Institute of Education (IoE) has a dedicated Minecraft Studio (opened in December 2018) that student teachers can use to explore how innovative virtual and physical learning spaces can transform the curriculum and engage young people with new educational environments.
_________________________________________________________________________________________________________
Inter-Rater Reliability in the Objective Structured Clinical Examination (PhD Project)
Project Director: Conor Scully (PhD Candidate); Project Supervisors: Michael O'Leary, Mary Kelly & Zita Lysaght
Conor's thesis examined the issues of inter-rater reliability and validity in the Objective Structured Clinical Examination (OSCE), an assessment format common in medicine and nursing. Using a mixed-methods approach, he sought to understand how OSCE assessors interpret and understand student performances in the exam. It is hoped that this understanding will allow for more reliable inferences to be made on the basis of OSCE scores and a higher quality assessment overall.
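Inter-rater reliability of the kind examined here is often quantified with an agreement index such as Cohen's kappa. The sketch below is illustrative only; the examiner names and pass/fail ratings are invented, and the thesis itself may have used different indices.

```python
# Illustrative sketch only: Cohen's kappa, a common index of agreement between
# two raters (e.g. two OSCE examiners making pass/fail judgements), corrected
# for agreement expected by chance. The ratings below are fabricated.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Expected agreement if each rater assigned categories independently.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail ratings of ten OSCE performances by two assessors.
examiner_1 = ["pass", "pass", "fail", "pass", "fail",
              "pass", "pass", "fail", "pass", "pass"]
examiner_2 = ["pass", "pass", "fail", "fail", "fail",
              "pass", "pass", "pass", "pass", "pass"]

kappa = cohens_kappa(examiner_1, examiner_2)
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.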
_________________________________________________________________________________________________________
Embedding the Assessment of Emotional Intelligence within Collaborative Problem-Solving Tasks: An Exploratory Study (PhD Project)
Project Director: Deirdre Dennehy (PhD Candidate); Project Supervisors: Michael O'Leary, Zita Lysaght
Emotional Intelligence (EI) assessment is significant against the background of global interest in the assessment of transversal skills, previously termed 21st century skills. The term transversal skills refers to key abilities and aptitudes that are transferable across all areas of modern life and are pertinent to overall successful functioning in a digitalised society (May et al., 2015; Munro, 2017). Historically, the knowledge most esteemed was content-based. However, the development of the globalised market and advances in technology have altered the skills required for many careers. Today, many jobs require individuals to collaborate, communicate and use their interpersonal skills to a high level. An individual who is skilled in perceiving, managing and using their emotions will flourish in these types of problem-solving environments. As a result, the domain of EI education and assessment has attracted substantial interest from economic organisations and educational settings alike.
However, current EI measures face significant limitations. The majority are text-based and assess EI in isolation. This may not adequately reflect how individuals exhibit their EI skills in real life, as these are frequently demonstrated in tandem with other important cognitive skills like problem-solving. There is a need, therefore, for the development of authentic, high-fidelity EI assessments that capture the dynamics of true human interaction. This study attempted to embed an EI assessment within an existing problem-solving assessment, with technology assisting in creating an evaluation that is both time-efficient and user-friendly. This PhD project aimed to contribute to this field of research by serving as an exploratory blueprint for the future development of authentic EI assessments.
_________________________________________________________________________________________________________
Measuring Non-Cognitive Factors
Project Directors: Lisa Abrams (Virginia Commonwealth University), Mark Morgan (DCU) & Michael O'Leary (CARPE)
Cognitive skills involve conscious intellectual effort, such as thinking, reasoning, or remembering. In contrast, non-cognitive skills are related to other important interpersonal or ‘soft’ skills like motivation, integrity, persistence, resilience and interpersonal interaction. These non-cognitive factors are associated with an individual’s personality, temperament, and attitudes. Research at the international, national and school level is increasingly looking at the value of non-cognitive skills and at how education systems impact their development. Demand for these skills will continue to change as economies and labour market needs evolve, with trends such as automation causing fundamental shifts, making this an issue that researchers and those in industry alike should address.
_________________________________________________________________________________________________________
Teacher Assessment Literacy - Scale Development Project
Project Directors: Zita Lysaght, Darina Scully, Anastasios Karakolidis, Vasiliki Pitsia, Paula Lehane & Michael O’Leary (CARPE)
Assessment literacy (Stiggins, 1991) has long been viewed as an important characteristic of effective teachers. Assessment literacy can be defined as “an individual's understandings of the fundamental assessment concepts and procedures deemed likely to influence educational decisions” (Popham, 2011, p. 267). Correct use of different assessment types and forms, accurate administration and scoring of tests, appropriate interpretation of student performance etc., all form part of a teacher’s assessment literacy. At present, very few objective measures of teacher assessment literacy exist.
_________________________________________________________________________________________________________
Assessment of Critical Thinking in Dublin City University (ACT@DCU)
Project Director: Michael O'Leary (CARPE)
ACT@DCU investigated the extent to which an online test developed by the Educational Testing Service (ETS) in the United States to assess critical thinking in higher education was suitable for use in DCU. Findings from the initial validation study of the test using data from DCU students can be read here.
Over time, the intention is that data from the test will help to facilitate conversations among staff regarding pedagogy, curricula and educational interventions to improve the teaching and learning of CT; be integrated with other non-cognitive and co-curricular indicators of student success at DCU; and provide evidence of institutional and program-level learning outcomes in CT.
_________________________________________________________________________________________________________
NCCA Assessment of Live Remote Proctoring
Project directors at CARPE: Gemma Cherry, Michael O'Leary and Darina Scully.
This study, conducted under the auspices of the National Commission for Certifying Agencies (NCCA) in cooperation with CARPE and Prometric, was undertaken to evaluate the extent to which credentialing testing programs in the US using remote proctoring were meeting NCCA Standards. Live remote proctoring (LRP) was defined by the Commission as remote proctoring that occurs with a person actively watching and monitoring a candidate during the time of the test administration and that provides safeguards for exam integrity and validity similar to in-person proctoring. Nine programs volunteered to participate and submitted self-study reports in June 2020, including a technical report that compared outcomes based on LRP and other delivery methods (computer-based testing and paper-based testing). A subset of the NCCA Standards was used to evaluate each program’s self-study report. The report published in February 2021 can be accessed here.
_________________________________________________________________________________________________________
The use of cross-national achievement surveys for education policy reform in the European Union: Ireland
Project Leaders: Anne Looney, Michael O'Leary, Gerry Shiel & Darina Scully
This research contributed to a book volume that examined the range and salience of different international achievement surveys for policy design and reform within European countries: Germany, France, Italy, Netherlands, Sweden, Finland, Ireland, Poland, Estonia, and Slovakia. Collectively, the national profiles provide a critical analysis of the use (and misuses) of cross-national achievement surveys for monitoring educational outcomes and policy formation.
_________________________________________________________________________________________________________
Assessment of Transversal Skills in STEM
Project Partners: CARPE, NIDL (National Institute for Digital Learning), CASTeL (Centre for the Advancement of STEM Teaching and Learning), and representatives from education ministries in the following countries: Ireland, Austria, Cyprus, Belgium, Slovenia, Spain, Finland and Sweden
This was an ambitious DCU led project that secured €2.34 million in Erasmus+ funding. Involving 8 EU countries (Ireland, Austria, Cyprus, Belgium, Slovenia, Spain, Finland and Sweden) and working with 120 schools across Europe, the partners devised, tested and scaled new digital assessments for STEM education that engaged and enhanced students’ transversal skills such as teamwork, communication and discipline-specific critical thinking. CARPE personnel worked with DCU colleagues to provide the theoretical and operational frameworks of the research (report #5). CARPE was also responsible for a review and synthesis of the research literature on STEM formative digital assessment (report #3) and for a report on virtual learning environments (VLEs) and digital tools for implementing formative assessment in STEM (report #4). These reports highlight how students can best be scaffolded towards the development of key STEM skills and how digital tools can capture the evidence for this and augment teaching practices to help provide constructive feedback on student progress. A paper outlining the workings of the project to date was published by the European Association of Distance Teaching Universities (The Netherlands) in October 2021.
_________________________________________________________________________________________________________
Interviews as a Selection Tool for Initial Teacher Education
Project Directors: Paula Lehane, Zita Lysaght & Michael O'Leary (CARPE)
Even when other factors such as student background and prior attainment are controlled for, having a ‘good’ teacher is one of the most important predictors of student success (Slater et al., 2009). Therefore, the goal of Initial Teacher Education (ITE) in Ireland should be to produce these ‘good’ teachers for employment in primary and post-primary schools. To achieve this, the admissions procedures for ITE programmes have a responsibility to select those applicants who are most suited to the profession and most likely to succeed in the required preparatory courses.
Many countries, including Ireland, now consider a range of admission criteria and selection tools when screening applicants for entry to ITE. Most Irish institutions use applicant performance on an interview as a selection tool for postgraduate ITE (Darmody & Smyth, 2016). However, research on the efficacy of interviews as a selection measure for ITE programmes is mixed. CARPE conducted an in-depth literature review synthesising what research has found about the efficacy, or otherwise, of interviews as a selection mechanism for university-based postgraduate programmes of teacher education. Based on this review, recommendations for future practice and policy were formulated. An article based on this research and published in 2021 can be accessed here.
_________________________________________________________________________________________________________
Irish primary and post-primary students’ performance at the upper levels of achievement in mathematics and science across national and international assessments (PhD Project)
Project Director: Vasiliki Pitsia (PhD Candidate); Project Supervisors: Michael O'Leary, Gerry Shiel, Zita Lysaght
High achievement at school is a strong predictor of students’ future professional and social success, and of a country’s future economic development and sustainability. High achievement in mathematics and science has been linked to building a knowledge society and driving sustainable economic growth, while also delivering social recovery. Therefore, it is important that educational systems promote and reward high achievement, especially the knowledge and skills that are deemed necessary for developing a smart economy and for living and working in the 21st century. While, on average, students in Ireland have often performed well on national and international assessments of mathematics and science, there is a notable absence of higher-achieving students (those who score at the highest proficiency levels). This study undertook an in-depth investigation of the nature of high achievement in mathematics and science in Ireland, using large-scale databases from the Programme for International Student Assessment (PISA), the Trends in International Mathematics and Science Study (TIMSS), Irish National Assessments and Irish state examinations (Junior and Leaving Certificates).
This PhD project contributes to this field of research by addressing the following research questions:
- What are the background characteristics of high achievers in mathematics and science in national and international assessments in Ireland, and how do these characteristics differ from those of their counterparts in countries with average achievement similar to Ireland's?
- Which factors at the student, home, class, and school level can predict high mathematics and science performance in national and international assessments in Ireland?
- Which subdomains of mathematics and science do high achievers in Ireland do well on, and which aspects do they struggle with? Are there factors at the student, home, class, and school level that may predict higher or lower performance of high achievers in Ireland in specific subdomains of mathematics and science?
_________________________________________________________________________________________________________
Multimedia Items in Technology-Based Assessments (PhD Project)
Project Director: Paula Lehane (PhD Candidate); Project Supervisors: Michael O'Leary, Mark Brown, Darina Scully
Using digital devices and technology to conduct assessments in educational settings has become increasingly prevalent. Indeed, it now seems inevitable that future assessments in education will be administered using these media (OECD, 2013). Therefore, it is essential that educational researchers know how to design reliable and appropriate technology-based assessments (TBAs). However, no guidelines for the design of TBAs currently exist. Although TBAs can feature many medium-unique items, including multimedia objects such as animations and videos, the impact of these objects on test-taker performance and behaviour, particularly in relation to attentional allocation and information processing, has yet to be fully clarified.
This PhD project contributes to this growing field of research by addressing the following research questions:
- How do test-takers allocate attention in TBAs that include multimedia items?
- What is the impact of multimedia items on test-taker performance in TBAs?
- Is there a difference in test-taker performance and attentional allocation behaviours in TBAs involving different types of multimedia items?
- What are the meaningful relationships, patterns and clusters in performance data that can be used to assess and score problem-solving skills in TBAs?
_________________________________________________________________________________________________________
Test Specifications in Certification and Licensure Assessments
Project Directors: Michael O’Leary (CARPE), Lisa Abrams (Virginia Commonwealth University) & Katherine Reynolds (Boston College)
Specifying test content, often in the form of professional knowledge, skills and judgments (KSJs), prior to item development is fundamental to test quality in the field of certification and licensure. Alignment between test items and KSJs can serve as a critical piece of content-related validity evidence for a testing program. Alignment studies, common in high-stakes achievement testing, are less frequent in credentialing and licensure. This research explored the application of the Webb (2006) model, a popular alignment approach in educational settings, for use in professional testing. The Webb model provides four indices of alignment: categorical congruence, depth of knowledge consistency, range of knowledge correspondence and balance of representation. Together, these four indices can be taken as evidence of alignment between assessment items and KSJs, providing content validity evidence for a testing program. This form of validity evidence is particularly important, given that US test developers have a legal mandate to ensure test content is reflective of the knowledge, skills and judgments in a given profession. A paper outlining how a Webb alignment study might be carried out in a professional testing context and how such a study proceeds in practice was published in 2020 (available here).
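The first of the four indices, categorical congruence, can be illustrated with a short sketch. This is a simplified, hypothetical example: the item-to-KSJ mapping is invented, and the six-item threshold is the conventional criterion reported for Webb-style studies, not a figure taken from this project.

```python
# Illustrative sketch only: Webb-style categorical congruence, i.e. whether
# enough items map to each KSJ domain. A domain is conventionally judged
# adequately covered when at least six items are mapped to it.
# The item-to-KSJ mapping below is fabricated.

MIN_ITEMS = 6  # conventional threshold for categorical congruence

def categorical_congruence(item_to_ksj):
    """Per KSJ domain, return (item count, whether the threshold is met)."""
    counts = {}
    for item, ksj in item_to_ksj.items():
        counts[ksj] = counts.get(ksj, 0) + 1
    return {ksj: (n, n >= MIN_ITEMS) for ksj, n in counts.items()}

# Hypothetical mapping of licensure exam items to two KSJ domains.
mapping = {f"item{i:02d}": "KSJ-A" for i in range(1, 8)}          # 7 items
mapping.update({f"item{i:02d}": "KSJ-B" for i in range(8, 12)})   # 4 items

result = categorical_congruence(mapping)
# KSJ-A meets the threshold (7 items); KSJ-B falls short (4 items).
```

The other three indices (depth of knowledge consistency, range of knowledge correspondence, balance of representation) would each add further criteria over the same item-to-KSJ mapping.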
_________________________________________________________________________________________________________
Standardised Assessment in Reading and Mathematics Project
Project Directors: Michael O’Leary (CARPE), Zita Lysaght (DCU IoE), Deirbhile Nic Craith (INTO) & Darina Scully (CARPE)
Since the publication of the Assessment Guidelines for Primary Schools in 2007, there has been a stronger focus on assessment in primary schools. There are many forms of assessment, of which standardised testing is one. Standardised tests have gained in importance since 2012, when schools became obliged to forward the results of standardised tests to the Department of Education and Science.
The purpose of this research was to explore the use of standardised tests in literacy and numeracy in primary schools in Ireland (ROI). Issues addressed include teachers’ understanding of standardised tests, how standardised tests are used formatively and diagnostically and the experiences of schools in reporting on the results of standardised tests. Data on teachers' professional development needs with respect to standardised testing were also gathered. Following a year-long development and piloting process, a questionnaire was distributed in hard copy and online to a random sample of 5,000 teachers in May 2017. Over 1500 teachers returned completed questionnaires and the findings were released in June 2019, along with a number of policy recommendations to help address the needs and concerns of teachers regarding the use of standardised tests in primary schools.
_________________________________________________________________________________________________________
Animations for Large Scale Testing Programmes Project
Project Director: Anastasios Karakolidis (PhD Candidate); Project Supervisors: Michael O’Leary and Darina Scully
Although technology provides a great range of opportunities for facilitating assessment, text is usually the main, if not the only, means used to explain the context, present the information, and communicate the question in a testing process. Written language is often a good fit for measuring simple knowledge-based constructs that can be clearly communicated via text (such as historical events); nevertheless, when assessments present test takers with a large amount of sophisticated information in order to measure complex constructs, text may not be suitable for facilitating this process (Popp, Tuzinski, & Fetzer, 2016). Animations could be a pioneering way of presenting complex information that cannot be easily communicated by text/written language. However, research literature on the use of animations in assessment is currently scarce.
Anastasios' recently completed PhD project focused on (a) the development and validation of an animation-based assessment instrument, (b) the investigation of test-takers’ views about this instrument and (c) the examination of the extent to which this animated test provides a more valid assessment of test-takers’ knowledge, skills and abilities, compared to a parallel text-based test.
___________________________________________________________________________________________________________
Computer Based Examinations for Leaving Certificate Computer Science
Project Director: Paula Lehane (CARPE) with the National Council for Curriculum and Assessment (NCCA)
In line with the recommendations of the Digital Strategy for Schools (Department of Education and Skills [DES], 2015), a more formal approach to the study of technology and computing in second-level schools has been established thanks to the newly developed Computer Science (CS) curriculum for Leaving Certificate students. In September 2018, forty schools were selected to trial the implementation of this subject, which will culminate in an ‘end-of-course computer-based examination’ in 2020 (National Council for Curriculum and Assessment [NCCA]). This examination will represent 70% of a student’s overall CS grade.
The use of a computer-based exam (CBE) for the assessment of CS students is a significant departure from tradition for the Leaving Certificate programme. All other subjects in the Leaving Certificate involving an end-of-course examination employ paper-based tests. The planned CBE for CS will represent the first of its kind in the Irish education system when it is introduced in 2020. The challenge of developing and delivering a high-stakes CBE is also magnified by the inherent difficulties associated with the evaluation of students’ knowledge and learning in computing courses (Kallia, 2018). Therefore, to ensure that the pending CS exam is delivered in a responsible manner that preserves the fairness, validity, utility and credibility of the Leaving Certificate examination system, CARPE was commissioned by the NCCA to write a report outlining the factors pertaining to the design, development and deployment of this CBE that will need to be considered. The aim of the report is to guide the decisions of policy-makers and other relevant stakeholders. The report is available here.
________________________________________________________________________________________________________
Assessment in the re-developed Primary School Curriculum
Project Directors: Zita Lysaght, Darina Scully and Michael O'Leary (CARPE); Damian Murchan (TCD) & Gerry Shiel (ERC)
The National Council for Curriculum and Assessment (NCCA) is working with teachers and early childhood practitioners, school leaders, parents and children, management bodies, researchers and other stakeholders to develop a high-quality curriculum for the next 10-15 years. A discussion paper written by researchers from CARPE, TCD and the ERC highlights the importance of aligning assessment, learning and teaching in curricular reform and implementation. It is available to read here.
_________________________________________________________________________________________________________
The Leaving Certificate as Preparation for Third Level Education Project
Project Directors: Darina Scully & Michael O'Leary (CARPE)
The Leaving Certificate Examination (LCE) plays a crucial role in the process by which people are selected for third level education. However, the extent to which the Leaving Certificate Programme (LCP) as a whole (i.e., 5th and 6th year + the examination) provides students with good preparation for third level education is unclear. This project aimed to shed some light on this issue.
For those who sat the LCE in 2017, their experiences of 5th and 6th year and of preparing for and taking the LCE were still fresh in their minds as they started college in September 2017. By March 2018, they also had a good understanding of what was required of them in college. With this in mind, this project gathered data in April 2018 from first year students at DCU, who were in a position to offer important insights that can be used to evaluate the LCP and its relevance to first year in college.
Findings from the study are available here.
_________________________________________________________________________________________________________
State-of-the-art in Digital Technology-Based Assessment Project
Project Directors: Michael O'Leary, Darina Scully, Anastasios Karakolidis & Vasiliki Pitsia
Following an invitation to contribute to a special issue of the European Journal of Education, a peer-reviewed journal covering a broad spectrum of topics in education, CARPE completed an article on the state-of-the-art in digital technology-based assessment. The article spans advances in the automated scoring of constructed responses, the assessment of complex 21st century skills in large-scale assessments, and innovations involving high fidelity virtual reality simulations. An "early view" of the article was published online in April 2018, with the special issue (focused on the extent to which assessments are fit for their intended purposes) due to be published in June 2018.
_________________________________________________________________________________________________________
Learning Portfolios in Higher Education Project
Project Directors: Darina Scully, Michael O'Leary (CARPE) & Mark Brown (NIDL)
The ePortfolio is often lauded as a powerful pedagogical tool and, consequently, is rapidly becoming a central feature of contemporary education. Learning portfolios are a specific type of ePortfolio that may also include drafts and 'unpolished work', with the focus on the process of compiling the portfolio as well as on the finished product. It has been hypothesized that learning portfolios may be especially suited to the development and assessment of integrated, cross-curricular knowledge and generic skills/attributes (e.g. critical thinking, creativity, communication, emotional intelligence), as opposed to disciplinary knowledge in individual subject areas. This is of particular interest in higher education contexts, as universities and other third-level institutions face growing demands to bridge a perceived gap between what students learn and what is valued by employers.
In conjunction with the NIDL, CARPE have completed a comprehensive review examining the state of the field regarding learning portfolio use in third level education. Specifically, this review (i) evaluates the extent to which there is sufficient empirical support for the effectiveness of these tools, (ii) highlights potential challenges associated with their implementation on a university-wide basis and (iii) offers a series of recommendations with respect to ‘future-proofing’ the practice.
The review was formally launched in February 2018 and has garnered a great deal of attention in the intervening months. A roundtable discussion of possible research opportunities within DCU on the basis of the findings is due to be held in May 2018. In addition, selected findings will be disseminated at various international conferences, including EdMedia in June 2018 (Amsterdam, Netherlands) and the World Education Research Association (WERA) in August 2018 (Cape Town, South Africa). The review is also being adapted and translated into Chinese by Prof. Junhong Xiao of Shantou Radio and Television University, with the translated article to feature in an upcoming edition of the peer-reviewed journal Distance Education in China, and CARPE have recently acquired funding to support an additional translation into Spanish.
_________________________________________________________________________________________________________
Validity Evidence in Maintenance of Certification (MOC) Assessments
Project Director: Michael O’Leary (CARPE)
In the United States, Maintenance of Certification (MOC) was created in response to public health research in the 1990s revealing “significant variations in healthcare practices” among physicians, many of which led to preventable negative patient outcomes (Chung, Clapham, & Lalonde, 2011, p. 3). A critical component of MOC is the cognitive exam, which until recently was typically administered by its respective medical specialty board in a secure environment near the end of a 10-year cycle.
Criticism of medical specialty boards’ 10-year MOC exams has spurred the development of shorter, more frequent assessments. These assessment programs, such as MOCA Minute or Knowledge Check-In, aim to reduce examinee burden and provide better alignment with physician practice. But how can we tell whether these forms of assessment are “better” than the traditional 10-year exam? The answer is not straightforward; however, in this research a validity-based framework for addressing the question is proposed, emphasising validity evidence with respect to content, criteria, and consequences. The work was presented to Prometric clients in Baltimore in 2018.
_________________________________________________________________________________________________________
Situational Judgement Tests (SJTs) Project
Project Directors: Anastasios Karakolidis, Michael O'Leary, Darina Scully (CARPE) & Steve Williams (Prometric)
Originating in and most commonly associated with personnel selection, Situational Judgement Tests (SJTs) can be loosely defined as assessment instruments comprised of items that (i) present a job-related situation, and (ii) require respondents to select an appropriate behavioural response to that situation. Traditionally, SJTs are assumed to measure tacit, as opposed to declarative knowledge; or as Wagner and Sternberg (1985) put it: “intelligent performance in real-world pursuits… a kind of ‘street smarts’ that helps people cope successfully with problems, constraints and realities of day-to-day life.” Debate about the precise nature of the construct(s) underlying SJTs persists.
In recent years, the use of SJTs for selection, training and development purposes has increased rapidly; however, these instruments are still not well understood. Experts continue to debate issues such as how SJTs should be developed and how they should be scored. For example, although it is common to score SJTs based on test-takers' ability to identify the best response to each given situation, it has been argued (e.g., Stemler, Aggarwal & Nithyanand, 2016) that it may be more appropriate to distinguish between test-takers based on their ability to avoid the worst option.
In collaboration with our funders, Prometric, this project investigated the use of an SJT designed using the 'critical incident approach' for the training and development of Prometric employees. Specifically, the project sought to explore validity evidence for the SJT as a measure of successful job performance across two different keying approaches (consensus vs. expert judgement) and five different scoring approaches (match best, match worst, match total, mismatch penalty and avoid total). The findings suggest that scoring approaches focused on the ability to identify the worst response are associated with moderate criterion-related validity. Furthermore, they underline the psychometric difficulties associated with critical incident SJTs. These findings were presented at the European Association of Test Publishers (E-ATP) conference in September 2017 (Noordwijk, Netherlands).
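By way of illustration only (the item, key and one-point-per-match rule below are invented for this sketch, not drawn from the Prometric study), scoring rules such as 'match best', 'match worst' and 'match total' can be expressed as simple comparisons between a respondent's choices and an expert key:

```python
# Hypothetical sketch of three SJT scoring rules. A respondent picks the
# option they judge best and the option they judge worst; the key records
# the expert-identified best and worst options for the item.

def score_item(response: dict, key: dict) -> dict:
    """Score one SJT item under three scoring rules (1 point per match)."""
    match_best = int(response["best"] == key["best"])
    match_worst = int(response["worst"] == key["worst"])
    return {
        "match_best": match_best,                 # credit for identifying the best option
        "match_worst": match_worst,               # credit for avoiding/identifying the worst option
        "match_total": match_best + match_worst,  # credit for both judgements
    }

# Expert key for a hypothetical item with options A-D.
key = {"best": "B", "worst": "D"}

# This respondent misses the best option but correctly identifies the worst one,
# so the 'match worst' and 'match best' rules rank them differently.
print(score_item({"best": "A", "worst": "D"}, key))
# {'match_best': 0, 'match_worst': 1, 'match_total': 1}
```

Summing such item scores across a test yields the different total-score distributions whose criterion-related validity the project compared.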
_________________________________________________________________________________________________________
Three vs. Four Option Multiple-Choice Items Project
Project Directors: Darina Scully, Michael O’Leary (CARPE) & Linda Waters (Prometric)
A strong body of research spanning 30+ years suggests that the optimal number of response options for a multiple-choice item is three (one key and two distractors). Three-option multiple-choice items require considerably less time to construct and to administer than their four- or five-option counterparts. Furthermore, they facilitate broader content coverage and greater reliability through the inclusion of additional items. Curiously, however, the overwhelming majority of test developers have paid little heed to these factors. Indeed, it is estimated that fewer than 1% of contemporary high-stakes assessments contain three-option items (Edwards, Arthur & Bruce, 2012).
This phenomenon has often been commented on, but never satisfactorily explained. It is likely that fears of guessing have played a role, given that chance selection of the correct response theoretically rises from 20% to 25% or 33% when the number of response options is reduced to three. However, distractor analyses across various contemporary high-stakes assessments reveal that more than 90% of four- and five-option items have at least one non-functioning distractor. That is, most of the time, when test-takers need to guess, they do not do so blindly; rather, they eliminate at least one implausible distractor and guess from the remaining options. As such, the majority of four- and five-option items effectively operate as three-option items.
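The guessing arithmetic above can be made concrete with a short calculation (an illustrative sketch only, not part of the study):

```python
# Illustrative calculation of guessing probabilities for
# multiple-choice items with k response options.

def blind_guess_probability(num_options: int) -> float:
    """Chance of selecting the key when guessing blindly among all options."""
    return 1 / num_options

def informed_guess_probability(num_options: int, non_functioning: int) -> float:
    """Chance of selecting the key after eliminating implausible distractors."""
    return 1 / (num_options - non_functioning)

# Blind guessing: 5 options -> 20%, 4 options -> 25%, 3 options -> 33%.
for k in (5, 4, 3):
    print(f"{k} options: {blind_guess_probability(k):.0%}")

# A four-option item with one non-functioning distractor effectively
# behaves like a three-option item for a test-taker who eliminates it,
# so reducing the item to three options changes little in practice.
print(f"{informed_guess_probability(4, 1):.0%}")  # 33%
```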
In collaboration with our funders, Prometric, a study comparing item performance indices and distractor functioning (based on responses from more than 1,000 test candidates) across 20 stem-equivalent three- and four-option items from a high-stakes certification assessment was conducted. Findings from the project were disseminated at the Association of Test Publishers (ATP) Conference in March 2017 (Scottsdale, Arizona) and are being used to inform the development of future items for a number of Prometric's examinations.
_________________________________________________________________________________________________________
Higher-Order Thinking in Multiple-Choice Items (HOT MC Items) Project
Project Directors: Darina Scully & Michael O'Leary (CARPE)
The nature of assessment can exert a powerful influence on students’ learning behaviours. Indeed, students who experience assessments that require them to engage in higher-order thinking processes (i.e. those represented by higher levels of Bloom’s (1956) Taxonomy, such as application, analysis and synthesis) are more likely to adopt meaningful, holistic approaches to future study, as opposed to engaging in mere surface-level or ‘rote-learning’ techniques (Leung, Mok & Wong, 2008). It is often assumed that multiple-choice items are incapable of assessing higher-order thinking, or indeed anything beyond recall/recognition, given that the correct answer is provided amongst the response options. However, a more accurate assertion may be that multiple-choice items measuring higher-order processes are simply rarely constructed. It is true that MC items, like all assessment formats, are associated with some limitations, but it may be possible to construct these items at higher levels, provided certain strategies are followed. MC items remain attractive to and frequently used by educators and test developers due to their objective and cost-efficient nature; as such, it is worthwhile putting time and effort into identifying and disseminating these strategies within the assessment community.
This project involved a comprehensive review of the extant literature that (a) has investigated the capacity of multiple-choice items to measure higher-order thinking or (b) has offered strategies or guidance on how to do so. An article based on this review was published in the peer-reviewed journal Practical Assessment, Research and Evaluation in May 2017, and the work has also contributed to the development of training and development materials for Prometric's test developers.
_________________________________________________________________________________________________________
Practice Tests in Large Scale Testing Programmes Project
Project Directors: Anastasios Karakolidis, Darina Scully & Michael O’Leary (CARPE)
This project was focused on developing a research brief reviewing the key findings arising from the literature regarding the efficacy of practice tests. This brief was published in the summer 2017 edition of Clear Exam Review, and the findings are also being used to inform Prometric's practices surrounding the development and provision of practice test materials.
_________________________________________________________________________________________________________
Feedback in Large Scale Testing Programmes Project
Project Directors: Michael O'Leary & Darina Scully (CARPE)
In recent years, there has been increasing pressure on test developers to provide diagnostic information that can help unsuccessful test takers to improve future performance and assist academic and training institutions in evaluating the success of their programmes and identifying areas that may need to be modified (Haberman & Sinharay, 2010; Haladyna & Kramer, 2004). This growing demand for diagnostic feedback is also evident in the Standards for Educational and Psychological Testing, which states that “candidates who fail may profit from information about the areas in which their performance was especially weak” (AERA, APA & NCME, 2014, p. 176). Test developers face a substantial challenge in attempting to meet this demand, whilst simultaneously upholding their ethical responsibility – also outlined in the Standards – to ensure that any test data that are reported and shared with stakeholders, or used to make educational, certification or licensure decisions, are accurate, reliable and valid.
CARPE have conducted a review of the literature on the issues involved in reporting test sub-scores, including the identification of a number of approaches (e.g., scale anchoring, level descriptors and graphical methods) that can be taken when reporting in large scale testing contexts. These findings are being used to inform Prometric's practices surrounding the provision of feedback to unsuccessful test candidates.
_________________________________________________________________________________________________________
Partial Credit for Multiple Choice Items Project
Project Directors: Darina Scully & Michael O’Leary (CARPE)
Multiple-choice test developers have typically shown a strong preference for the use of the single-best answer response format and number-correct scoring. Despite this, some measurement experts have expressed dissatisfaction with these methods, on the basis that they assume a sharp dichotomy between knowledge and lack of knowledge. That is, the entire model fails to take into account the varying degrees of partial knowledge a test-taker may possess on an item-by-item basis. This is regrettable, as information regarding test-takers’ partial knowledge levels may contribute significantly to the estimation of true proficiency levels (DeAyala, 1992).
In response to this criticism, a number of alternative testing models that facilitate the allocation of partial credit have been proposed (e.g., Ben-Simon, Budescu & Nevo, 1997; Frary, 1989; Lau, Lau, Hong & Usop, 2011). Their exact nature varies considerably, but all share the aim of maximizing the information efficiency of individual items and increasing precision of measurement. CARPE have conducted a literature review focusing on three approaches that facilitate the allocation of partial credit; namely: option-weighted scoring, confidence-weighted responding, and the liberal multiple-choice item format. To date, findings regarding the application of these approaches have been complex and equivocal, with no one method emerging as uniformly superior. Ultimately, whether or not it is worth pursuing these strategies depends on a combination of factors, such as the overall purpose of the assessment, the overall difficulty (pass rate) of the test, the cognitive complexity of the items, and the particular psychometric properties that are most valued by the test developer.
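As a purely illustrative sketch (the item weights below are invented, not taken from the literature reviewed), option-weighted scoring replaces the dichotomous 0/1 key with graded credit for each response option:

```python
# Hypothetical option-weighted scoring: each response option carries a
# weight reflecting its degree of correctness, rather than a 0/1 key.

# Invented weights for one item: B is fully correct, C reflects partial
# knowledge, A and D reflect none.
option_weights = {"A": 0.0, "B": 1.0, "C": 0.5, "D": 0.0}

def option_weighted_score(responses, weights_per_item):
    """Sum the weight attached to each selected option across items."""
    return sum(weights[choice] for choice, weights in zip(responses, weights_per_item))

# A three-item test using the same (hypothetical) weights for each item.
weights_per_item = [option_weights] * 3

# Number-correct scoring would give this response pattern 1 out of 3;
# option-weighted scoring also credits the two partial-knowledge choices.
print(option_weighted_score(["B", "C", "C"], weights_per_item))  # 2.0
```

Confidence-weighted and liberal multiple-choice formats modify the response rule rather than the key, but follow the same principle of extracting more information from each item than a right/wrong dichotomy allows.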
Publications
2022
Looney, A., O'Leary, M., Scully, D., & Shiel, G. (2022). Cross-national achievement surveys and educational monitoring in Ireland. In European Commission, Joint Research Centre, Cross-national achievement surveys for monitoring educational outcomes: policies, practices and political reforms within the European Union, Klinger, D., Volante, L., & Schnepf, S. (Eds.), (pp. 153-176). Luxembourg: Publications Office of the European Union. Download from https://data.europa.eu/doi/10.2760/406165
2021
Scully, D., Lehane, P. and Scully, C. (2021). 'It's no longer scary': digital learning before and during the Covid-19 pandemic in Irish secondary schools. Technology, Pedagogy and Education, 30(1), 159-181. DOI: 10.1080/1475939X.2020.1854844
2020
Abrams, L., Reynolds, K., & O'Leary, M. (2020). Advancing Alignment Arguments in Supporting Scoring Interpretation and Use Claims of Credentialing Exams. CLEAR Exam Review, 30(2), 15-23.
2019
2018
2017
2016
2022
Cherry, G. and Scully, C. (2022, November). A consideration of factors affecting the use of Automatic Item Generation (AIG) in developing items for use in high-stakes assessments. Paper accepted for presentation at the Association for Educational Assessment Europe (AEA-E) conference, Dublin, Ireland.
Doyle, A., Lysaght, Z. and O’Leary, M. (2022, September). Disturbing the teachers’ role as assessor: The Case of Calculated and Accredited Grades 2020-2021 in Ireland. Paper accepted for presentation at the European Conference on Educational Research (ECER).
Lysaght, Z., Doyle, A. and O’Leary, M. (2022, September). Irish Post-Primary Teachers’ Experiences of Assessing their Students for High-Stakes Certification Purposes: Pandemic or Endemic Challenges and Opportunities? Paper accepted for presentation at the European Conference on Educational Research (ECER).
Denner, S., O'Leary, M. and Shiel, G. (2022, November). The impact on the performance of 15-year-olds in Ireland on the PISA reading, mathematics, and science tests when testing occurs at two different periods in the same year (spring vs autumn). Paper accepted for presentation at the Association for Educational Assessment Europe (AEA-E) conference, Dublin, Ireland.
Scully, C. and Cherry, G. (2022, October). Theoretical and practical considerations when adapting performance assessments for remote administration. Paper accepted for presentation at the European Association of Test Publishers (E-ATP) conference, London, United Kingdom.
Chen, M., Cherry, G. and Kuan, L. (2022, October). Examining the suitability of live remote proctoring for language proficiency assessments from a psychometric standpoint. Paper accepted for presentation at the European Association of Test Publishers (E-ATP) conference, London, United Kingdom.
Cherry, G. and O'Leary, M. (2022, August). Live Remote Proctoring and Test Centre Proctoring: The same but different? Paper accepted for presentation at the European Conference on Educational Research (ECER), Yerevan, Armenia.
Scully, C. and Cherry, G. (2022, April). Practical and Theoretical Concerns when Administering Remote Performance Assessments. Paper presented at the Educational Studies Association of Ireland (ESAI) Conference, Dublin, Ireland.
O'Leary, M., Lysaght, Z. and Doyle, A. (2022, April). Irish Post-Primary Teachers' Feelings and Beliefs about Assessment following the 2021 Accredited Grades Process. Paper presented at the Educational Studies Association of Ireland (ESAI) Conference, Dublin, Ireland.
Lehane, P., O'Leary, M. and Scully, D. (2022, April). Exploring Irish post-primary students’ interactions with computer-based exams. Paper presented at the Educational Studies Association of Ireland (ESAI) Conference, Dublin, Ireland.
Kuan, L., Chen, M., Cherry, G., O'Leary, M. and Zumbo, B. (2022, March). Comparability of High-Stakes Exams in Test Centres Proctored and Live Remote Proctoring: A Multimethod Psychometric Investigation across Multiple Testing Programmes. Paper presented at the Association of Test Publishers (ATP) conference, Orlando, United States.
Cherry, G. (2022, April). Inequalities in Educational Attainment across Rural and Urban Locations of Northern Ireland. Paper accepted for a Roundtable Discussion at the American Educational Research Association (AERA) conference, San Diego, United States.
Cherry, G., O'Leary, M., Kuan, L. and Waters, L. (2022, April). Comparing Outcomes from Examinations Proctored in Test Centres and Online using Live Remote Proctoring Technology. Paper accepted for presentation at the American Educational Research Association (AERA) conference, San Diego, United States.
Lehane, P., O'Leary, M. and Scully, D. (2022, April). Understanding Simulation-Type Items using Eye Movement and Log-File Data. Paper accepted for presentation at the American Educational Research Association (AERA) conference, San Diego, United States.
Lehane, P. (2022, April). Items in Technology-Based Assessments: Examining the Use of Multimedia Stimuli with Post-Primary Test Takers. Paper accepted for a Roundtable Discussion at the American Educational Research Association (AERA) conference, San Diego, United States.
Costello, E., Brown, M., Butler, D., Girme, P., Kaya, S., Kirwan, C., McLoughlin, E. and O'Leary, M. (2022, April). Assessment of Transversal Skills in STEM: From theory to practice in a large scale research project. Paper accepted for presentation at the annual Society for Information Technology and Teacher Education (SITE) conference, San Diego, United States.
2021
O'Leary, M. (2021, November). Assessment and Student Wellbeing: Perspectives from a High-Stakes Assessment Context in Ireland. Keynote address at the 11th Lifelong Learning Week, Lifelong Learning Platform.
Cherry, G., O'Leary, M., Kuan, L., Waters, L. and Gilvarry, E. (2021, September). A Comparison of Outcomes across Tests Taken in Test Centres and Via Live Remote Proctoring (LRP). Paper presented at the European Association of Test Publishers (E-ATP) Virtual Conference.
Lehane, P., Scully, D. and O'Leary, M. (2021, September). Items in Technology-Based Assessments: Examining the use of multimedia stimuli with eye movement data. Paper presented at the European Association of Test Publishers (E-ATP) Virtual Conference.
Doyle, A., Lysaght, Z. and O'Leary, M. (2021, September). High-Stakes Exams in the Time of Covid-19: The Experiences of Irish Post-Primary Teachers. Paper presented at the European Conference on Educational Research (ECER Network 09: Assessment, Evaluation, Testing and Measurement), Geneva, Switzerland.
Lysaght, Z. and Cherry, G. (2021, November). Standardised Testing in English Reading and Mathematics in Irish Primary Schools: Trends over Time. Paper presented at the Association for Educational Assessment Europe (AEA-E) Virtual Conference.
Cherry, G., O'Leary, M., Kuan, L. and Waters, L. (2021, November). A Comparison of Outcomes from Tests Proctored Locally in Testing Centres and Online using Live Remote Proctoring (LRP). Poster presented at the Association for Educational Assessment Europe (AEA-E) Virtual Conference.
Scully, C. (2021, September). Examiner consistency in high-stakes performance assessments in the health sciences. Roundtable at the European Association of Test Publishers (E-ATP) Virtual Conference.
Scully, C. (2021, June). Assessor cognition as a means of improving the reliability of nursing Objective Structured Clinical Examinations (OSCEs). Paper presented at the Dublin City University (DCU) Unconference.
Lehane, P., Pitsia, V. and Karakolidis, A. (2021, September). Identifying factors predicting teachers' use of assessment data: Findings from a national large-scale survey of primary teachers in Ireland. Paper presented at the Virtual European Conference on Educational Research (ECER).
Lehane, P., Scully, D. and O'Leary, M. (2021, September). Exploring primary school teachers' use of assessment data in an Irish context - A secondary analysis of survey data. Paper presented at the Virtual European Conference on Educational Research (ECER).
Scully, D., Crosbie, N., O'Brien, N. and O'Leary, M. (2021, November). "What do you think Jessica should do?": An innovative tool to support the formative assessment of Junior Cycle students' knowledge and understanding of wellbeing. Paper presented at The SPHE Network's 5th Conference, Institute of Education, DCU.
Scully, D., Crosbie, N., O'Brien, N. and O'Leary, M. (2021, November). "What do you think Jessica should do?": An innovative tool to support the formative assessment of Junior Cycle students' knowledge and understanding of wellbeing. Paper presented at the Association for Educational Assessment - Europe (AEA-E) Virtual Conference.
Lehane, P. (2021, November). Items in technology-based assessments: Examining the use of multimedia stimuli with post-primary test-takers. Paper presented at the Association for Educational Assessment - Europe (AEA-E) Virtual Conference.
2020
Kuan, L., O'Leary, M. & Brunner, B. (2020). Is the Psychometric Quality of Situational Judgment Items Affected by the Type of Task Analysis Performed? Paper accepted for presentation at the Association of Test Publishers (ATP) Conference, San Diego, United States. (Conference cancelled).
Lehane, P., Lysaght, Z. & O'Leary, M. (2020). The Interview as a Selection Mechanism for Entry into Initial Teacher Education: A Review of the Literature and Recommendations for Practice. Paper accepted for presentation at the Educational Studies Association of Ireland, Dublin, Ireland. (Conference cancelled).
Lehane, P., Pitsia, V., & Karakolidis, A. (2020). Identifying factors predicting teachers’ use of assessment data: Findings from a national large-scale survey of primary teachers in Ireland. Paper accepted for presentation at the European Conference on Educational Research 2020, Glasgow, UK. (Conference cancelled).
Ling, G., O'Leary, M. et al. (2020). Assessment of Competencies as a Result of College Learning: Explorations in Europe and Beyond. Symposium accepted by Division 9 (Assessment, Evaluation, Testing and Measurement) of the European Conference on Educational Research 2020, Glasgow, UK. (Conference cancelled)
O'Leary, M., Lysaght, Z., NicCraith, D. & Scully D. (2020). Teacher Perspectives on Standardised Testing of Achievement in Ireland. Paper accepted for presentation at the Annual Meeting of the American Educational Research Association (AERA), San Francisco, United States. (Conference cancelled).
O'Leary, M. & Reynolds, K. (2020). Critical Thinking Test Validity in International Contexts: Evaluating the ETS HEIghten Test for Irish Use. Paper accepted for presentation at the Annual Meeting of the American Educational Research Association (AERA), San Francisco, United States. (Conference cancelled).
Pitsia, V., Lysaght, Z., O’Leary, M., & Shiel, G. (2020). A multilevel binary logistic regression analysis of mathematics and science achievement in TIMSS 2015 in an Irish post-primary context. Paper accepted for presentation at the European Conference on Educational Research 2020, Glasgow, UK. (Conference cancelled).
Pitsia, V. (2020). Characteristics of high achievers: A multilevel logistic regression analysis of PISA mathematics and science data. Paper accepted for presentation at the American Educational Research Association (AERA) 2020 Annual Meeting, San Francisco, United States. (Conference cancelled).
Pitsia, V., Lysaght, Z., Shiel, G., & O’Leary, M. (2020). Are we meeting the needs of high achievers? A closer look at PISA, TIMSS and PIRLS data for Ireland. Paper accepted for presentation at the Educational Studies Association of Ireland Conference 2020, Dublin, Ireland. (Conference cancelled).
Reynolds, K., Abrams, L. & O'Leary, M. (2020). Applying the Webb Alignment Model to Professional Testing. Paper accepted for presentation at the Association of Test Publishers (ATP) Conference, San Diego, United States. (Conference cancelled).
Scully, C. (2020). Inter-rater reliability and validity in the Objective Structured Clinical Examination. Paper accepted for presentation at the Educational Studies Association of Ireland, Dublin, Ireland. (Conference cancelled).
2019
Abrams, L., Lysaght, Z., & O'Leary, M. (2019). The context and use of standardized testing data for educational decision-making in Ireland. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), Toronto, Canada.
Pitsia, V., O’Leary, M., Shiel, G., & Lysaght, Z. (2019, November). What do international large-scale assessments tell us about high achievement in mathematics and science, with specific reference to Ireland and some comparison countries? Paper presented at the 20th Annual Meeting of the Association for Educational Assessment – Europe (AEA-E), Lisbon, Portugal.
Clifford, I., & Karakolidis, A. (2019, September) The challenges and opportunities of including animated items in licensure examinations: Insights from two research studies. Paper presented at the European Association of Test Publishers (E-ATP) Conference, Madrid, Spain.
Karakolidis, A. (2019, April). The use of animations in assessment: Comparing an animated and a text-based situational judgment test. Paper presented at the American Education Research Association (AERA) Annual Meeting. Toronto, Canada.
Karakolidis, A., O'Leary, M., & Scully, D. (2019). Enhancing assessment validity through the use of animated videos: An experimental study comparing text-based and animated situational judgement tests. Paper presented at the 20th annual Association for Educational Assessment – Europe (AEA-E) conference, Lisbon, Portugal.
Lehane, P. (2019, November). Designing Digital Assessments: Factors to Consider when Developing Computer-Based Exams. Paper presented at the World Conference of Online Learning (WCOL) conference, Dublin, Ireland.
Lehane, P. (2019, September). Considering ‘Emotional Intelligence’ in Certification and Licensure Testing. Paper presented at the European Association of Test Publishers (E-ATP) Conference, Madrid, Spain.
Lehane, P. (2019, April). Assessment 2.0: Factors to consider when developing technology-based assessment. Paper presented at the Educational Studies Association of Ireland (ESAI) conference, Sligo, Ireland.
Lehane, P. & Kuan, L. (2019, March). Finding a suitable task analysis for developing competency-based statements. Paper presented at the Association of Test Publishers (ATP) Conference, Orlando, United States.
Lehane, P. & Waters, L. (2019, March). What makes a difference for candidates taking computer-based tests? The impact of devices and user interface tools. Paper presented at the Association of Test Publishers (ATP) Conference, Orlando, United States.
O'Leary, M. & Champagne, K. (2019, March). Alternative and high-fidelity item types. Where are we now with technology-based items? Paper presented at the Association of Test Publishers (ATP) Conference, Orlando, United States.
O’Leary, M., Scully, D. & Lehane, P. (2019, September). Considering emotional intelligence in certification and licensure testing. Paper presented at the Association of Test Publishers (ATP) Public Sector Special Interest Group Conference, Vienna, Austria.
Pitsia, V., Karakolidis, A., & Shiel, G. (2019, June). High achievement in mathematics and science: A multilevel analysis of TIMSS 2015 data for Ireland. Paper presented at the 8th International Association for the Evaluation of Educational Achievement (IEA) International Research Conference (IRC-2019), Copenhagen, Denmark.
Scully, D. (2019, January). The Learning Portfolio in Higher Education: A Game of Snakes and Ladders? Paper presented at the Middle East and Africa Association of Test Publishers (MEA-ATP) Conference, Abu Dhabi, United Arab Emirates.
Scully, D. & Kuan, L. (2019, January). Providing useful and effective diagnostic feedback. Paper presented at the Middle East and Africa Association of Test Publishers (MEA-ATP) Conference, Abu Dhabi, United Arab Emirates.
Scully, D., O'Leary, M. & Lehane, P. (2019, September). Considering emotional intelligence in certification and licensure testing. Paper presented at the European Association of Test Publishers (E-ATP) Conference, Madrid, Spain.
2018
Karakolidis, A. & Corrigan, R. (2018, February). Is a picture worth a thousand words? Using animated items in certification testing. Paper presented at the Association of Test Publishers Conference, San Antonio, TX.
Karakolidis, A. & Pitsia, V. (2018, September). Transforming Situational Judgement Tests through the use of animated simulations. Paper presented at the European Association of Test Publishers (E-ATP) Conference, Athens, Greece.
Lehane, P. (2018, September). Device Comparability – Does it matter what device is used to administer assessments? Paper presented at the European Association of Test Publishers (E-ATP) Conference, Athens, Greece.
Lysaght, Z. & Scully, D. (2018, September). Providing Diagnostic Feedback to Non-Successful Test Candidates. Paper presented at the Association of Test Publishers (ATP) Public Sector Special Interest Group Conference, Brussels, Belgium.
Scully, D. & Kuan, L. (2018, September). Identifying the Task Analysis of Best Fit. Paper presented at the European Association of Test Publishers (E-ATP) Conference, Athens, Greece.
Scully, D. & Lysaght, Z. (2018, September). Diagnostic Feedback – Dos and Don’ts. Paper presented at the European Association of Test Publishers (E-ATP) Conference, Athens, Greece.
Scully, D., O’Leary, M. & Brown, M. (2018, August). Key Findings from a Literature Review on the Use of Learning Portfolios (ePortfolios) in Higher Education. Paper presented at the World Education Research Association (WERA) World Congress, Cape Town, South Africa.
Scully, D., O'Leary, M. & Brown, M. (2018, June). The Learning Portfolio in Higher Education: An Integrative Review. Paper presented at the EdMedia + Innovate Learning International Conference, Amsterdam, Netherlands.
Scully, D., O'Leary, M. & Brown, M. (2018, May). A Game of Snakes and Ladders: An Integrative Review of Learning Portfolio in Higher Education. Paper presented at the Irish Learning Technology Association EdTech Conference, Carlow IT.
Scully, D., O'Leary, M. & Brown, M. (2018, May). The Learning Portfolio in Higher Education: A Game of Snakes and Ladders? Paper presented at the CRA/AAEEBL International Seminar, ePortfolios and more: the developing role of ePortfolios within the digital landscape, Dublin City University.
Scully, D. & Ridgley, K. (2018, February). Competencies - What are they and how do we best define them in order to measure them? Paper presented at the Association of Test Publishers Conference, San Antonio, TX.
Stemler, S., Elliott, J., O'Leary, M., Scully, D., Karakolidis, A. & Pitsia, V. (2018, April). A Cross-Cultural Study of High School Teachers' Tacit Knowledge of Interpersonal Skills. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), New York, NY.
Kent, G., & Pitsia, V. (2018, December). Gender differences in cognitive development and school readiness: Findings from a randomised controlled trial of children from communities of socio-economic disadvantage in Ireland. Paper presented at the Annual conference of the Children's Research Network 2018, Dublin, Ireland.
Kent, G., Pitsia, V., & Colton, G. (2018, November). Preparing for the transition to primary school: predictors of school readiness behaviours of five-year-old children from areas of socio-economic disadvantage. Paper presented at the 48th Annual Conference of the Psychological Society of Ireland, Wexford, Ireland.
Pitsia, V., Karakolidis, A., Sofianopoulou, C. & Emvalotis, A. (2018, October). Risk factors for early school leaving in Greece. Paper presented at the ESA/RN27 Mid-term Conference 2018, Catania, Italy.
Pitsia, V., & Kent, G. (2018, November). Variations in perceptions of five-year-old children’s school readiness among parents and teachers. Paper presented at the 11th annual International Conference of Education, Research and Innovation (pp. 8426–8435).
2017
O'Leary, M., Scully, D. & Karakolidis, A. (2017, September). Refining Situational Judgement Tests. Paper presented at the European Association of Test Publishers Conference, Noordwijk, The Netherlands.
O'Leary, M., Scully, D. & Karakolidis, A. (2017, September). Challenging a Tenet of Multiple Choice Testing: Are Four Response Options Really Necessary? Paper presented at the European Association of Test Publishers Conference, Noordwijk, The Netherlands.
Gurhy, A. M. (2017, April). Using assessment for learning to enhance mathematics education in the primary school: A lesson study approach. Paper presented at the Annual Conference of the Educational Studies Association of Ireland Conference, University College Cork.
Karakolidis, A., Scully, D. & O'Leary, M. (2017, March). Simulations in Assessment and the Uncanny Valley: Too true to be good? Paper presented at the International Technology, Education and Development (INTED) Conference, Valencia, Spain.
Scully, D. & Waters, L. (2017, March). Driving Program Decisions Through Research: Applying Best Practices. Paper presented at the Association of Test Publishers Conference, Scottsdale, AZ.
Gurhy, A. M. (2017, February). Using assessment for learning to enhance mathematics education in the primary school: Irish students' perspectives. Paper presented at the 10th Congress of European Research in Mathematics Education (CERME 10), Croke Park, Dublin.
2016
Lysaght, Z. & O'Leary, M. (2016, August). Developing Assessment Capacity in Norway and Ireland using the Assessment for Learning Audit Instrument (AfLAi). Paper presented at the Annual Meeting of the European Conference for Education Research (ECER), Dublin.
O'Leary, M. & Lysaght, Z. (2016, April). Using the Assessment for Learning Audit Instrument to inform Data-driven Professional Development. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), Washington, D.C.
Darmody, M. (2016, September). Post-Primary Teachers’ Conceptions of Assessment. Paper presented at the NCCA Assessment Research Showcase, Dublin.
2022
O'Leary, M., Cherry, G., & Scully, C. (2022). Theoretical and Practical Considerations for the Development and Validation of a Hazard Test for Use in Driver Licensing in Ireland. Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University.
Cherry, G., & Scully, C. (2022). A consideration of the use and intended outcomes of incorporating gamification in assessment and learning environments. Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University.
Scully, C., & Cherry, G. (2022). A White Paper on theoretical and practical considerations for the administration of remote Objective Structured Clinical Examinations (OSCEs). Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University.
Cherry, G., & Scully, C. (2022). A Brief on Security Protocols for Live Remote Proctoring. Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University.
Cherry, G., & Scully, C. (2022). A consideration of factors affecting the use of automatic item generation (AIG) in developing items for use in certification and licensure assessments. Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University.
2021
2020
Scully, C., & Lehane, P. (2020). An examination of recent literature on the validity and reliability of outcomes from remote proctored assessments and on candidate experience of taking remote proctored tests. Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University.
Lehane, P., & Scully, C. (2020). Considerations for policies, procedures and regulations (including legal regulations) when using remote proctoring for online licensure and certification tests. Dublin: Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University.
2019
Abrams, L., Morgan, M., & O’Leary, M. (2019). The Measurement of Non-Cognitive Factors Influencing Educational and Workplace Performance. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
Abrams, L., Reynolds, K., & O’Leary, M. (2019). The Application of Webb’s Model of Alignment to Enhance Credentialing Exam Design and Documentation of Score Interpretation and Use Claims. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
Szendey, O., & O’Leary, M. (2019). Automated Item Generation. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
Szendey, O., & O’Leary, M. (2019). Computer Adaptive Testing. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
2018
Abrams, L. (2018). Best Practice in Test Design: Test Content Specification Procedures for Achievement and Credentialing Exams. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
Lehane, P. (2018). Device Comparability: Administering the same test using different device types: A review of the literature and recommendations for practice. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
Lehane, P. & Karakolidis A. (2018). Items in Technology-Based Assessment: A review of the literature and recommendations for practice. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
Lehane, P. (2018). User Interface Tools: A review of the literature and recommendations for practice. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
Reynolds, K. (2018). Short Assessments and Their Applications: Maintenance of Certification and Micro-Credentials. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
Scully, D. & Lehane, P. (2018). Considering Emotional Intelligence in Certification and Licensure Testing. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education (CARPE), Dublin City University.
2017
O'Leary, M. & Scully D. (2017). Providing effective feedback in the context of certification and licensure testing with particular reference to non-successful test candidates. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Scully, D. (2017). Reducing Gender Bias in Tests and Test Items. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Karakolidis, A., O’Leary, M. & Scully, D. (2017). The development and provision of practice tests: A research brief outlining key findings and recommendations for best practice. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Karakolidis, A., O’Leary, M. & Scully, D. (2017). Online Proctoring: A research brief outlining current knowledge of an emerging practice. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
2016
Scully, D. (2016). Awarding Partial Credit in Multiple-choice Testing. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Scully, D. (2016). Some Recommendations for Item Development. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Karakolidis, A. (2016). Simulations: Fidelity and the Uncanny Valley. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Scully, D. (2016). Constructing MC items at higher levels of cognitive complexity for certification and licensure tests. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Scully, D. (2016). Organizing certification or licensure tests by competency statements: a review of the literature and recommendations for practice. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Scully, D. (2016). A review of empirical research and theory on the use of situational judgement items and recommendations for best practice. Unpublished manuscript, Centre for Assessment Research, Policy & Practice in Education, Dublin City University.
Lysaght, Z., O'Leary, M. and Ludlow, L. (2021). An instrument for measuring Assessment for Learning (AfL) in the classroom. Research Outreach. Available at: https://researchoutreach.org/articles/instrument-measuring-assessment-learning-afl-classroom/
Resources
CARPE have compiled an annotated list of research and commentaries on assessment conducted in Ireland from 2000 to the present, which may be a helpful resource for those interested in or pursuing research on assessment. Peer-reviewed articles, reports and doctoral theses are included in this repository, with the most recently published work cited first. This list will be updated continually, and we invite suggestions for any additional research to be included.
This chapter begins with an overview of how compulsory schooling in Ireland is organised. Ireland's patterns of performance across various international surveys of achievement at primary and post-primary levels, as well as adult literacy studies, are then described and evaluated. Although the analysis makes it clear that, in general, between 1995 and 2018, Irish students performed well in comparison with their international counterparts, Programme for International Student Assessment (PISA) 2009 was an exception. At the time, the term 'PISA shock' was used to describe a set of poor results that challenged prior notions about high standards of achievement in Ireland and accelerated policy change. As we discuss, the most prominent example of policy reform in response to international survey results in Ireland was the introduction of the 2011-2020 national literacy and numeracy strategy by the Department of Education and Skills in 2011. The strategy set out a number of planned actions, and the outcomes of these actions are evaluated towards the end of the chapter. The chapter concludes by considering issues pertinent to the usefulness of cross-national achievement surveys in supporting the process of educational monitoring in Ireland.
National policy initiatives in Ireland, such as Project Maths and the Science, Technology, Engineering, and Mathematics (STEM) Education Policy Statement, have sought to increase the engagement and performance of students in mathematics and science. The current study investigated the performance of students in Ireland in these areas and in reading in international large-scale assessments (Programme for International Student Assessment [PISA], Trends in International Mathematics and Science Study [TIMSS], and Progress in International Reading Literacy Study [PIRLS]), with a view to better understanding the performance of high-achieving students relative to their counterparts in other countries in general, and in countries with similar average performance in particular. Lower than expected proportions of high achievers were noted, and a pattern of relative underachievement was observed among high achievers – those performing at the highest levels of proficiency, and those performing at key benchmarks, including the 75th and 90th percentiles – in mathematics and science to a greater extent than in reading. These issues were found to be consistent over time, and more prevalent at post-primary level than at primary level. The findings of this study are discussed with reference to individual and societal costs, and specific issues that need to be investigated further are identified.
In line with the widespread proliferation of digital technology in everyday life, many countries are now beginning to use computer-based exams (CBEs) in their post-primary education systems. To ensure that these CBEs are delivered in a manner that preserves their fairness, validity, utility and credibility, several factors pertaining to their design and development will need to be considered. This research study investigated the extent to which the design of different types of test items (e.g. inclusion of multimedia stimuli) in a CBE can affect test-takers’ engagement and behaviour. Qualitative data from a cued-Retrospective Think Aloud (c-RTA) protocol were gathered from 12 participants who had participated in a previous eye-tracking study. Participants watched a replay of their eye movements and were asked to state out loud what they were thinking at different points of the replay. Thematic analysis of the responses from these cognitive interviews captured the nature of students’ interactions with online testing environments under three main themes: Familiarisation, Sense-making and Making Decisions. Students also provided their opinions of and recommendations for the future of Irish online assessments. These findings can offer guidelines to all stakeholders considering the use of CBEs in post-primary contexts.
Save for the 2009 Programme for International Student Assessment (PISA) shock, Ireland has recorded strong average scores in mathematics, science, and reading on national and international large-scale assessments. Despite this, percentages of high achievers in mathematics and science in these assessments have remained stubbornly lower than those of some countries with average performance similar to that of Ireland. Given the multifaceted benefits to individuals and society of knowledge and skills in science, technology, engineering, and mathematics (STEM), the increasing prioritisation of high achievers in mathematics and science in Irish educational policy over the past decade in particular is not unexpected, albeit this was not always the case. This paper offers a chronology of Irish educational policy documents since 1995, illuminating why, when, and how high achievement in mathematics and science has emerged as a key component and driver of educational policy reform.
Evidence suggests that the quality of teachers’ instructional practices can be improved when these are informed by relevant assessment data. Drawing on a sample of 1,300 primary school teachers in Ireland, this study examined the extent to which teachers use standardized test results for instructional purposes as well as the role of several factors in predicting this use. Specifically, the study analyzed data from a cross-sectional survey that gathered information about teachers’ use of, experiences with, and attitudes toward assessment data from standardized tests. After taking other teacher and school characteristics into consideration, the analysis revealed that teachers with more positive attitudes toward standardized tests and those who were often engaged in some form of professional development on standardized testing tended to use assessment data to inform their teaching more frequently. Based on the findings, policy and practice implications are discussed.
This paper provides a perspective on the manner in which Irish post-primary teachers interpreted and implemented a set of guidelines created by the Department of Education and Skills (DES) in Ireland when faced with the cancellation of the traditional high stakes Leaving Certificate (LC) examination due to COVID-19. Subject teachers were asked to engage with a system of calculated grades whereby they would estimate a percentage mark and a class rank for each of their students before meeting with school colleagues to agree a final set of data to be submitted for national standardisation. This was a remarkable event in Irish education as teachers had never before been directly involved in assessing their own students for certification purposes. Data from a survey conducted with teachers (n = 713) show that a wide variety of evidence was used to support their judgements and that the DES guidelines were not always implemented as intended. Challenges highlighted in the paper include decision making around grade boundaries, the lack of evidence for newer subjects, negotiating with school colleagues, and anticipating the impact of national standardisation. The study findings will be of interest to future initiatives involving the professional judgement of teachers in high stakes contexts.
Primary schools in Ireland are required to administer standardised tests in English reading and Mathematics in second, fourth and sixth classes, and to report the aggregated results to their boards of management and the Department of Education and Skills (DES). Since September 2017, results are used at national level as part of the process of determining the allocation of special educational teaching resources to schools. Schools are also required to share results with parents in written format using end-of-year school reports at the three mandatory testing points. The international literature on standardised testing suggests that when test scores are shared widely and used for purposes beyond internal planning, the associated sense of accountability can lead to a culture of teaching to the test and narrowing of the curriculum. Although the stakes associated with standardised testing in Irish primary education remain relatively low, recent policy changes have increased the focus on these instruments. In the wake of these changes, a survey of Irish primary teachers was conducted. A collaborative effort by the Centre for Assessment Research, Policy and Practice in Education (CARPE) and the Irish National Teachers' Organisation (INTO), the research aimed to gather information about how standardised tests are used and how teachers feel about them. In this paper, data with respect to teachers' beliefs about, and attitudes to, standardised testing are foregrounded and reveal an interesting diversity of opinion with study participants being neither wholly supportive nor wholly opposed to the practice.
The study at the heart of this paper was conducted in 2017 to gather data on Irish post-primary teachers’ conceptions of assessment at the time immediately following the introduction of a revised policy for assessment by the Department of Education and Skills (DES). Central to the reform policy was an increased emphasis on formative assessment and a requirement that teachers engage in summative assessment for certification purposes – something that had never applied previously. The paper provides an overview of the literature on teachers’ beliefs, including Brown’s (2004, 2006) Teachers’ Conceptions of Assessment Inventory. Data from an implementation of the inventory with 489 Irish teachers are used to consider how they conceive of assessment, how these conceptions compare and contrast with those held by teachers in other jurisdictions where the instrument has been used and how the data might be used to inform policy change and implementation in Ireland.
This discussion paper provides an outline of how stakeholders across the Irish higher education sector were supported to develop a national understanding of assessment and feedback through local and national conversations. Within the context of a national enhancement initiative and utilising a conversational approach, a collective effort was made to provide clarity around concepts related to assessment and feedback in Irish higher education. The approach taken and the perspectives that emerged from these conversations, along with the resulting national understanding, are presented and discussed with reference to related literature.
In this paper, the authors use a critical policy historiography approach to explore the impact of globalisation forces on two relatively recent curriculum reforms - the Australian Curriculum and Ireland’s Framework for Junior Cycle. Both reforms employ triadic models of curriculum design involving subjects/learning areas, key skills/general capabilities, and statements of learning/cross-curriculum priorities. Globalisation influences are clearly evident in the shared emphases in both jurisdictions on skills, learning and school/teacher agency. However, these reforms have inevitably been shaped by their respective local political and social contexts, and the respective curriculum debates have been dominated by technical implementation issues such as curriculum overload in Australia and school-based versus external assessment in Ireland. The paper also offers an overview of both education systems.
This in-depth study on the use of standardised tests in Mathematics and English Reading in Irish primary schools was led by the Centre for Assessment Research, Policy and Practice in Education (CARPE) at DCU and the Irish National Teachers’ Organisation (INTO). It was the first large scale investigation of Irish primary teachers’ use of, beliefs about and attitudes to the tests, with the views of 1564 respondents recorded. The research suggests that while standardised tests bring value over and above other assessments and ultimately provide information which is essential in broadening the focus of decision-making about teaching and learning within classrooms and across schools, unintended negative consequences can also result from their use.
This case study investigates factors influencing efforts to introduce school-based assessments (SBAs) in lower secondary education in the Republic of Ireland and reactions from the main stakeholders. Policymakers’ perspectives were informed by national consultations, results of international assessments, trends towards skills-based curricula and practices in relation to SBA as part of high-stakes assessment internationally. Despite broad enthusiasm for the reforms from most stakeholders, teachers remained opposed. A series of compromise proposals shifted the reforms far from their intended nature, leaving in place a dual system of assessment that incorporates continued centralised examining by the state along with some non-certified SBA by teachers. The efficacy of this solution in relation to the original aims of the reform remains to be seen. The analysis explores relevant substantive and methodological issues. The complex interplay between international, national and very local influences on policy implementation is highlighted, suggesting the need for due diligence in anticipating and managing stakeholder responses to reform initiatives. Readers’ attention is also drawn to the intricacy of undertaking qualitative case study inquiry and the need for awareness in relation to possible alternative interpretations of data.
The terminal examination of post-primary education in Ireland, the Leaving Certificate, is often criticised for its reliance on memory recall over higher order thinking skills in the assessment process. In order to examine the evidence base for these critiques, this article presents an empirical investigation of the intellectual skills and knowledge domains implicit in the tasks in the written examination papers of 23 subjects in the Leaving Certificate in Ireland from 2005 to 2010. Data were collected from two sources: examination papers and student interviews. In an in-depth document analysis of the examination papers, 14,910 occurrences of command verbs were coded for the intellectual skill and knowledge domains required by the assessment task. As the same verb can require different intellectual skills in different subjects and in different tasks, each occurrence of every verb was assigned a specific value depending on its context. The article presents the frequencies and distributions of intellectual skills and knowledge domains within and across subjects. In light of key points in the literature, the findings raise concern regarding the level of challenge and stimulation the Leaving Certificate provides for students' development.
This paper explores the Leaving Certificate Physical Education curriculum development process. Ten members of the Physical Education Development Group were interviewed. The findings centred on curriculum content knowledge, assessment weightings, and debate over the responsibility for assessing students’ work.
Self-assessment practices have been advocated in recent Irish educational documents due to their potential to enhance school children’s learning and self-regulatory skills. However, the literature has highlighted how some children struggle to make accurate self-assessments of their academic work, which diminishes such positive effects (Keane and Griffin 2015; Nicol 2009). Using Piaget’s Theory of Cognitive Development (1970) as a theoretical framework, the present study sought to investigate whether children’s academic self-assessments became more accurate in line with increased age and higher prior literacy attainment. Following training in the use of self-assessment writing rubrics, 85 school children from second class, fifth class and Transition Year wrote an English essay and later self-assessed their work using rubrics devised by Andrade, Du, and Wang (2008). Results indicated that overall, children’s self-assessment scores held a weak relationship with their actual performance scores (r = .24). However, findings illustrated that children’s self-assessments became significantly more accurate in line with increased developmental stages. Strong correlations also emerged between higher prior literacy attainment and children’s accuracy in self-assessments, amongst second class (r = −.45) and fifth class (r = −.73) children only. The findings suggest that Irish school children, in particular, primary school children with low literacy attainment, display difficulty making accurate self-assessments of their academic work in literacy. Stemming from the research, implications for practice and future research directions are outlined.
Discussion continues in Ireland around post-primary teachers being responsible for assessing their own students’ work. The new junior cycle reform (covering the first three years of post-primary education) is concerned with making fundamental changes in approaches to learning, teaching, curriculum and assessment, with school-based assessment as an important element of the reform. This paper sets out to map assessment policy in a changing and contested assessment environment in the Republic of Ireland. The paper tells the story of assessment in junior cycle from the first progress report in 1999 on a review of the curriculum that had been introduced for students in the junior cycle of post-primary schools in 1989 to the 2015 Framework for Junior Cycle. We document the intention to move away from assessment as solely a means of making summative judgements towards assessment as a support of learning and teaching.
Darmody, M. (2017) Irish post-primary teachers’ conceptions of assessment. Doctor of Education thesis, Dublin City University.
The purpose of this research was to elicit baseline data about Irish post-primary teachers’ conceptions of assessment. Post-primary education in the Republic of Ireland is currently in the midst of significant curriculum and assessment reform at Junior Cycle, the first three years of the secondary school system. Central to this change is the positioning of the teacher at the heart of the assessment process. The successful implementation of the new assessment practices will not only require a high level of teacher assessment literacy, but will also depend upon the extent to which teachers’ conceptions of assessment align with the philosophical underpinnings of the reform. Research has indicated that teachers’ beliefs serve to filter information entering the cognitive domain, to frame particular educational situations or problems and to guide teachers’ intentions and actions (Fives & Buehl, 2012). In light of this evidence, the introduction of new assessment initiatives should take account of how teachers conceive of the nature and purpose of assessment. Adopting a non-experimental cross-sectional design, this study surveyed a large sample (n=489) of post-primary teachers using the abridged version of Brown’s (2006) Teachers’ Conceptions of Assessment Inventory (TCoA-IIIA). This 27-item self-report instrument is designed to elicit teachers’ level of agreement with four intercorrelated assessment factors (i.e., school accountability, student accountability, improvement and irrelevance). Quantitative data derived from the survey were analysed using a mixture of descriptive statistics, exploratory factor analysis, independent samples t-tests and one-way analysis of variance. Maximum likelihood exploratory factor analysis resulted in a 5-factor solution for the Irish data which differed somewhat from Brown’s (2006) original model. Implications of the results for the conceptualisation of assessment in the Irish post-primary context are considered.
Gurhy, A. (2017). Using assessment for learning to enhance the teaching and learning of mathematics in one Primary school: a lesson study approach. Doctor of Education thesis, Dublin City University.
Over the course of one academic year, this practitioner research study investigated the impact of AfL practices on the teaching and learning of mathematics at fourth-class level in one primary school. Specifically, it explored how the use of AfL principles, strategies and techniques affected students’ attainment on standardised mathematics tests and their dispositions towards mathematics. Additionally, the research investigated the potential of lesson study (LS) as a vehicle of collaborative professional learning in AfL and considered the impact engaging in LS had on teachers’ skills, knowledge, and use of AfL, and their beliefs towards AfL as a form of assessment. This study also provided unique insights into the perspectives of both teachers and students on using AfL in mathematics. Findings revealed significant effect size gains in children’s confidence, motivation and attitudes regarding mathematics, although there was no appreciable difference in students’ standardised mathematics scores when compared to the comparison group. Additionally, indications are that teachers found LS to be a very effective model of CPD in AfL.
This paper sets out to outline current discussions in Ireland around teachers being responsible for assessing their own students’ work, and the subsequent impact such a perspective is having (or not) on the delivery and assessment of physical education in Ireland. This discussion is particularly timely given the very recent endorsement for the introduction of the new Leaving Certificate Physical Education as a full optional subject. The authors begin by discussing more specifically assessment in Irish primary and post-primary schools, drawing attention to the limited Irish assessment-related research being conducted in both contexts. They then explore assessment developments related to Irish primary physical education and post-primary physical education and compare the extent to which such developments are limited in comparison to international assessment interests and practices in physical education. The paper concludes with suggestions related to studying (pre-service) teachers’ and students’ exposure to assessment in order to understand how to alter the balance of assessment purposes and uses in Irish schools.
Giving students a choice of assessment methods is one approach to developing an inclusive curriculum. However, both staff and students raise concerns about its fairness, often described as its equity. This study investigates their perceptions of the fairness of the procedures and outcomes of this approach to assessment, in nine modules in a university setting. Using a tool validated as part of the study, students’ views on procedural fairness were gathered (n = 370 students). In addition, seven module co-ordinators were interviewed. A seven-step approach to the design of the assessment choices was used. The results demonstrated that students were satisfied that their assessment choices were fair in levels of support, feedback, information and, to a lesser extent, student workload and examples of assessment methods. In exploring fairness of the outcomes, the students’ grades were not significantly different between the two sets of choices. However, based on staff interviews, the overall grades were higher than previous cohorts and higher than average for current student cohorts in the institution. The discussion highlights some of the complex issues surrounding fairness (equity) using assessment choice and, in addition, the paper refers to some practical tools for its implementation.
Exploiting the potential that Assessment for Learning (AfL) offers to optimise student learning is contingent on both teachers’ knowledge and use of AfL and the fidelity with which this translates into their daily classroom practices. Quantitative data derived from the use of an Assessment for Learning Audit Instrument (AfLAI) with a large sample (n = 594) across 42 primary schools in the Republic of Ireland serve to deprivatise teachers’ knowledge and use of AfL and the extent to which AfL is embedded in their work. The data confirm that there is an urgent need for high-quality teacher professional development to build teacher assessment literacy. However, fiscal constraints coupled with the fractured nature of current provision render it impossible to offer sustained support on a national scale in the immediate term. In response, this paper proposes the adoption of a design-based implementation research approach to site-based collaborations between researchers, teachers and other constituent groups, such as that engaged in by the authors over recent years, as a mechanism for addressing teachers’ needs in a manner that also supports other participants’ professional interests.
Teachers’ capabilities to conduct classroom assessment and use assessment evidence are central to quality assessment practice, traditionally conceptualised as assessment literacy. In this paper we present, firstly, an expanded conceptualisation of teachers’ assessment work. Drawing on research on teacher identity, we posit that teachers’ identity as professionals, beliefs about assessment, disposition towards enacting assessment, and perceptions of their role as assessors are all significant for their assessment work. We term this reconceptualisation Teacher Assessment Identity (TAI). Secondly, in support of this conceptual work, we present findings from a systematic review of self-report scales on teacher assessment literacy and teacher identity related to assessment. The findings demonstrate that such scales and previous research exploring teacher assessment practices have paid limited attention to what we identify as essential and broader dimensions of TAI. We share our reconceptualisation and analyses to encourage others to consider teacher assessment work more broadly in their research.
This report profiles the documented assessment practices across a sample of 30 undergraduate degree programmes. The study also explores whether and how assessment practices differ between fields of study, and shares insights regarding students’ experiences of assessment across Irish higher education.
The assessment of educational progress and outcomes of pupils is important to all concerned with education. This includes testing which is undertaken for accountability and award bearing purposes. This article examines how students with special educational needs and disability (SEND) are included in assessment. An “inclusive assessment” framework is outlined based around three core features: (1) all students are included and benefit from assessment; (2) assessments are accessible and appropriate for the diverse range of children in the education system; and (3) the full breadth of the curriculum is assessed (including curriculum areas of particular relevance to students with SEND). Assessment policies and practice in three countries (England, Ireland and the US) are drawn upon to demonstrate how the framework usefully enables between-country comparisons and within-country analysis. This analysis shows that in comparison to Ireland, the US and England have highly developed system-based approaches to assessment which seek to “include all” (feature 1) and be “accessible and appropriate” (feature 2). However, the analysis highlights that a consequence of such assessment approaches is the narrowing of the curriculum around topics that are assessed (most notably literacy and mathematics). Such approaches therefore may be at the expense of wider curriculum areas that have value for all students, but often of particular value for those with SEND (feature 3). It is argued that within such systems there may be a danger of neglecting the third feature of the inclusive assessment framework, i.e. ensuring that the full breadth of the curriculum is assessed. A consequence of such an omission could be a failure to assess and celebrate progress in relation to educational outcomes that are relevant to a diverse range of students.
In this paper, Harrison, O’Hara & McNamara criticise the assessment system in Irish education, which they believe continues to rely solely on traditional ‘teacher-centred’ methods. Arguing that this fosters a sense of dependency in learners and undermines their potential to become self-reliant individuals, they advocate for the introduction of self- and peer-assessment (S&PA) strategies, beginning as early as possible in the education system. Following this, they document their investigation of S&PA applied to student group work, which included 11 teachers and 523 students across a range of contexts, from primary and post-primary classrooms to further education settings with early school leavers and senior learners. In each setting, students selected criteria they believed to be important in the process of group work, and marked both themselves and their peers according to these criteria. This was then combined with the teacher’s mark for the overall product. Following analysis of interviews with the teachers in question, observations, and a research journal, Harrison et al. suggest that S&PA can be as valid and rigorous as traditional assessment, and that it helps students to become self-directed and independent learners. They acknowledge the need for longitudinal studies to determine its true value and benefits, but nonetheless encourage its use.
This article explores the “inclusive assessment” framework, outlining how the concept of inclusivity can refer not only to who is assessed, but also to how pupils are assessed and what is assessed. The assessment policies and practices of Ireland, England and the U.S. are then examined and compared in terms of the extent to which they exhibit these features of inclusivity. Findings reveal that Ireland falls behind both the US and England on the first two of these three criteria, in that national assessments of literacy and numeracy at primary level in Ireland do not include all children with SEN, and do not offer accommodated or alternative versions to ensure that pupils with SEN can be assessed appropriately. On the other hand, the more deeply ingrained culture of national testing in both the US and England may have the unintended consequence of narrowing the range of curriculum outcomes assessed, thus detracting from inclusivity, as some of the neglected areas may include those that are of particular concern to certain SEND groups. This may be less of an issue in Ireland, where mandatory testing at primary level has only recently been introduced, and is conducted less frequently. Continued analysis of assessment practices in these three countries is advised to track the development of these issues.
The PISA 2009 results for Ireland indicated a large decline in reading literacy scores since PISA 2000 (the largest of 38 countries). The decline in mathematics scores since PISA 2003 was the second largest of 39 countries. In contrast, there was no change in science achievement since PISA 2006. These results prompted detailed investigations into possible reasons for the declines, particularly in reading. This paper considers the changes in achievement observed for Ireland in PISA 2009 under two themes: implementation of PISA in Ireland and changes in the cohort of students participating in PISA, and response patterns on the PISA test (as measures of student engagement). It is argued that the case of Ireland represents the 'perfect storm’, since a range of factors appear to have been in operation to produce the results. The discussion attempts to show how the case of Ireland can be relevant to other countries which may have experienced changes in PISA test scores over time. Some of the findings have relevance to international practice in large-scale surveys of educational achievement more generally.
This paper reports on the experiences of five primary school teachers as they implemented formative assessment strategies in the context of physical education. Each teacher planned and delivered a series of 6-8 lessons based on the Primary Schools’ Sports Initiative lesson plans, and selected a variety of written and verbal assessment strategies to examine their pupils’ learning within these lessons. Their experiences of the process were recorded using a combination of focus groups and reflective journals. Qualitative analysis of these data suggested that the process enhanced teachers’ knowledge, pupils’ learning experiences, and the ‘status’ of physical education from the perspective of the learner. Lessons became more structured and learning more explicit. Some challenges were noted, such as guiding pupils in how to engage in peer assessment, and adapting the assessment strategies to suit the context of infant classes. Ni Chroinin & Cosgrave conclude that the use of formative assessment strategies has a positive effect on teaching and learning in the context of physical education, but that the current lack of guidance surrounding the design of these strategies during initial teacher education presents an obstacle to their future use.
This paper describes the design, development and trialling of the Assessment for Learning Audit Instrument (AfLAi), an instrument designed to support teachers in reviewing their knowledge, skills and practices in formative assessment. Lysaght and O’Leary revisit Lysaght’s (2013) concerns regarding the mismatch between teachers’ mental models and basic AfL strategies, and suggest that this may explain why AfL has failed to take hold to the extent that might have been expected, given the wealth of literature attesting its value. The AfLAi is then offered as a practical first step in addressing this issue. Findings arising from a trial of the instrument support its psychometric structure and provide a snapshot of current formative assessment practices in Irish primary schools.
Lysaght, Z. (2013). The professional gold standard: Adaptive expertise through assessment for learning. In F. Waldron, J. Smith, M. Fitzpatrick & T. Dooley (eds.), Re-imagining initial teacher education: Perspectives on transformation (pp.155-176). Dublin: The Liffey Press.
In this chapter, Lysaght calls for the foregrounding of Assessment for Learning in pre-service teacher education. She outlines how the value of the AfL pedagogy extends beyond its well-documented positive effects on learning, through its ability to promote the development of ‘adaptive expertise’, and considers evidence which suggests that these expertise are not yet pervasive in Irish schools. With this in mind, she urges consideration of how pre-service learning environments may be redesigned, such that they model practices compatible with the AfL philosophy, and, in doing so, challenge the outdated mental models of assessment, teaching and learning to which teachers were exposed during their own schooling.
Lysaght, Z. (2012). Towards inclusive assessment. In T. Day & J. Travers (Eds.) Special and inclusive education: A research perspective (pp.245-260). Oxford: Peter Lang.
This chapter presents findings from an evaluation of a site-based teacher learning community (TLC) designed to increase teachers’ understanding and use of AfL, employed in a disadvantaged junior school over a ten month period. Comparisons of pre- and post-intervention reading scores of the experimental and control groups suggest that the intervention did not have an effect on pupils’ reading achievement overall, but further analysis focusing on pupils with SEN show that it may have helped these children maintain their reading level. Of particular note, however, is that additional outcomes revealed substantial changes in children’s use of AfL approaches during reading as a result of the intervention. Lysaght notes that these findings highlight the bluntness of standardised assessments, and argues that a truly inclusive education system is contingent on the development of more sensitive assessment tools that can capture subtle changes in children’s learning.
Using an action research approach, this research explores the impact of an AfL programme on third class pupils’ writing and self-assessment skills. Following a series of lessons on writing, pupils selected pieces of their work for inclusion in a portfolio, accompanied by reflections on their choices. These portfolios were then used during pupil-involved parent-teacher meetings. Darcy notes that although the programme was associated with increases in preparation and lesson time, it yielded an increased sense of teacher-pupil collaboration, and positive reactions from pupils and parents alike.
This research investigates the implementation of an AfL programme consisting of eight science lessons in a junior infant classroom, using an action research approach. Data collected from interviews, observation notes and researcher’s reflective journal suggest that AfL strategies were successfully introduced, with pupils reportedly achieving gains in learning and associated language development. The challenge of developing AfL materials appropriate for this age group and the need for on-going professional development in this area are noted.
This research investigates the use of AfL strategies in physical education. Macken and O’Leary report how sharing learning intentions and success criteria at the start of a series of PE lessons yielded a range of positive outcomes for pupils, including heightened awareness of learning, more positive attitudes towards PE, and more effective use of time during lessons. There are many parallels between these findings and those of Collins & O’Leary (2010), with both studies demonstrating the potential benefits of AfL in areas of the primary school curriculum not traditionally associated with the concept of assessment.
This research investigates attitudes towards standardised assessment in Ireland, comprising a survey of 30 primary school teachers and an interview with a DES Inspector. McNamara considers themes such as accountability, and the phenomenon of 'teaching to the test', that have recurred in both Irish and international literature on standardised assessment, and seeks to explore the extent to which these themes are reflected in classroom practice, in light of the increasing emphasis placed on this assessment format in Irish primary schools. The findings reveal an appreciation of the benefits of standardised assessment amongst the teachers surveyed, coupled with a keen awareness of their limitations and appropriate uses, which McNamara deems 'commendable'. Also evident from this research, however, is that the majority of teachers experience the pressure of 'accountability' associated with standardised assessments from a range of sources, and that approximately one third of teachers engage in activities to prepare their students for the tests during class time, some of which are deemed unethical and detrimental to more meaningful forms of learning. Based on these findings, McNamara offers a number of recommendations for future research and practice.
This paper posits that the increasingly ‘child-centred’ nature of the primary school curriculum, although commendable, may inadvertently facilitate an unacceptable breach of children’s privacy, with ‘patterns of disclosure’ of information pertaining to personal and family life now a fundamental element of pupils’ schooling experience. With regard to assessment, Hanafin et al. draw attention to classroom questioning, observation, peer assessment, and the provision of feedback in group situations. Their argument is not that these practices should be abolished, rather that they should be accompanied by an awareness of the potential negative consequences, and that means of alleviating these should be identified.
In this theoretical paper, Dunphy argues that increasing understanding of assessment and curriculum as inter-related constructs necessitates greater consideration of how children’s learning from birth to six years may be formatively assessed. Theoretical constructs related to early learning are discussed, and, on the basis of these, a variety of methods and approaches that may be used for the formative assessment of early learning are presented. These include the observation of children’s behaviour and actions, the skilful use of questioning and of multiple modes of communication (e.g. gaze, facial expression, gestures) during conversation with young children, and the compilation of portfolios that serve as a record of learning, amongst others. Challenges and additional requirements associated with these methods are also noted, including their time-consuming nature, and the importance of a close personal relationship between the educator and the child.
In this paper, Collins & O’Leary note that, despite the recommendations outlined in the Revised Primary School Curriculum of 1999 that assessment should be an integral part of teaching and learning within all areas of the curriculum, there remains a tendency to perceive it as inappropriate in the context of the visual arts. In response to this issue, they compared two types of lessons on the Fabric and Fibre strand of the art curriculum with fifth class pupils: one with the use of ‘success criteria’ as a method of peer- and self-assessment and one without. Teachers’ and pupils’ experiences of each lesson type were recorded by means of a reflective journal and a questionnaire, respectively, and thematically analysed. The findings suggest that incorporating the use of success criteria yielded a range of positive outcomes, including greater focus on the task during lessons, reduced frustration and increased willingness to engage amongst certain pupils, more constructive feedback from the teacher, and greater variation in the artwork produced. Collins & O’Leary argue that these findings contradict concerns that success criteria may encourage routine compliance and thus reduce the learner’s independence. They conclude that the integration of assessment with teaching and learning in the visual arts is possible.
Lysaght, Z. (2009). From Balkanisation to Boundary Crossing: Using a Teacher-Learning Community to Explore the Impact of Assessment On Teaching and Learning in a Disadvantaged School. Doctor of Education Thesis, Dublin City University.
This study examined the potential of a teacher learning community, as a vehicle of professional development, to bring about changes in teachers’ understanding and use of Assessment for Learning (AfL), in order to improve the reading competency of a cohort of children attending a designated disadvantaged junior school in the Republic of Ireland. Employing a partially mixed, concurrent equal status design (Onwuegbuzie, 2007) and an interpretive framework, the study tested three research hypotheses. Although no statistically significant changes in effect sizes were found between the control and experimental groups from standard reading attainment data, significant findings were reported both in relation to teachers’ knowledge, skills and attitudes to AfL and the approaches to reading adopted by the children in the experimental group. These findings, in turn, highlight the potential of a TLC, reconceived as a boundary zone (Star, 1989), to challenge the traditional Balkanisation of teachers’ working lives (Hargreaves, 1994).
McCrudden, E. (2009). Questioning for appropriate assessment and learning. Master of Science thesis, Dublin City University.
This thesis investigates three forms of assessment in chemistry – the Leaving Certificate Examination (summative) and two distinct continuous assessment methods used during undergraduate chemistry modules in Dublin City University (formative). Referring to Bloom’s Taxonomy of Cognitive Domains, McCrudden reports that most questions in the Leaving Cert. exam are written at the lower levels, and that some topics are not frequently assessed, while others are over-assessed. Analyses of the third-level assessments reveal that these formative methods can encourage students to take a more active role in their learning, and increase their engagement with the material.
In this paper, MacRuairc argues that the possibility of an inherent bias in standardised assessments has not received sufficient consideration when attempting to explain the lower levels of attainment typically observed in disadvantaged schools. He reports on a series of focus groups in which children from both middle- and working-class backgrounds were asked to describe the strategies they employed when responding to items on standardised tests. Children from both backgrounds described similar strategies, such as seeking a context for target words from the test in their own experiences. This revealed a marked discontinuity between the linguistic register of the test instrument and the linguistic repertoire of the working class children. MacRuairc argues that the use of the dominant linguistic code in standardised tests ‘negates a whole way of being’ and erodes the self-efficacy of working-class pupils. He thus cautions that continued use of standardised assessments that fail to acknowledge the language variety used by specific groups may exacerbate stratified patterns of achievement.
This is a reflective and exploratory piece of action research investigating how Assessment for Learning (AfL) practices may be implemented in the Irish language classroom through the use of e-portfolios. Clerkin documents how her second class pupils developed self- and peer-assessment skills and gradually became more autonomous in their learning, through the process of compiling a selection of their work over time using ICT. She simultaneously reflects on her own experiences, citing the challenge of maintaining a balance between supporting her students and allowing them freedom to explore. The need to move towards more formative approaches to the assessment of Irish is expressed, and it is suggested that e-portfolios may be an effective tool to this end.
In the wake of repeated calls for a programme of teacher professional development in assessment in Ireland, this paper provides a suggested menu of topics for inclusion in such a programme. Informed by both international literature and national documents relevant to the Irish context, the proposed programme encompasses numerous aspects of assessment, including assessment terminology, the use of performance assessment tasks to improve learning processes, interpreting standardised test results, facilitating pupil self-assessment, understanding how assessment can cater for a range of pupil abilities, issues associated with grading, and the challenge of communicating assessment information, among many others. O’Leary notes that any such programme should also take into account the research indicating that teacher professional development is more effective when it is school embedded, co-operative and sustained over time.
This research examines the use of Assessment for Learning techniques, including self- and peer-assessment, in the context of English writing with a group of third class students. O’Callaghan reports that these strategies developed students’ competence in identifying strengths and weaknesses in various genres of writing, and promoted skills of reflection, self-correction and independence in writing.
In this paper, Looney & Klenowski consider how the concept of the ‘knowledge society’ has fuelled not just educational ‘reform’, but a thorough reconceptualisation of many key components of education. They then demonstrate how this has been reflected in practice, via case studies of recent changes in curriculum and assessment in Ireland and Queensland, Australia. In Ireland, the NCCA’s review of senior cycle education and the associated consultation process have culminated in the identification of a number of ‘key skills’ (e.g. personal effectiveness, critical thinking, working with others) as the core of a proposed new curriculum. There is no accompanying explanation of how these skills will be assessed, however. In Queensland, the development of the “Queensland Assessment Task” is described as offering a promising opportunity to capture rich information about student achievement in a range of processing skills. Looney & Klenowski compare the two cases, drawing parallels between their emphases on concepts such as ‘skills’ and ‘learning power’, as opposed to ‘content’ and ‘information’. They argue that both policy initiatives illustrate a true transformation in education, but that precisely how assessment practices, in particular high-stakes testing, will be informed by this transformation remains unclear.
This study explores the AfL strategies of sharing learning intentions and identifying success criteria in a phonics lesson for children with mild general learning difficulties, and the introduction of a Teacher Learning Community (TLC), a professional tool designed to support the implementation of AfL strategies in the classroom. Data were collected through the process of observation, documentation from the TLC meetings and Lesson Reviews, children’s lesson sheets and a researcher’s log. The findings reveal a number of challenges associated with the use of AfL strategies in the special classroom.
This action research investigates the implementation of self- and peer-assessment strategies in the context of process writing with fourth class pupils. Pupils received a series of lessons on writing, during which specific learning targets and success criteria were introduced. Data were collected through observations of the lessons, pupils’ writing samples, focus groups with pupils and interviews with teachers. Feedback to and from peers, self-reflection, feedback from the teacher, and engagement emerged as significant themes facilitating the process of self- and peer-assessment. Elements of writing that improved throughout the course of the ten week study included word usage, lead sentences, character descriptions and titles. On the basis of these findings, Lambert identifies areas for further research.
This is a comprehensive document considering the role of assessment in primary education. It opens with an overview of the developments pertaining to assessment that took place during the period 1997-2008 as a result of the Education Act (1998), the Revised Primary School Curriculum (1999), and the introduction of mandatory standardised testing at two stages during primary schooling. In Chapter 2, three major purposes of assessment are identified and explored, namely: (i) to support the process of teaching and learning, (ii) to report on pupils’ progress and (iii) accountability. It is acknowledged that all three are valid, and that the purpose of an assessment should determine the type of assessments to be used. Chapter 3 outlines the general assessment policies and practices in Ireland, whilst Chapter 4 reports on the findings arising from a questionnaire administered to teachers relating to specific assessment practices in schools. Chapter 5 considers international assessment practices, whilst Chapter 6 offers a series of recommendations regarding assessment policy and practice for the future, including the allocation of time for planning for assessment, the provision of professional development in relation to assessment, and the development of standardised assessments of Irish, amongst others. The second half of the document presents the proceedings of the Consultative Conference on Education (2008).
In this paper, O’Leary puts forward a model of what he considers to be a balanced assessment system for Irish schools. He opens by considering the increasing prominence of assessment in the Irish education discourse, and provides a comprehensive definition of the term, drawing attention to its expansion in recent years to include the notion of assessment for learning as well as assessment of learning. He contends that good assessment should inform decision-making, acknowledging the challenges arising from the fact that various stakeholders in education have different decisions to make. These diverse needs, he argues, are what necessitate a balanced assessment system, and he laments the situation in many countries whereby bureaucratic requirements are prioritised at the expense of teaching and learning. His principal argument is that assessment should serve the needs of learners first and foremost, and he makes a series of recommendations as to how this might be achieved. These include prioritising classroom assessment and resisting the introduction of mandated national testing.
Six years on from Hall’s criticisms of assessment policy and practice in primary schools, this paper provides an overview of the developments that have taken place since then, whilst simultaneously considering the situation at post-primary level. Looney notes that certain cultural and economic factors in Ireland have fostered widespread faith in its education system, with the result that the response to any issues arising is typically to demand additional resources, as opposed to seeking fundamental change. She sees this as the major reason why ‘assessment-led reform’, which has featured heavily in educational policy internationally, has not yet occurred in Ireland. Looney echoes many of Hall’s initial concerns regarding primary school assessment, but notes some progress, namely an emerging dialogue surrounding good practice, in which teachers are included, and increasing recognition of the need for professional development in the area of assessment for learning. In response to the recent government proposal to introduce mandated standardised testing in literacy and numeracy, she warns against the ‘assessment hierarchy’ that this may create, and reiterates the many different purposes of assessment. Finally, Looney argues that post-primary assessment is also in urgent need of reform (specifically, a move away from the focus on high-stakes state examinations), but that little to no progress has been made in this domain in comparison to the emerging changes evident at primary level.
In this paper, Kilfeather, O’Leary and Varley document their development of a set of performance-based tasks to facilitate science assessment in primary schools. This project was conducted over a four-year period in response to both the introduction of science as a subject area at primary level, and the emphasis on assessment as an integral part of teaching and learning under the revised primary curriculum of 1999. Five distinct stages of the project are described. Phase One involved locating extant performance tasks used in five English-speaking countries with science curricula at primary level, whilst Phase Two involved adapting these tasks to match the aims and objectives of the Irish curriculum. In Phase Three, a selection of the tasks was sent to a representative sample of primary teachers for evaluation, and in Phase Four, 11 of the tasks were evaluated ‘in action’ in different classroom settings. Evaluations revealed a predominantly positive response, with teachers reporting active involvement of teachers and pupils, pupil enjoyment of the tasks, and the potential to use the assessment information gleaned in different ways. Finally, in Phase Five, amendments were made to the tasks on the basis of these evaluations. Kilfeather et al. note that, as a result of this project, 124 tasks are now well aligned with the Irish primary science curriculum, and may be used for teaching, learning and assessment in science in Irish primary schools.
This research investigates the use of learning stories as a form of assessment over a ten-week period with a junior infant class. Ennis reports how this approach successfully highlighted learning in subjects such as language, mathematics, science, visual arts and music, and was associated with a high level of parental participation and engagement. The time required to implement the approach is noted as a significant challenge.
In this paper, Little considers the learner-centred approach and the integration of self-assessment with other forms of assessment in the context of second language learning. He reports on a project that has drawn on the Common European Framework of Reference for Languages (CEFR) and the European Language Portfolio (ELP) to define an English as a Second Language (ESL) curriculum for newcomer pupils in Irish primary schools, and outlines plans to develop assessment and reporting procedures for the ESL curriculum, in which learner self-assessment plays a central role.
Murphy, R. (2000). The validity of a portfolio approach to instruction and assessment in writing in the primary school. Master of Arts thesis, Dublin City University.
In light of the paradigm shift from summative to formative assessment, and the associated need to develop a range of alternative modes of assessment, this research explores the portfolio approach to assessing pupils’ writing at primary level. The Educational Research Centre’s Drumcondra Writing Project, which saw the collation of portfolios of children’s written work in natural settings over the course of the 1995-1996 school year, provided a springboard for this research. A selection of these portfolios was identified as being of especially high quality, and the teachers and pupils in question were invited to participate in this further study. Over the course of two years, portfolios of these pupils’ work were examined, and both pupils’ and teachers’ experiences of the processes were sought through written reflections and semi-structured interviews respectively, with a view to thoroughly exploring the use of portfolios as assessment tools in these exemplar cases. Murphy concludes that portfolios allow for and generate the use of formative assessment techniques, and the incorporation of feedback that is relevant for individual children. She invites further study on the use of portfolios for assessment purposes in other subject areas of the primary school curriculum, especially those for which assessment procedures are not currently evident, such as music, physical education, and visual arts.
Following on from Hall’s (2000) criticisms of assessment policy under the Revised Primary School Curriculum of 1999, Hall & Kavanagh report the findings from a series of interviews conducted with various interest groups in Irish primary education. They contend that, due to the lack of clarity in many aspects of the policy, the manner in which these groups understand the purposes and forms of assessment will determine how it will eventually be implemented. Stakeholders interviewed included teachers, parents, the then ‘shadow’ Minister of Education, the Chief Executive of the National Parents Council, a senior inspector at the DES and a senior official at the NCCA. Hall & Kavanagh conclude that these groups hold markedly different views about assessment, with each group tending to interpret the purposes of assessment primarily in relation to its own needs, rather than the needs of the learner. They also note a somewhat unanimous confidence in formal, standardized tests. On the basis of these findings, they emphasize the need for greater discussion and informed debate amongst these interest groups, and draw attention once more to the wealth of literature supporting the use of assessment to inform and guide pupils’ learning (as opposed to solely for purposes such as accountability).
In this paper, Hall provides a comprehensive evaluation of the Irish policy position on primary school assessment. She first chronicles key events that influenced the development of the policy, including the 1990 Review on the Primary Curriculum, the 1992 Green Paper on Education, the NCCA’s Programme for Reform (1993), the National Education Convention (1994), and the 1995 White Paper on Education, culminating in the recommendations for assessment set out in the draft document of the Revised Primary School Curriculum (1997). She documents a move towards and subsequently away from the marketization of education throughout this process, and associated discontinuities in the discourse surrounding assessment, most notably summative vs. formative understandings of assessment, and whether or not assessment should be used as a mechanism for teacher and school accountability. She then describes the policy in detail, outlining what she sees as its major strengths and weaknesses. Recognition of the importance of formative assessment practices is lauded as a key strength of the policy, due to the alignment of this stance with contemporary research literature. Hall argues, however, that this is not accompanied by concrete guidelines on how to implement these practices. Furthermore, she believes that the policy’s assertion that all forms of assessment should have parity of esteem is heavily contradicted by the disproportionate attention afforded to standardised tests and their functions. She considers the factors that may have led to these undesirable features, and concludes with a number of recommendations. These include, among others, the redeployment of funds set aside for the development of standardised tests towards the provision of teacher training in formative assessment, and the need to ensure that all those involved in policy development are suitably informed by contemporary research.
Covid-19 Contingency Plans for Assessments
As third-level institutions worldwide decided to cancel end-of-term exams for the majority of students as a result of the Covid-19 pandemic, many of us were required to design new assignments and tasks to assess our students. The resources here were made available to help plan for these changes.
Advice for Choosing Alternative Assessments
Video explaining Pass/Fail Assignments
Summary Infographic on Pass/Fail Assignments
Exemplar illustrating Original v Pass/Fail Assignments
A Primer on Differences between Norm-Referenced and Criterion-Referenced Assessments
A Primer on Criterion-Referenced Assessments and Rubrics
A Primer on Norm-Referenced Assessment and Grading on the Curve
A Primer on Performance Standards, Cut Scores and Weights
Weighted Rubric with Scoring Guide (Example)
Guidance on Moving from Exams to Pass/Fail Assignments
Lecture Resources
The 2019 CARPE Lecture: Dr. Mathias von Davier, 'What you always wanted to know about Process Data but were too afraid to ask'.
Blog Posts
In the Spring of 2022, CARPE Prometric Post-Doctoral Researcher Dr. Gemma Cherry and CARPE PhD candidate Conor Scully attended the Association of Test Publishers (ATP) Innovations in Testing conference, Orlando Florida, and the American Educational Research Association (AERA) conference, San Diego California. This blog post highlights some of their key takeaways in relation to the latest trends and innovations in the field of educational assessment.
ATP
The theme of the ATP 2022 conference was ‘Bridging Opportunities for Better Assessment’, and many of the presentations focused on how the assessment industry can learn, adapt, and grow in light of lessons learned from the Covid-19 pandemic. Of key interest to conference attendees was how issues of diversity and inclusion can be moved forward. The conference brought together industry leaders, and various discussions were held on the rapidly changing nature of assessment. Of particular interest to Gemma and Conor were presentations focusing on post-Covid assessment environments. At the conference, Gemma, along with Prometric’s Dr. Li Ann Kuan, gave a presentation on the psychometric comparability of remote proctoring and in-centre modes of assessment administration. The presentation was very well received by delegates and generated a significant amount of discussion regarding its promising results.
Remote proctoring was frequently mentioned at ATP, and while this is not a new topic, the context in which this mode of administration operates has changed significantly. It is clear that remote proctoring is here to stay, and it is likely that, going forward, a hybrid option will be available whereby test takers can choose to sit their examinations either at a traditional testing centre or remotely. A question of particular relevance asked at the conference was: what role will paper-based testing play in the future? It was highlighted that, in some instances, paper-based assessments may still be the optimal way to make assessment fully accessible and inclusive. A number of speakers argued that the testing industry should be proactive and put plans in place to ensure accessibility, as opposed to reacting to challenges as they arise (as happened during the pandemic). While remote proctoring has become a standard offering, it remains to be seen how it can be adapted to ensure accessibility and appropriateness for all test takers. Strong arguments were made at various sessions that we should not become complacent about modal comparability, despite positive research outcomes indicating that remote proctoring results are similar to those observed in test centres.
AERA
The theme for AERA 2022 was ‘Cultivating Equitable Education Systems for the 21st Century’. A significant number of the talks centred on diversity, equality, and inclusion (DEI), which was also a major theme at ATP. Issues of social justice within education are more relevant than ever, and numerous discussions took place on the impacts of race, gender, and sexuality on educational outcomes. In addition to scheduling panels and papers on this issue, the conference organisers also made sure to elevate the voices of people belonging to minority groups; it was welcome to see, for example, panels discussing the experiences of black Americans within the education system that were made up entirely of black men and women.
Gemma was lucky enough to have two presentations over the course of the AERA conference, one relating to her work with CARPE (on remote proctoring) and the other focusing on her doctoral work in which she discussed educational inequalities across urban and rural locations from an intersectional perspective.
Despite reduced in-person attendance due to the impact of Covid-19 (participants had the option to participate virtually), the conference schedule was packed with on-site papers, panels, and roundtables. At any point there were approximately sixty sessions taking place! As a result, Conor and Gemma had to be strategic and organised about what to attend. Conor focused his time on discussions relating to mixed methods research (which is relevant to his doctoral work) and LGBTQ+ issues, while Gemma attended multiple sessions on remote proctoring and rural education (her PhD topic).
In addition to attending sessions, the conference also provided an opportunity to meet and socialise with other young researchers, and to see the sights of San Diego.
A particular highlight was a dinner organised by Dr. Larry Ludlow of Boston College and attended by numerous current and former students of BC (including CARPE Director Michael O’Leary, who completed his PhD at BC in 1999).
Attending these conferences gave Gemma and Conor the opportunity to network with experts in the assessment and wider educational research fields. Their experiences have proved to be very beneficial to their work at CARPE and they can now bring new insights, understandings and perspectives to current trends and innovations in assessment. Both are very grateful for the funding provided by Prometric which made all of this possible.
In school, children are required to listen to the teacher, follow instructions and complete independent work. These requirements rely on children’s ability to pay attention over time. Researchers call this type of attention ‘sustained attention’; in everyday life it is commonly referred to as ‘concentration’ or ‘focus’. The importance of sustained attention for learning has been demonstrated in studies showing an association between students’ sustained attention and both their academic achievement and their classroom behaviour. Despite its importance, however, sustained attention difficulties are common, with as many as 24% of children exhibiting poor concentration. Because attentional problems compromise academic achievement, there is a strong need to develop evidence-based interventions to enhance students’ sustained attention.
In recent years, cognitive attention training has been identified as a potential intervention to enhance sustained attention. This type of training is sometimes called brain training. Training involves the repetitive practice of a cognitive task designed to exercise parts of the brain related to attention. The current study sought to evaluate the efficacy of a school-based attention training programme, Keeping Score!, in improving students’ sustained attention. Training was based on silently keeping score during a fast-paced game of table tennis, which required children to exercise their sustained attention. To test the impact of training, we conducted a small-scale randomised controlled trial; randomised controlled trials are regarded as the gold standard for evaluating efficacy. In the study, we assigned children to either a training group or a control group. Both groups received three training sessions per week for six weeks. The control group completed the same activity as the training group except that they were not required to mentally keep score; the score was called out by the researcher as each point was won. We measured sustained attention before training, immediately following training and at a six-week follow-up. Contrary to our expectations, we found no improvements in sustained attention following training.
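The logic of the trial can be sketched in miniature. The code below is a hedged illustration only: the group sizes, score scale and variances are invented, not taken from the study, and the real analysis used proper attention measures and inferential statistics. It shows how randomised allocation and the pre-to-post comparison between groups fit together.

```python
import random
import statistics

random.seed(42)

# Hypothetical pupils, identified by index; all values below are synthetic.
pupils = list(range(40))
random.shuffle(pupils)
training, control = pupils[:20], pupils[20:]  # randomised allocation

def simulate_change(group, gain):
    """Return the mean pre-to-post change in attention score for a group."""
    pre = [random.gauss(100, 10) for _ in group]          # baseline scores
    post = [score + gain + random.gauss(0, 5) for score in pre]
    return statistics.mean(p2 - p1 for p1, p2 in zip(pre, post))

# Under the null result reported above, both groups change by about the same
# amount, so the between-group effect estimate hovers near zero.
training_change = simulate_change(training, gain=0.0)
control_change = simulate_change(control, gain=0.0)
effect = training_change - control_change
print(f"training: {training_change:+.2f}, control: {control_change:+.2f}, "
      f"effect: {effect:+.2f}")
```

The quantity of interest is the difference in mean change between the two randomised groups, which is what the study found to be indistinguishable from zero.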
The obvious question is: why were no improvements found in sustained attention? Various factors could potentially explain the null findings, such as the short training duration and small sample size. Another potential reason, which we argue in the paper, is that cognitive attention training alone may not be sufficient to enhance sustained attention. Our capacity to sustain attention at any moment is determined by an interplay of cognitive, emotional, arousal and motivational factors, and cognitive attention training primarily targets the cognitive ones. Interventions that address the multiple factors underlying sustained attention, such as mindfulness training, may have more success in improving students’ ability to pay attention.
So, can concentration be improved using cognitive training? The answer is not a simple yes or no. This study suggests that it is very difficult to enhance students’ ability to pay attention using cognitive training methods.
The paper from this study is currently under review. Please email Eadaoin (eadaoin.slattery@dcu.ie) for a study preprint.
References
Döpfner, M., Breuer, D., Wille, N., Erhart, M., & Ravens-Sieberer, U. (2008). How often do children meet ICD-10/DSM-IV criteria of attention deficit-/hyperactivity disorder and hyperkinetic disorder? Parent-based prevalence rates in a national sample–results of the BELLA study. European Child & Adolescent Psychiatry, 17(1), 59-70.
Rabiner, D. L., Murray, D. W., Skinner, A. T., & Malone, P. S. (2010). A randomized trial of two promising computer-based interventions for students with attention difficulties. Journal of Abnormal Child Psychology, 38(1), 131-142.
Slattery, E. J., Ryan, P., Fortune, D. G., & McAvinue, L. P. (2022). Unique and overlapping contributions of sustained attention and working memory to parent and teacher ratings of inattentive behavior. Child Neuropsychology, 1-23.
Steinmayr, R., Ziegler, M., & Träuble, B. (2010). Do intelligence and sustained attention interact in predicting academic achievement? Learning and Individual Differences, 20(1), 14-18.
While paper-based assessments are largely restricted to traditional multiple-choice or short-answer questions, the range of items possible for digital assessments is more extensive and continues to expand as technology develops. In particular, the use of static (e.g. high-definition images), dynamic (e.g. videos, animations) and interactive (e.g. simulations) multimedia stimuli has allowed test developers to reimagine the knowledge, skills and abilities that can be assessed (Bryant, 2017). It is therefore hardly surprising that education systems around the world are now attempting to devise their own digital assessments for post-primary students (e.g. Ireland, New Zealand). Unfortunately, while there is broad faith that these digital assessments can improve the quality and scope of the testing process, the exact nature of this added value, if it exists, has yet to be properly described or verified (Russell & Moncaleano, 2019). To begin to address this research gap, two related studies were undertaken to investigate the extent to which the use of different multimedia stimuli can affect test-taker performance and behaviour.
For Study 1, an experiment was conducted with 251 Irish post-primary students using an animated and a text-image version of the same digital assessment of scientific literacy. Eye movement and interview data were also collected from subsets of these students (n=32 and n=12 respectively) to determine how differing multimedia stimuli can affect test-taker attentional behaviour. The results indicated that, overall, there was no significant difference in test-taker performance when identical items used animated or text-image stimuli. In contrast, the eye movement data revealed practical differences in attentional patterns between conditions. This finding indicates that the type of multimedia stimulus used in an item can affect test-taker attentional behaviour without necessarily impacting overall performance. This matters because understanding how stimulus modality affects test-taker behaviour will ultimately support the quality of inferences that can be drawn from digital assessments.
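To make the "no significant difference" comparison concrete, the sketch below applies Welch's t statistic to two synthetic score samples. This is an illustration under stated assumptions, not the study's actual analysis: the sample sizes mirror a roughly even split of the 251 students, but the scores are invented and the thesis may have used different methods.

```python
import math
import random
import statistics

random.seed(1)

# Synthetic test scores (percent correct) for the two stimulus conditions.
animated = [random.gauss(62, 12) for _ in range(125)]
text_image = [random.gauss(61, 12) for _ in range(126)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(animated, text_image)
print(f"Welch t = {t:.2f}")  # values near 0 are consistent with "no difference"
```

A t statistic close to zero relative to its reference distribution is the statistical shape of the Study 1 performance result: the two stimulus versions produced comparable scores.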
Study 2 involved 24 test-takers completing a series of simulation-type items in which they generated their own data to answer the test questions. Eye movement, interview and test-score data were gathered to provide insight into test-taker engagement with these items. The data suggest that increasing test-taker familiarity with simulation-type items can change attentional behaviour, leading to more effective test-taking strategies. Furthermore, successful test-takers directed significantly more of their attention to the relevant areas of the simulation output, although generating large volumes of data can disrupt the predictive properties of these behaviours. Examination of other process data variables (e.g. time-on-task, number of simulations run) showed that some of the most common interpretations ascribed to frequencies of these behaviours (e.g. Greiff et al., 2015) are context and subject specific.
Taking into consideration the recent initiatives involving digital assessments for the Leaving Certificate Examination (State Examination Commission, 2021) as well as the ‘Digital Strategy for Schools’ (Department of Education and Skills, 2021), the findings of this research will be particularly pertinent to Irish educational policy makers. However, they also have relevance well beyond the Irish context. In particular, this research provides test-developers worldwide with insights as to how item features and test-taker attentional behaviours influence the psychometric properties of assessments and the inferences drawn from the data they provide.
Reference:
Lehane, P. (2021). The Impact of Test Items Incorporating Multimedia Stimuli on the Performance and Attentional Behaviour of Test-Takers (PhD Thesis). Institute of Education: Dublin City University, Ireland.
Other references:
Bryant, W. (2017). Developing a Strategy for Using Technology-Enhanced Items in Large Scale Standardized Tests. Practical Assessment, Research & Evaluation, 22(1), 1–5. https://scholarworks.umass.edu/pare/vol22/iss1/1/
Department of Education and Skills (DES). (2021). Digital Strategy for School Consultation Framework [Website]. https://www.education.ie/en/Schools-Colleges/Information/Information-Communications-Technology-ICT-in-Schools/digital-strategy-for-schools-consultation-framework.html
Greiff, S., Wüstenberg, S., & Avvisati, F. (2015). Computer-generated log-file analyses as a window into students' minds? A showcase study based on the PISA 2012 assessment of problem solving. Computers & Education, 91, 92– 105. https://doi.org/10.1016/j.compedu.2015.10.018
Russell, M. & Moncaleano, S. (2019). Examining the Use and Construct Fidelity of Technology-Enhanced Items Employed by K-12 Testing Programs. Educational Assessment, 24(4), 286-304. https://doi.org/10.1080/10627197.2019.1670055
State Examination Commission. (2021). Leaving Certificate Computer Science. https://www.examinations.ie/?l=en&mc=ex&sc=cs
The Objective Structured Clinical Examination (OSCE) is an assessment format common in the health sciences, and nursing in particular. In an OSCE, a student moves through an exam hall, completing a series of “stations” at which they undertake a specific task or series of tasks, such as measuring a patient’s vital signs. Students are judged at each station by an expert examiner, who awards them marks on the basis of a marking guide that has been designed for that station. A key advantage of the OSCE is that all students complete the same stations, and are judged according to the same set of criteria.
Because of the standardisation inherent in the OSCE, it is generally thought to produce consistent scores that can be used to make accurate decisions about students’ relative performance levels; as such, it is frequently used as a summative assessment in undergraduate nursing programs, to determine whether a student has demonstrated sufficient mastery of the curriculum to progress to the next year of study. However, as with all forms of assessment, it is important to document evidence that decisions made on the basis of OSCE data are not only reliable (consistent), but most importantly, valid (accurate).
Assessor cognition is a relatively recent field of inquiry that seeks to understand the processes by which examiners of performance assessments come to make judgements about students. In theory, it should be possible to ensure that all examiners interpret the marking guide in exactly the same way, such that the only factor affecting the scores awarded to students is their different ability levels. However, research has consistently shown that assessors bring with them a range of individual factors that influence how they interpret student performances, and that even rigorous professional development and training is not enough to bring about complete agreement regarding a student’s performance.
As it stands, the cognitive processes assessors employ when coming to make judgements about nursing students are unknown, and something of a “black box”. Research on assessors in medicine has indicated that they lack a fixed sense of what constitutes “good” performance, and are therefore likely to judge students against their own subjective ideas about competence; tend to focus on different aspects of performance when determining whether a student is “good”; and have difficulty translating the verbal descriptions about performance into a numerical score. However, it is unclear whether nursing assessors judge students in the same way as their counterparts in other medical fields.
This lack of clarity concerning nursing assessors’ judgement processes represents a possible threat to the reliability of nursing OSCE scores, as individual students’ scores may fluctuate on the basis of who happens to be assessing them, rather than on their ability. Indeed, research on the reliability of nursing OSCE scores has suggested that it is often at a less than desirable level. As such, opening the “black box” of nursing assessor judgements should lead to a better understanding of how judgements are made, and, ultimately, a greater level of defensibility regarding decisions made on the basis of OSCE scores.
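One common way to quantify the assessor-agreement problem described above is Cohen's kappa, a chance-corrected index of agreement between two raters. The sketch below is purely illustrative: the pass/fail judgements are invented, not OSCE data, and real reliability studies typically use more elaborate designs (multiple assessors, generalisability theory).

```python
from collections import Counter

# Hypothetical pass/fail judgements from two assessors scoring the same
# 20 OSCE performances; these data are invented for illustration.
assessor_a = ["pass","pass","fail","pass","fail","pass","pass","fail","pass","pass",
              "fail","pass","pass","pass","fail","pass","fail","pass","pass","fail"]
assessor_b = ["pass","fail","fail","pass","fail","pass","pass","fail","pass","fail",
              "fail","pass","pass","fail","fail","pass","pass","pass","pass","fail"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(assessor_a, assessor_b)
print(f"kappa = {kappa:.2f}")  # -> kappa = 0.59
```

Raw agreement here is 80%, yet kappa is only about 0.59 once chance agreement is removed; the gap between the two figures is exactly the kind of hidden unreliability that research on assessor cognition aims to explain.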
In educational research literature across the world, the factors influencing pupils’ educational attainment outcomes are well documented (e.g. gender, socioeconomic status (SES) and ethnicity). However, the issue of whether educational outcomes are influenced by the geographical location (urban or rural) in which students live and education takes place has received much less attention. Internationally, urban locations are taken for granted as the norm in research and are presupposed when nothing else is stated (Bæck, 2016). This presumption overlooks the fact that many children and young people live their lives in rural locations, and the oversight has led to a lack of empirical evidence and understanding about how rural pupils may become educationally (dis)advantaged. This lacuna was the reason the research described here was undertaken. The study presents the first high-quality analysis of the relationship between rurality and educational attainment outcomes in the Northern Ireland (NI) context.
Prior to this study, the only information available in NI on the relationship between location and education came from annual statistical publications. These publications, produced by the Department for Agriculture, Environment and Rural Affairs (DAERA), contained descriptive statistics on rural educational advantage and focused on the post-primary sector only. It was clear that more detailed and high-quality analysis was required to gain a better understanding of what lay beneath these statistics.
The study utilised a comprehensive data set covering both primary and post-primary educational stages that had not previously been available for research purposes. The primary data were provided by Granada Learning (GL) Assessment and were the first data made available in the NI context relating to primary level pupils. The post-primary data set contained complete cohort information and, for the first time, matched the NI Census to the School Leavers Survey and the School Census.
The statistical method employed for the research was multilevel modelling, an approach that takes account of the fact that pupils are clustered within schools (Hox, 2010). The attainment outcomes used were English and maths at primary level and GCSE English, GCSE maths and the overall number of GCSEs achieved by pupils at post-primary level.
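The rationale for multilevel modelling can be illustrated with a short sketch. The snippet below is not the study's actual analysis; it simulates hypothetical pupil scores nested in schools and estimates the intraclass correlation (ICC), the share of score variance attributable to schools. A non-trivial ICC is the standard motivation for fitting a multilevel model rather than an ordinary single-level regression (Hox, 2010). All figures used are invented for illustration.

```python
# Illustrative sketch only: why pupil clustering within schools matters.
# Simulated data: 50 schools, 30 pupils each, with a school-level effect
# (SD 5) on top of pupil-level noise (SD 10) around a mean score of 60.
import random

random.seed(42)

N_SCHOOLS, PUPILS_PER_SCHOOL = 50, 30
scores_by_school = []
for _ in range(N_SCHOOLS):
    school_effect = random.gauss(0, 5)            # between-school variation
    scores_by_school.append(
        [60 + school_effect + random.gauss(0, 10)  # pupil-level variation
         for _ in range(PUPILS_PER_SCHOOL)]
    )

# One-way ANOVA decomposition of variance.
n_total = N_SCHOOLS * PUPILS_PER_SCHOOL
grand_mean = sum(s for sch in scores_by_school for s in sch) / n_total
school_means = [sum(sch) / len(sch) for sch in scores_by_school]

# Variance of school means (includes a share of pupil-level noise).
between = sum((m - grand_mean) ** 2 for m in school_means) / (N_SCHOOLS - 1)
# Pooled within-school variance.
within = sum(
    (s - m) ** 2 for sch, m in zip(scores_by_school, school_means) for s in sch
) / (N_SCHOOLS * (PUPILS_PER_SCHOOL - 1))

# Method-of-moments estimate of the between-school variance component.
var_between = max(0.0, between - within / PUPILS_PER_SCHOOL)
icc = var_between / (var_between + within)
print(f"ICC = {icc:.2f}")  # a substantial ICC argues for a multilevel model
```

With the simulation parameters above, the true ICC is 25 / (25 + 100) = 0.2: about a fifth of the variance sits between schools, so treating pupils as independent observations would understate standard errors, which is exactly the problem multilevel modelling addresses.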
The results obtained from this research provide evidence that rurality has a statistically significant influence on primary pupils’ English attainment outcomes but not on their maths attainment. An overall rural advantage was found; however, interaction results revealed that this advantage was not experienced by every rural pupil. Rural boys from lower SES backgrounds were identified as a group of pupils at risk of lower English attainment at this level of education. Rurality was also found to have a statistically significant influence on all three attainment outcomes at post-primary level. Furthermore, the results show that pupils who attend non-grammar schools and pupils from lower SES positions are more strongly influenced by their location than pupils attending grammar schools and those from higher SES backgrounds. For example, the urban-rural achievement gap (to the advantage of rural pupils) was not as apparent in grammar schools as it was in non-grammar schools, where it was much wider.
This study is significant in being the first to identify previously overlooked groups of pupils in need of additional educational support in NI. The evidence presented will be directly relevant to policy initiatives and programmes aimed at raising educational outcomes for disadvantaged students in NI.
Reference:
Cherry, G. (2021). Inequalities in Educational Attainment across Rural and Urban Locations: Secondary Data Analysis of Pupil Outcomes in Northern Ireland. PhD Thesis. School of Social Sciences, Education and Social Work: Queen’s University Belfast.
Other references:
Bæck, U. K. (2016). Rural Location and Academic Success—Remarks on Research, Contextualisation and Methodology. Scandinavian Journal of Educational Research, 60(4), 435–448.
Hox, J. (2010). Multilevel Analysis: Techniques and Applications (2nd ed.). New York: Routledge.
Cancellation of the Leaving Certificate (LC) examinations in 2020 as a result of COVID-19, and the subsequent involvement of post-primary teachers in estimating marks and ranks for their own students as part of the Calculated Grades (CG) process, were unique events in the history of Irish education. Following the publication of LC results and completion of the appeals process, an online questionnaire survey of post-primary teachers was conducted in the closing months of 2020. It investigated how this cohort of teachers engaged with the CG process in their schools and whether the experience had affected how they perceive their role as assessors. Preliminary findings based on the responses of 713 teachers are contained in a report available to download here. A complementary paper published in Irish Educational Studies can be accessed here.
As reported, the data revealed that the teachers surveyed used a wide range of assessment information when estimating marks and ranks for their students. Not surprisingly, the outcomes from 5th and 6th year exams, as well as mock exams, were particularly important in informing teachers' judgements. In their comments, respondents also highlighted the importance of other sources of information, including their professional knowledge and expertise in State Examinations, in-school tracking and assessment records, historical State Examinations performance data, and student characteristics such as application to their work. Challenges identified by many respondents when estimating marks and ranks for their students included decision-making around grade boundaries, combining qualitative and quantitative assessment data, reconciling inconsistencies in student performance, maintaining an unbiased position with respect to individual students, and voicing concerns about how school colleagues arrived at their decisions. Overall, however, the majority of teachers indicated that they felt the alignment meetings worked well and expressed confidence in their professional judgements. Almost all felt that they were fair to their students.
The pressure felt from members of the school community, as well as the stress caused by having to engage in the process, were clearly articulated by many. Decisions around the release of rank order data to students prompted many to express very strong feelings of annoyance and disappointment. A large number of comments focused on issues of fairness related to conscious and unconscious bias, the approaches adopted by colleagues, the application of the DES guidelines, the use of school historical data, and the impact of the national standardisation process on the grades awarded to students. While many were adamant that they would not engage in a calculated grades process in the future, some took a more nuanced view, indicating overall satisfaction with the process in the context of exceptional circumstances and highlighting the potential benefits it offered some students. Opinion was divided on the extent to which the CG experience would inform efforts to reform Senior Cycle.