In conversation with Valentina Grion
Introduction
We started a conversation with Valentina Grion, an expert in assessment and evaluation, through the Webinar Series. For several years she has been working on perspectives that support students in taking a more active role, starting from her work on Students’ Voice and continuing with her subsequent studies on peer assessment (with Anna Serbati) and self-assessment (with Anna Serbati and Beatrice Doria). Overall, she has been leading the debate on assessment and evaluation in the Italian context, in connection with the international panorama.
Earlier this spring Valentina shared with me this inspirational quotation:
“…Gross National Product counts air pollution and cigarette advertising, and ambulances to clear our highways of carnage. It counts special locks for our doors and the jails for the people who break them (…) Yet the gross national product does not allow for the health of our children, the quality of their education or the joy of their play. It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials. It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country, it measures everything in short, except that which makes life worthwhile.”
Robert F. Kennedy, Remarks at the University of Kansas (1968)
We found out there was so much to explore together, for there were areas where our knowledge converged and others where it was complementary. Here are some initial notes on our ideas.
Assessment and evaluation: how metrics came into play
Assessment and evaluation in education became a subject of quantification early in the history of educational systems, given their allegedly crucial role in supporting transparency, accountability and effectiveness.
With the rising interest in the use of quantitative data and metrics driven by positivism in science, no scientific activity could escape the need for measurement and hypothesis testing through statistical methods. Pedagogy, like any other discipline, fell under this influence, and early in the twentieth century measurement entered the education system to analyse teachers’ and students’ behaviour.
During the ‘50s and ‘60s of the last century, the international community became interested in the contribution of education to national economies. Indeed, the relationship between education and economic growth had already been theorised by Adam Smith in “The Wealth of Nations” (1776, p. 137). In the aftermath of WWII, attention was placed on the economic development of Western countries, and investment in any factor contributing to the economy came under the spotlight. The focus of policy makers and governments quickly moved to literacy as a relevant factor contributing to the quality of the labour force and as the basis for living in democratic societies. The debate evolved over the following decades, covering the need for more and better skills for capitalist Western economies facing cyclical crises after the ‘80s.
By the end of that decade, though, criticism of education systems grew, particularly around their inability to respond to industry’s demand for skills (Mitch, 2005). In such a context, the discourse of the report coordinated by Jacques Delors for UNESCO in 1996 very much emphasised the compelling need to modernise education, pointing to the metaphor of learning as “the treasure within”. Though the reader might see in such a metaphor an emphasis on more accountable systems in relation to economic growth, the group coordinated by Delors attempted to go beyond that idea, in search of a more complex vision of education. Nonetheless, the report was used to lay the basis for European educational and developmental policies in which clear “benchmarks” of development were set, together with a number of indicators to measure the progress of European education systems, updated through two programmes establishing actions and evaluation (European Commission, 2011). This climate of enthusiasm for the measurement of progress was also associated with the US policies related to “No Child Left Behind” from the early 2000s. Supported by solid scholarship and an elaborate approach to the selection of evidence (mainly quantitative), the efforts were all directed at determining which teaching practices “work, under which circumstances” (Slavin, 2002).
In formal education, but also in all forms of recognition of informal and non-formal learning, assessment and evaluation became the starting point of a crucial pathway connecting the performance of the individual with the performance of the system.
As a matter of fact, a recent effort to analyse education systems based on the assessment of a specific set of skills and knowledge has been the OECD PISA programme (Programme for International Student Assessment, https://www.oecd.org/pisa/). It “measures 15-year-olds’ ability to use their reading, mathematics and science knowledge and skills to meet real-life challenges” (PISA’s presentation on the webpage). The effort to measure and compare the performance of national systems reached 79 countries in 2018, involving some 600,000 students representing about 32 million 15-year-olds (OECD, 2018). The PISA measurement has become so relevant that there are national efforts aimed at preparing students to take the tests, and the results are widely cited and used in policy making (Biesta, 2015).
The critique of evidence-based education and of PISA has shown the shortcomings of quantification.
In the first case, although the approach encompassed rigorous research work and the generation of evidence on specific programmes on basic literacies (maths and reading), scholars were critical of the lack of consideration of the social and cultural factors that contribute to a broader picture of educational outcomes (Biesta, 2007).
As for the PISA tests, despite their careful design, many concerns have been raised about cultural differences, not only between “nations” (a concept loaded with ideology) but also within the territories being compared. A review of two decades of literature on this international testing highlighted three fundamental deficiencies: its underlying view of education, its implementation, and its interpretation and impact on education globally (Zhao, 2020).
Indeed, PISA’s refined tests, and any other national testing system whose outcome is a metric of some sort, have had implications for the whole educational process. As Gert Biesta pointed out (Biesta, 2015):
“Quantitative measures that can easily be transformed into league tables and into clear statements about gain and loss between different data-trawls which, in turn, provide a clear basis for policy-makers to set targets for ‘improvement’ (see below) – such as gaining a higher place in the league table than apparent competitors, increasing national performance by at least a certain number of points or articulating the ambition to score ‘at least above average’ – give PISA a simplicity that is absent in complicated discussions about what counts as good education” (Biesta, 2015, p.350)
Assessment: beyond grading and the industry of metrics
Policy makers’ reliance on such international efforts to measure educational outcomes could be seen, in any case, as the result of decades of assessment practice oriented to measurement, in which concern for the system’s performance has overridden the sense and direction of pedagogical practice.
This point was sharply captured in Brown’s words: “We may not like it, but students can and do ignore our teaching; however, if they want to get a qualification, they have to participate in the assessment processes we design and implement” (Brown, 2005, p. 81).
These words may seem a simple expression of a desire to focus on the pedagogical design of assessment. But they make visible the link between the metrics used in assessment as part of teaching and learning and the metrics used to engineer the educational system’s accountability. Producing grades is just the basic operation underlying the later steps of aggregating, summarising and comparing grades, which set the basis for displaying and discussing educational quality at the institutional or country level. The ultimate consequence, as we pointed out before through Biesta’s ideas, is the way such metrics become actionable instruments of policy making.
Therefore, educators’ careful focus on designing assessment as a meaningful activity for students may have plenty of implications for educational institutions and the system. One particular effect is the deconstruction of anxiety about the grade as the single relevant element demonstrating the student’s skills and knowledge to take part in society. In fact, the perverse effect of using grading to support the system’s analysis has been its impact on students’ and teachers’ perception of assessment practice as a bureaucratic operation, not part of the learning process.
As Donald Campbell theorised back in the 1970s, quantitative representations of performance undergo a distortion: a metric introduced to analyse a phenomenon ends up becoming a driver of the actors’ behaviour (Vasquez Heilig & Nichols, 2013). The phenomenon is so frequent that a colloquial expression has been coined for it: ‘teaching to the test’. This stands for any pedagogical method heavily focused on preparing students for standardised tests (Styron & Styron, 2011). It pinpoints the depletion of meaning, the de-contextualisation and the lack of attention to students’ diversity that result from educators’ attempts to demonstrate the “quality of the system”. Moreover, those educators who defend their space of pedagogical practice and do the opposite in order to support diversity and equity often fail to reach the expected benchmarks of quality.
In his book “The Tyranny of Metrics”, Jerry Muller (2018) clearly depicts the problem of measurement doing more harm than good in the context of the US system and its anxiety to measure “to see” the return on investments made in “closing the gap” in basic literacies. In his analysis, the author arrives at a disarming conclusion:
“…the self-congratulations of those who insist upon rewarding measured educational performance in order to close achievement gaps come at the expense of those actually engaged in trying to educate children. Not everything that can be measured can be improved – at least, not by measurement.” (Muller, 2018, p. 114).
To go beyond measurement, experts have already pointed out the need to embrace assessment as a complex and participatory process in which students play a crucial role in designing and applying assessment activities; such a process might lead to assessment literacy as a final outcome beyond the grade (Boud, 1988; Grion & Serbati, 2018). This last concept stands for a set of abilities to assess one’s own self-determined learning and others’ learning in contexts other than the classroom, which is extremely relevant for our democratic societies. Needless to say, there is a long way to go in this sense (Medland, 2019).
If the question is how we can move from assessment for grading to assessment for learning, then technology is not the answer.
The beginning of the digital era only expanded the problem. Data became easy to generate and collect, leading to enthusiastic claims about educational transparency and accountability, hand in hand with discourses about new requirements for educators’ professional development around data practices (Mandinach, 2012; Vanhoof & Schildkamp, 2014). Educators’ data literacy entered the equation. In the terms of Mandinach & Gummer (2016), in a context of evidence-based education, not only educational researchers but also teachers might produce relevant data. As a result, data use in education would become an imperative for teacher preparation. For a teacher, being ‘data literate’ means having “the ability to transform information into actionable instructional knowledge and practices by collecting, analysing, and interpreting all types of data (assessment, school climate, behavioural, snapshot, longitudinal, moment-to-moment, etc.)” (Mandinach & Gummer, 2016, p. 367).
In the case of higher education, the increasing development of data-extraction techniques and of digital, quantified representations of activity in virtual classrooms has led to a new field of research and practice, namely learning analytics (Ferguson, 2012), which has had plenty of implications for the educational community (Raffaghelli, 2018). Indeed, the enthusiastic adoption of learning analytics was connected early on with the assumption that more data-driven practices in teaching and learning would lead to educational quality and productivity (Siemens et al., 2014). Such an enthusiastic vision has driven the development and testing of increasingly complex learning analytics interfaces meant to prevent student drop-out, inform teachers’ decisions on teaching effectiveness and drive course redesign (Viberg et al., 2018). The enthusiasm was also fuelled by the possibility of automating and scaling learning solutions, supporting learners’ independence and self-regulation across digital learning environments (Winne, 2017). Research within the hyperbolic context of MOOC development over the decade 2010-2020 generated effervescence and provided the experimental contexts supporting such developments (Gasevic et al., 2014).
Moreover, such a data-driven approach to analysing and assessing learning processes, including the participant’s very ability to engage, self-test, peer-evaluate and analyse her own learning achievements, has underpinned the discourses on the need for educators’ and students’ data literacy. Some authors connected data literacy with reading learning analytics to support learning design (Persico & Pozzi, 2015), while others called for teachers’ and students’ pedagogical data literacy as a condition for participating in digital, data-intensive and informed learning environments (Wasson et al., 2016). Nonetheless, for other scholars the discourses around data literacy required deeper reflection, covering also the crucial problem of data ethics connected to the use and impact of student data (Willis et al., 2016). This included data monetisation for the development of further digital services and products, bias and harm to vulnerable students (with gender, race and poverty as vectors of bias), lack of or inappropriate informed consent, etc. (Raffaghelli, 2020; Stewart & Lyons, 2021).
The pandemic only intensified the debate, with the exacerbated adoption of private platforms collecting data without clear consent and understanding from end users or institutions (Perrotta et al., 2020; Williamson et al., 2020). In fact, the entrance of platforms generated an entirely new phenomenon: the monetisation of metrics and quantification through the adoption of data as a resource to develop appealing learning analytics. As some argue (Williamson et al., 2020; Williamson & Hogan, 2021), while the metrics adopted by policymakers and governments have been used to “check up” on the attainments of educational actors, institutions and systems against public benchmarks, the sole purpose of private platforms has been profit. A profit based on data extracted from engagement, participation and assessment to generate alluring data visualisations and recommender systems that promise to personalise the pedagogical response: more “teacher presence” for less “teacher workload”. Reading between the lines, there is a trade-off between quantification and automation, on the one hand, and the human pedagogical relationship, on the other, which Muller already considered damaged by the testing excesses of the era of “measured educational performance”.
Up to here, we have brought to the fore the connections between an old “wine” problem (metrics for quality and their liaisons with assessment as pedagogical practice) presented in new “vessels” (digital data production, availability and manipulation to steer the actors’ and the system’s response and direction). We have also pointed to a new direction for the problem, namely cultivating data literacy as part of educators’ professionalism and students’ skills within datafied learning environments.
Can data literacy support assessment for learning practices?
While the landscape of educational datafication seems inexorable for higher education, a compelling question is: can data literacy fix the problem of dealing with increasingly datafied assessment and evaluation in higher education and in education systems overall?
Our conversation left a window open towards possible answer(s) to this question. We posit here that a more complex approach to assessment (for learning), with assessment literacy as its desirable outcome, nowadays requires the integration of data literacy.
Nonetheless, we recall that data literacy is also a multi-perspective and polysemous concept. Therefore, there is a risk of embracing it as the plain acceptance of quantification and metrics aligned with a restricted idea of assessment (for grading). Instead, a holistic, critical approach to data practices and literacies in education (Raffaghelli et al., 2020) could more plausibly support a perspective of assessment for learning.
We believe our journey around these topics should connect assessment for learning and data literacy by understanding the debate around assessment as a central pedagogical practice leading to deep learning and to assessment literacy. If an authentic practice of evaluation and assessment is to drive critical awareness, appropriate and timely judgement, and active participation, our first question here is why there is so much resistance from teachers and students to changing assessment practices.
The search for an answer to this subsidiary question will lead us to explore the so-called crisis in the practice of assessment for grading and for generating educational credentials, within a scenario pervaded by quantification and metrics. Is this situation putting pressure on pedagogical practice? We exchanged initial ideas around the strong connections between educational data collection and rankings as “the quantification of educational quality” in a context of increasing competition amongst universities. Our attempt here will be to show the numerous difficulties connected to building indicators, and the excessive insistence on some indicators at the expense of relevant (and never collected) information, including complex approaches to assessment.
Then we will be prepared to connect the situation of the “test industry” to the effect that digital technologies and data-driven practices have had on data collection. We believe our analysis could be wrapped up by analysing the emerging practices of digitally enhanced assessment in the context of a datafied university. Of course, as has been my tradition in this space, we engage with this analysis not only by exploring the role of data literacy, but mainly by exploring the potential of a “critical” and “pedagogical” data literacy in higher education.
Overall, our effort will go into:
- Strengthening the idea of assessment as a pedagogical practice which may or may not adopt and generate data, but whose main outcome is certainly not to produce quantifiable representations of the educational process;
- Disentangling the narratives of data-driven practices within digitally mediated assessment, where glossy visualisations and dashboards should be considered just another representation, one which may or may not be useful for the development of assessment for learning;
- Supporting critical data literacy, namely the ability to read data as a complex assemblage of practices and instruments, in order to re-imagine the development of assessment literacy.
Let’s see what we come up with…! 😀
References
Biesta, G. (2007). Why ‘what works’ won’t work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1–22. https://doi.org/10.1111/j.1741-5446.2006.00241.x
Biesta, G. (2015). Resisting the seduction of the global education measurement industry: Notes on the social psychology of PISA. Ethics and Education, 10(3), 348–360. https://doi.org/10.1080/17449642.2015.1106030
Boud, D. (1988). Developing Student Autonomy in Learning (1st ed.). Taylor and Francis.
Brown, S. (2005). Assessment for Learning. Learning and Teaching in Higher Education, 1(2004–05), 81–89.
Delors, J., Al Mufti, I., Amagi, I., Carneiro, R., Ching, F., Geremek, B., Gorham, W., Kornhauser, A., Manley, M., Padrón-Quero, M., Savané, M.-A., Singh, K., Stavenhagen, R., Won Suhr, M., & Zhou, N. (1996). Learning: The Treasure Within (pp. 1–46). UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000109590
European Commission. (2011). Europe 2020 flagship initiative Innovation Union. SEC(2010) 1161, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, 1(0), 1–48.
Ferguson, R. (2012). Learning analytics: Drivers, developments and challenges. International Journal of Technology Enhanced Learning, 4(5–6), 304–317. https://doi.org/10.1504/IJTEL.2012.051816
Gasevic, D., Kovanovic, V., Joksimovic, S., & Siemens, G. (2014). Where is research on massive open online courses headed? A data analysis of the MOOC Research Initiative. The International Review of Research in Open and Distance Learning, 15(5).
Grion, V., & Serbati, A. (2018). Assessment of learning or assessment for learning? Towards a culture of sustainable assessment in higher education. Pensa Multimedia.
Mandinach, E. B. (2012). A Perfect Time for Data Use: Using Data-Driven Decision Making to Inform Practice. Educational Psychologist, 47(2), 71–85. https://doi.org/10.1080/00461520.2012.667064
Mandinach, E. B., & Gummer, E. S. (2016). What does it mean for teachers to be data literate: Laying out the skills, knowledge, and dispositions. Teaching and Teacher Education, 60, 366–376. https://doi.org/10.1016/j.tate.2016.07.011
Medland, E. (2019). ‘I’m an assessment illiterate’: Towards a shared discourse of assessment literacy for external examiners. Assessment and Evaluation in Higher Education, 44(4), 565–580. https://doi.org/10.1080/02602938.2018.1523363
Mitch, D. (2005). Education and Economic Growth in Historical Perspective. In EH.Net Encyclopedia. Economic History Association. https://eh.net/encyclopedia/education-and-economic-growth-in-historical-perspective/
Muller, J. (2018). The Tyranny of Metrics. Princeton University Press.
OECD. (2018). What is PISA? In PISA 2018 Results (Volume I): What Students Know and Can Do. OECD iLibrary. https://www.oecd-ilibrary.org/sites/609870a0-en/index.html?itemId=/content/component/609870a0-en
Perrotta, C., Gulson, K. N., Williamson, B., & Witzenberger, K. (2020). Automation, APIs and the distributed labour of platform pedagogies in Google Classroom. Critical Studies in Education, 00, 1–17. https://doi.org/10.1080/17508487.2020.1855597
Persico, D., & Pozzi, F. (2015). Informing learning design with learning analytics to improve teacher inquiry. British Journal of Educational Technology, 46(2), 230–248. https://doi.org/10.1111/bjet.12207
Raffaghelli, J. E. (2018). Educators’ data literacy: Supporting critical perspectives in the context of a “datafied” education. In M. Ranieri, L. Menichetti, & M. Kashny-Borges (Eds.), Teacher education & training on ICT between Europe and Latin America (pp. 91–109). Aracné. https://doi.org/10.4399/97888255210238
Raffaghelli, J. E. (2020). Is Data Literacy a Catalyst of Social Justice? A Response from Nine Data Literacy Initiatives in Higher Education. Education Sciences, 10(9), 233. https://doi.org/10.3390/educsci10090233
Raffaghelli, J. E., Manca, S., Stewart, B., Prinsloo, P., & Sangrà, A. (2020). Supporting the development of critical data literacies in higher education: Building blocks for fair data cultures in society. International Journal of Educational Technology in Higher Education, 17(1), 58. https://doi.org/10.1186/s41239-020-00235-w
Robert F. Kennedy. (1968). Remarks at the University of Kansas, March 18, 1968. John F. Kennedy Presidential Library and Museum; Robert Kennedy Speeches. https://www.jfklibrary.org/learn/about-jfk/the-kennedy-family/robert-f-kennedy/robert-f-kennedy-speeches/remarks-at-the-university-of-kansas-march-18-1968
Siemens, G., Dawson, S., & Lynch, G. (2014). Improving the Quality and Productivity of the Higher Education Sector. White Paper for the Australian Government Office for Learning and Teaching.
Slavin, R. E. (2002). Evidence-Based Education Policies: Transforming Educational Practice and Research. Educational Researcher, 31(7), 15–21. https://doi.org/10.2307/3594400
Smith, A. (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. Creech, Mundell, Doig, Stevenson. https://play.google.com/books/reader?id=xTpFAAAAYAAJ&pg=GBS.PP8
Stewart, B. E., & Lyons, E. (2021). When the classroom becomes datafied: A baseline for building data ethics policy and data literacies across higher education. Italian Journal of Educational Technology. Advance online publication. https://doi.org/10.17471/2499-4324/1203
Styron, J., & Styron, R. A. (2011). Teaching to the test: A controversial issue in measurement. IMSCI 2011 – 5th International Multi-Conference on Society, Cybernetics and Informatics, Proceedings, 2, 161–163.
Vanhoof, J., & Schildkamp, K. (2014). From ‘professional development for data use’ to ‘data use for professional development’. Studies in Educational Evaluation, 42, 1–4. https://doi.org/10.1016/J.STUEDUC.2014.05.001
Vasquez Heilig, J., & Nichols, S. L. (2013). A Quandary for School Leaders: Equity, High-stakes Testing and Accountability. In L. Tillman & J. J. Scheurich (Eds.), Handbook of Research on Educational Leadership for Equity and Diversity. Routledge.
Viberg, O., Hatakka, M., Bälter, O., & Mavroudi, A. (2018). The current landscape of learning analytics in higher education. Computers in Human Behavior, 89, 98–110. https://doi.org/10.1016/j.chb.2018.07.027
Wasson, B., Hansen, C., & Netteland, G. (2016). Data Literacy and Use for Learning when using Learning Analytics for Learners. In S. Bull, B. M. Ginon, J. Kay, M. D. Kickmeier-Rust, & M. D. Johnson (Eds.), Learning Analytics for Learners, 2016 workshops at LAK (pp. 38–41). CEUR.
Williamson, B., Eynon, R., & Potter, J. (2020). Pandemic politics, pedagogies and practices: Digital technologies and distance education during the coronavirus emergency. Learning, Media and Technology, 45(2), 107–114. https://doi.org/10.1080/17439884.2020.1761641
Williamson, B., & Hogan, A. (2021). Pandemic Privatisation in Higher Education: Edtech & University Reform. Education International.
Willis, J. E., Slade, S., & Prinsloo, P. (2016). Ethical oversight of student data in learning analytics: A typology derived from a cross-continental, cross-institutional perspective. Educational Technology Research and Development, 64(5), 881–901. https://doi.org/10.1007/s11423-016-9463-4
Winne, P. H. (2017). Learning Analytics for Self-Regulated Learning. In C. Lang, G. Siemens, A. Wise, & D. Gašević (Eds.), Handbook of Learning Analytics (pp. 241–249). https://doi.org/10.18608/hla17.021
Zhao, Y. (2020). Two decades of havoc: A synthesis of criticism against PISA. Journal of Educational Change, 21(2), 245–266. https://doi.org/10.1007/s10833-019-09367-x