Asking an entire class to read an entire textbook about educational/information technology would be a lot. 1400 pages of a lot. Luckily for our EDCI 570 class, our instructor didn’t do that. Instead, she tasked us with dividing ourselves into groups, choosing a section, and sharing what we learned with the rest of the class via a video and written summary.
Infinitely easier. Phew.
Here are some of the biggest takeaways I had from my classmates’ presentations.
What are we doing with all the data?
Of all of the content covered in the assessment chapter of the Handbook, the piece about learning analytics intrigued me the most. Cheryl talked about how learning analytics are mostly used for comparisons between countries and for determining overall proficiency levels. She argued that there’s simply too much data to store and manage, and that much of the data therefore gets discarded and goes unused. This struck me as simultaneously true and untrue.
I would argue that learning analytics (essentially, the application and analysis of web analytics aimed at deciphering trends and patterns from huge sets of student data) are used for far more than comparisons between countries and determining overall proficiency levels. I use learning analytics all the time in my work. Within the courses I’ve developed in Moodle, I’m tracking and analyzing completion, dates of accessing different components, time spent on a specific activity or page, and more. I’m sometimes looking at this on an individual level, and sometimes looking at the data collectively to try and determine trends and patterns in order to identify areas for improvement in the course. I know I’m not the only one doing this.
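The kind of pattern-spotting described above can be sketched with a toy example. This is a minimal sketch, assuming a much-simplified log format; real Moodle logs are far richer (and messier), and the field names here are invented for illustration:

```python
from collections import defaultdict

# Hypothetical activity-log records, invented for this sketch.
# Each record: (student_id, activity, completed, minutes_spent)
logs = [
    ("s1", "quiz-1", True, 12),
    ("s1", "reading-1", True, 30),
    ("s2", "quiz-1", False, 4),
    ("s2", "reading-1", True, 25),
    ("s3", "quiz-1", True, 15),
    ("s3", "reading-1", False, 3),
]

def activity_summary(records):
    """Aggregate per-activity completion rate and average time spent."""
    totals = defaultdict(lambda: {"n": 0, "done": 0, "minutes": 0})
    for student, activity, completed, minutes in records:
        t = totals[activity]
        t["n"] += 1
        t["done"] += int(completed)
        t["minutes"] += minutes
    return {
        activity: {
            "completion_rate": t["done"] / t["n"],
            "avg_minutes": t["minutes"] / t["n"],
        }
        for activity, t in totals.items()
    }

# A low completion rate paired with low time spent can flag an activity
# that students are bouncing off of, i.e. an area for course improvement.
for activity, stats in sorted(activity_summary(logs).items()):
    print(activity, stats)
```

The same aggregation, grouped by student instead of by activity, supports the individual-level view mentioned above.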
At the same time, I think Cheryl’s right that a lot of data gets discarded and goes unused. I’m well aware that there are lots of things Moodle is tracking and a lot of data it is collecting that I never use. Clicks per page, time logged in, location of access, and so much more. Which has me wondering… why are we even collecting this information?
When I conduct a survey, collecting information from students, I only ever ask questions and collect information that I know I’m going to use. It’s considered unethical to collect information and not use it. Why does the same concept not apply when it comes to our educational technology tools? Why do we allow these tools to collect oodles of information that we know we’re never going to use? Not only that, but information that students probably don’t even know is being collected. Where’s the ethics in that? I would love for our educational technology tools, and our learning management systems in particular, to give educators more flexibility around selecting what data is collected, so that we can, in turn, be more transparent with our learners.
Is multimedia learning really that different from traditional classroom learning?
While viewing my classmates’ videos and reading through their summaries, I couldn’t help but notice how many similarities there were between the best practices and principles for the in-person and online settings. In the advanced multimedia section, we were introduced to a number of different principles and theories, including:
- Guided discovery learning process: Learners acquire knowledge through self-discovery, and by connecting new knowledge with existing knowledge.
- Learner control principle: When learners can exert control over things such as content, pace, ways of displaying information, etc., it improves engagement and allows for instruction adapted to student preference.
- Self-determination theory: If students have control over their learning, they will be more motivated.
- Collaboration principle: Collaborating while learning is effective when the task is cognitively demanding enough to warrant collaboration.
While all these principles are meant to guide the use of multimedia for learning, they all seem equally relevant to traditional methods of teaching. The collaboration principle, for example, is just as relevant to a group project that does not involve multimedia as it is to one that does. Similarly, learners acquire knowledge through self-discovery and by connecting new knowledge with existing knowledge regardless of whether they are learning from a digital tool, a paper-based book, or a teacher lecturing at the front of the room. The principles of learning, best practices, and good pedagogy are always relevant, regardless of whether learning is happening through digital technology or not.
When reading the digital equity section of the textbook for my own group’s project, the flip side of this became evident as well: the problems of traditional teaching are sometimes carried over to become problems of teaching with digital technology. Regardless of access to technology, learner outcomes were different depending on the learners’ socio-economic status. These same outcome differences, and the reasons behind them, sounded very similar to the outcomes of at-home reading programs. Regardless of whether technology was involved in the learning process or not, a student who did not have parental support and involvement in their at-home learning (among a number of other factors), was not going to reach the same outcomes.
We often think of teaching with digital technology as something new and different, but as these principles of multimedia learning demonstrate, it’s often not all that different after all. (Is this another argument for Clark’s side of the debate?!).
*Lest the reader fall under the spell of the principles and theories outlined above, I did want to mention that not all are empirically supported.
Proof that research has a purpose
One of the conversations that has come up in our course recently is around the purpose of research, and whether or not it’s useful to an educator. The argument has been made that research often takes too long to be published and to reach the educator, and by the time it does, the educator has already tested the technology in their classroom and refined it until it worked. Research is too far behind to actually inform practice, so the argument goes.
To me, Clay, Sean and Jeremy’s section of the textbook, covering the basic principles of multimedia learning, demonstrated the importance of research. Sometimes the principles and strategies that have become integral to our teaching aren’t actually the most effective, and only research can point this out to us. Several of the principles outlined in this section run counter to the way we often teach.
One example of this was the split-attention principle, which says that when two sources of information that are supposed to be integrated are presented separately, it creates an excessive cognitive load and the learner will struggle. How often do we present a learner with a worksheet detailing how to use computer software and then get them to work through the worksheet while using the software in a computer lab? We’ve had classes like that in our own EdTech program, when learning about different software applications such as Excel.
This section also introduced us to the redundancy principle, which says that when the same information is presented in multiple forms simultaneously, learners may become confused. This means, for example, that when a picture already has a caption describing it, we should not also repeat that description in the body text. Summarizing and reiterating information is a mainstay of education (I don’t know how many times I’ve heard the phrase “the more times they hear it, the more likely it is to stick”), but this principle puts at least some parts of this practice into question.
Go, research, go!
Applying the learning to my everyday work
Of all of the chapters, I think my favourite was the ‘basic principles of multimedia learning’ chapter presented by Jeremy, Clay and Sean. Everything in this presentation seemed directly relevant to the work I do every day developing online learning materials. With every principle or model they talked about, I could see how decisions we made in developing our online pre-arrival program matched with the principle, or identify potential changes we could make based on these principles. Really, these principles were what I’d been looking for at the beginning of the project to help inform its development, but was unable to find because I didn’t have the right base of knowledge and proper search terminology. While Sean, Clay and Jeremy only discussed four different principles, they also mentioned that the textbook covers over 20. I’m excited to take a look at the others!
Is an AI-based essay marking tool possible?
In the assessment section, Ben brought up the idea of automated testing systems, particularly in conjunction with PISA testing (PISA is the OECD’s Programme for International Student Assessment, which tests 15-year-old students from all over the world in reading, mathematics and science). Automated testing systems seem great; what teacher actually likes grading tests? Plus, bubble answer sheets have been all the rage in higher education for a long time. However, Ben also mentioned using these automated testing systems to mark essay questions, and alarm bells immediately went off in my mind.
I’m honestly not convinced it’s possible to have some type of artificial intelligence (AI) mark essays and not have the system be biased and discriminatory. Whose style of writing are we prioritizing and saying is ‘correct’? Which language and terminology would be considered ‘proper English’, and what phrases would students likely receive a deduction for? The concept of AI marking student essays reminds me of the Amazon fiasco in 2018, where they had to scrap an AI recruiting tool that showed bias against women, replicating the bias inherent in the human-centric recruiting system. Would an AI essay-marking tool meet the same fate?
Learning about everything reminds you what matters
Viewing all my classmates’ videos and reading their summaries was a good reminder of how much there is to think about when it comes to technology in education, and how many different areas of our education system technology can impact. As someone who can sometimes seem to develop an interest in everything, it was also a good reminder of what I’m truly interested in. The content in the videos about multimedia principles and assessment/analytics captured my attention and spurred my interest much more than those discussing technology and leadership, curriculum, and blended learning. I guess I know where to focus my attention moving forward!