
Thoughts on: National Students as Partners Roundtable 2016

I was recently invited by @UQKelly – Kelly Matthews of the University of Queensland – to attend the National Students as Partners Roundtable on a glorious Brisbane Spring day. (I’m grateful for this almost as much for the chance to escape a particularly bleak Canberra day as for the exposure to some interesting ideas and the wonderful people working in this space.) This isn’t an area that I’ve had much to do with, and I was invited to bring a critical friend/outsider perspective to proceedings as much as anything.

Students as Partners (which I’ll shorten to SaP because I’ll be saying it a lot) represents, more than anything, a philosophical shift in our approach to Higher Education; it doesn’t seem like too great a stretch to suggest that it almost has political undertones. These aren’t overt or necessarily conventional Left vs Right politics, but more of a push-back against a consumerist approach to education that sees students as passive recipients, in favour of the development of a wider community of scholarship that sees students as active co-constructors of their learning.

It involves having genuine input from students in a range of aspects of university life, from assessment design to course and programme design and even aspects of university governance and policy. SaP is described as more of a process than a product – which is probably the first place that it bumps up against the more managerialist model. How do you attach a KPI to SaP engagement? What are the measurable outcomes in a change of culture?

The event itself walked the walk. Attendance was an even mixture of professional education advisory staff and academics, and I’d say around 40% of those in the room were students. Students also featured prominently as speakers, though academics did still tend to take more of the time, as they had perhaps more to say in terms of underlying theory and describing implementations. I’m not positive, but I think this event was academic-initiated, and I’m curious what a student-initiated and student-owned event might have looked like. None of this is to downplay the valuable contributions of the students; it’s more an observation about the unavoidable power dynamics in a situation such as this.

From what I can see, while these projects are about breaking down barriers, they often tend to be initiated by academics – presumably because students might struggle to get traction in implementing change of this kind without their support, and because students might not feel that they have the right to ask. Clearly many students feel comfortable raising complaints with their lecturers about specific issues in their courses, but suggesting a formalised process for change and enhancement is a much bigger step to take.

The benefits of an SaP approach are many and varied. It can help students to better understand what they are doing and what they should be doing in Higher Education. It can give them new insights into how H.E. works (be careful what you wish for) and help to humanise both the institution and the teachers. SaP offers contribution over participation and can lead to greater engagement and the design of better assessment. After all, students will generally have more of a whole-of-program/degree perspective than most of their lecturers, and a greater understanding of what they want to get out of their studies. (The question of whether this is the same as what they need to get out of their studies is not one to ignore, however, and I’ll come back to it.) For the students that are less engaged in this process, at the very least the extra time spent discussing their assessments will help them to understand those assessments better. A final benefit for students of actively participating in the SaP process is the extra skills that they might develop. Mick Healey developed the map below of the different facets of teaching and learning that SaP enables students to engage with. A suggestion was made that this could be mapped to more tangible general workplace skills, which I think has some merit.

[Image: Mick Healey’s map of the facets of teaching and learning that SaP enables students to engage with]

As with all things, there are also risks in SaP that should be considered. How do we know that the students who participate in the process are representative? Several of the students present came from student politics, which doesn’t diminish their interest or contribution, but I’d say it’s reasonable to note that they are probably more self-motivated, and driven by a wider range of factors, than some of their peers. When advocating for a particular approach in the classroom or in assessment, will they unconsciously lean towards something that works best for them? (Which everyone does at some level in life.) Will their expectations or timelines be practical? Another big question is what happens when students engage in the process but then have their contributions rejected – might this contribute to disillusionment and disengagement? (Presumably not if the process is managed well, but people are complicated and there are many sensitivities in Higher Ed.)

To return to my earlier point, while students might know what they want in teaching and learning, is it always what they need? Higher Ed can be a significant change from secondary education, with new freedoms and responsibilities and new approaches to scholarship. Many students (and some academics) aren’t trained in pedagogy and don’t always know why some teaching approaches are valuable or what options are on the table. From a teaching perspective, questions of resistance from the university, and of extra time and effort being spent for unknown and unknowable outcomes, should also be considered. None of these issues are insurmountable, but they do need to be considered when planning to implement this approach.

Implementation was perhaps my biggest question when I came along to the Roundtable. How does this work in practice and what are the pitfalls to look out for? Fortunately there was a lot of experience in the room and some rich discussion about a range of projects that have been run at UQ, UTS, Deakin, UoW and other universities. At UoW, all education development grants must now include an SaP component. In terms of getting started, it can be worth looking at the practices that are already in place and what the next phase might be. Most if not all universities have some form of student evaluation survey. (This survey is, interestingly, an important part of the student/teacher power dynamic, with teachers giving students impactful marks on assessments and students reciprocating with course evaluations, which are taken very seriously by universities, particularly when they are bad.)

A range of suggestions and observations for SaP implementations were offered, including:

  • Trust is vital, keep your promises
  • Different attitudes towards students as emerging professionals exist in different disciplines – implementing SaP in Law was challenging because content is more prescribed
  • Try to avoid discussing SaP in ‘teacher-speak’ too much – use accessible, jargon-free language
  • Uni policies will mean that some things are non-negotiable
  • Starting a discussion by focusing on what is working well and why is a good way to build trust that makes discussion of problems easier
  • Ask the question of your students – what are you doing to maximise your learning?

These images showcase a few more tips and a process for negotiated assessment.

[Images: Students as Partners tips; negotiated assessment process]

There was a lot of energy and goodwill in the room as we discussed ideas and issues with SaP. The room was set up with a dozen large round tables holding 8-10 people each, and there were frequent breaks for table discussions during the morning and then a series of ‘world cafe’ style discussions at tables in the afternoon. On a few occasions I was mindful that some teachers at the tables got slightly carried away in discussing what students want when there were actual, real students sitting relatively quietly at the same table, so I did what I could to ask the students themselves to share their thoughts on the matter. On the whole I sensed a small degree of scepticism from some of the students present about the reality vs the ideology of the movement. Catching a taxi to the airport with a group of students afterwards was enlightening – they were in favour of SaP overall but wondered how supportive university executives truly were and how far they would let it go. One quote that stayed with me during the day, as Eimear Enright shared her experiences, was a cheeky comment she’d had from one of her students – “Miss, what are you going to be doing while we’re doing your job?”

On the whole, I think that a Students as Partners approach to education has a lot to offer and it certainly aligns with my own views on transparency and inclusion in Higher Ed. I think there are still quite a few questions to be answered in terms of whether it is adequately representative and how much weighting the views of students (who are not trained either in the discipline or in education) should have. Clearly a reasonable amount, but students study because they don’t know things and, particularly with undergraduate students, they don’t necessarily want to know what’s behind the curtain. The only way to resolve these questions is by putting things into practice, and the work being done in this space is being done particularly well.

If you’d like a few extra resources, you might find these interesting:

International Journal for Students as Partners – https://mulpress.mcmaster.ca/ijsap 

Students as Partners Australia network – http://itali.uq.edu.au/content/join-network 

Student voice as risky praxis: democratising physical education teacher education

UTS Student voice in university decision making

Thoughts on: Teaching online (in Teaching thinking: Beliefs and knowledge in Higher Education) (Goodyear, P. 2002)

Writing about work by your supervisor feels a little strange but, as adults and scholars, it really shouldn’t. Obviously there is a power dynamic and a question for me of what to do if I disagree with him. Putting aside the matter that Peter Goodyear has worked and researched in this field forever and is highly regarded internationally while I am essentially a neophyte, I’m almost certain that his worst reaction would be the slightest brow-crinkling and a kindly, interested “ok, so tell me why”. He even made the point that the research may now be dated but it could be worth following the citation trail. Fortunately none of this is an issue because, as you’d hope from your supervisor, it’s pretty great and there is much to draw from it.

In summary, this chapter focuses on understanding what and how teachers think when they are teaching online. Sadly perhaps, little has changed in the nature of online teaching in the 14 years since this was written – the online teaching activities described are largely related to students reading papers and participating in discussions on forums. This gives the chapter a degree of currency in terms of the technology (although a few questions emerged for me in terms of the impact of social media) and I imagine that little has changed in teacher thought processes in this time related to assessing and trying to engage students online.

In some ways it’s the methodology used in the study that is the most exciting part of this – it steers away from the sometimes problematic reliance on transcript analysis often used (at the time, at least) in research on online learning and makes more use of the opportunities for observation. Observing a teacher reading, processing and replying to discussion forum posts offers opportunities for insight into their thoughts that are far richer than one might get from observing face-to-face teaching. By using a combination of concurrent and retrospective verbalisation and interview, a rich picture emerges.

Concurrent verbalisation involves getting the tutor to keep up a kind of stream-of-consciousness dialogue as they work on the discussion posts, with the researcher prompting them if they fall silent for more than 10 seconds. This can prove difficult for the teacher, as they sometimes need to stop speaking to concentrate on the replies they are writing, but a balance is generally found. The session is also videotaped and the researcher and teacher watch it back together (‘stimulated recall’), which gives the teacher the opportunity to discuss what they were thinking in the quiet moments as well as enabling them to expand on their recorded comments. In terms of understanding the things that are important to teachers and how they work with their students, I find this method really exciting. I’m not at all sure how or if it will align with my own research when I come to it, but this rich insight seems invaluable.

The author opens the chapter by thoroughly going through the motivations for researching teaching – ranging from an abstracted interest in it as a good area for study to a more action-research-oriented focus on improving specific aspects of teaching practice. He explores the existing literature in the field – particularly in relation to online learning – and finds that (at the time) there were a number of significant gaps in research relating to practice, and he proceeds to set out six high-level research questions relating to online teaching. It seems worthwhile sharing them here:

  1. What are the essential characteristics of online teaching? What tasks are met? What actions need to be taken? Are there distinct task genres that further differentiate the space of online teaching?

  2. How do these practices and task genres vary across different educational settings (e.g between disciplines, or in undergraduate vs postgraduate teaching, or in campus based vs distance learning) and across individuals?

  3. For each significant kind of online teaching, what knowledge resources are drawn upon by effective teachers? How can we understand and represent the cognitive and other resources and processes implicated in their teaching?

  4. How do novice online teachers differ from expert and experienced online teachers? How do they make the transition? How does their thinking change? How does the knowledge on which they draw change? How closely does this resemble ‘the knowledge growth in teaching’ about which we know from studies of teaching in other, more conventional, areas?…

  5. What do teachers say about their experiences of online learning? How do they account for their intentions and actions? How do their accounts situate action in relation to hierarchies of belief about teaching and learning (generally) and about teaching and learning online?

  6. How do learners’ activities and learning outcomes interact with teaching actions? (p.86)

Skipping forward, Goodyear conducted the research with a number of teachers working online and identified several key factors that shape what and how teachers teach online. The focus of their attention – is it on the student, the content, how well the subject is going, whether students are learning, the technology, how students will respond to their feedback, etc. – can vary wildly from moment to moment. Their knowledge of their students – particularly when they might never meet them in person – can shape the nuance and personalisation of their communications. This also ties to “presentation of self” – also known as presence – which is equally important in forming effective online relationships. Understanding of online pedagogy, and attitudes towards it, are unsurprisingly a big factor in success in teaching online; this also impacts on teachers’ ability to manage communication and conflict in an online space, where normal behaviours can change due to perceived distance.

There were a lot of other noteworthy ideas in this chapter that are worth including here and it also sparked a few of my own ideas that went off on something of a tangent.

Those who foresee an easy substitution of teaching methods too frequently misunderstand the function or underestimate the complexity of that which they would see replaced (p.80)

Teaching is not an undifferentiated activity. What is involved in giving a lecture to 500 students is different from what is involved in a one-to-one, face-to-face, tutorial. Also, interactive, face-to-face, or what might be called ‘live’ teaching is different from (say) planning a course, giving feedback on an essay, designing some learning materials, or reflecting on end-of-course student evaluation reports. (James Calderhead structures his 1996 review of teachers’ cognitions in terms of ‘pre-active’, ‘interactive’ and ‘post-active reflection’ phases to help distinguish the cognitive demands of ‘live’ teaching from its prior preparation and from reflection after the event) (p.82)

The affordances of the user interface are an important factor in understanding how online tutors do what they do. This is not simply because online tutors need to understand the (relatively simple) technical procedures involved in searching, reading and writing contributions. Rather the interface helps structure the tutors’ tasks and also takes some of the cognitive load off the tutor (p.87)

Studies of ‘live’ classroom teaching in schools have tended towards the conclusion that conscious decision-making is relatively rare – much of what happens is through the following of well-tested routines (Calderhead, 1984). While swift routine action can be found in online tutoring, its curiously asynchronous nature does allow more considered problem solving to take place (p.97)

Many of these ideas crystallise thoughts that I’ve come to over recent years and which I’ve shared with Peter in our supervision meetings. I’m going to choose to believe that his inner voice is saying at these points, ‘good, you’re on track’ rather than ‘well, obviously and I wrote about this a decade and a half ago’. This is why we go with this apprenticeship model I guess.

The other random thought that emerged from reading this paper was that, as we get more comfortable with using video and asking/allowing students to submit videos as assessments, we’ll need new ways to ‘read’ videos. Approaches to this will doubtless already exist in the scholarly literature, but they may not be as widely known as we need them to be.

Final thoughts on DDLR / DDeLR

It feels like I’ve been banging on about this subject forever now but with assessments now finalised, it seems like a good time for a final wrap up.

In broad terms, I was a little disappointed with my students. It might have been a bad time of year to run this subject, with its demanding workload, but the majority of them seem to have only put in the absolute barest effort needed to pass. Assessment instructions which I thought were pretty clear weren’t followed and most of the reflections lacked any great insight. I had to ask many of them to rework and resubmit their assessments just to meet the minimum requirements.

What this does make me ask is whether this is the fault of my students or me.

As I haven’t taught formal classes for more than a decade, there are a lot of things that I haven’t had to deal with in teaching an ongoing subject with rigorous formal assessment. I also have a tendency at times to over-complicate things because it just seems like it makes them better. This probably also extends to my communication with my students and my expectations of them.

Fortunately, I am still keen to try this again.

Even during the marking process, as I had to walk away from the computer and swear at the walls, I was constantly reshaping the course structure, the assessments and the class activities in my mind to help avoid some of the issues that were arising. The fact that a handful of the “good” students were able to understand and follow my instructions suggests that I’m on the right track at least and am not entirely to blame, but the fact that more than a few got things quite wrong does tell me that there is more work to be done.

I need to make it clearer that when students are creating draft learning resources, they actually need to be resources – things, objects – rather than broad and loose activity plans for a class. I need to explain clearly that the final learning resources should be the same as the draft learning resources, but improved based on testing and feedback. To be honest, these things seemed so self-evident to me that I couldn’t conceive of anyone not getting it, but there we are.

I tried to put into practice a number of ideas that I’ve encountered in the education design community about getting students more involved in designing parts of their own assessments, but this really just confused more people than it helped. (Which was a shame, as I do believe that it is a valid and valuable approach.)

I tried to give my learners the freedom to follow their particular learning needs and interests, but for the most part this just gave them the opportunity to follow the path of least resistance and allowed for some fairly lazy work. I also should’ve factored into my thinking that the first week of a class is often plagued by technical (logins not working) and administrative hassles, and made allowances for this by not expecting too much work to be achieved in that first week. (That said, we have a strong need to demonstrate engagement in class activities to receive funding for students who later drop out, and I was certainly able to prove that.)

I think next time around there will need to be a little less freedom, a bit more structure and a lot more clarity and simplicity.

On the whole I am happy that I have managed to get these teachers doing things they haven’t done before and I think they have developed useful skills and knowledge. I’d just like to do more.

Designing DDLR & DDeLR – Assessments

Today is all about pinning down the most appropriate types of assessments for this subject. Yesterday I think I got a little caught up in reviewing the principles of good assessment – which was valuable but it might also be better applied to reviewing and refining the ideas that I come up with.

For what it’s worth, these are the notes that I jotted down yesterday that I want to bear in mind with these assessments: DDLR Assessment ideas

Looking over the four versions of this subject that my colleague J has run in the last 2 years has been particularly enlightening – even if I’m not entirely clear on some of the directions taken. The course design changed quite substantially between the second and third iterations – from a heavily class-based activity and assessment focus to more of a project based structure. (For convenience I’ll refer to the subjects as DDLR 1, 2, 3 and 4)

DDLR 1 and 2 provide an incredibly rich resource for learning to use eLearn (our Moodle installation) and each week is heavily structured and scaffolded to guide learners through the process of developing their online courses. The various elements of the units of competency are tightly mapped to corresponding activities and assessment tasks – more so in DDLR 2. (Image from the DDLR subject guide)

I have to wonder however whether the course provides too much extra information – given the relatively narrow focus on designing and developing learning resources. Getting teachers (the learner cohort for this subject) to learn about creating quizzes and assignments in Moodle is certainly valuable but are these truly learning resources? This may well be one of the points where my approach to this subject diverges.

The shift in approach in DDLR 3 and DDLR 4 is dramatic. (As far as a diploma level course about designing learning resources might be considered dramatic, at least.) The assessments link far more closely to the units of competency and all save the first one are due at the end of the subject. They are far more formally structured – template based analysis of the target audience/learners, design documents, prototypes and finished learning resources, as well as a reflective journal.

It does concern me slightly that this subject has a markedly lower rate of assessment submission/completion than the two preceding ones. That said, this subject is often taken by teachers more interested in the content than in completing the units of competency, and that may just have been the nature of this particular cohort.

This new assessment approach also seems far more manageable from a teaching/admin perspective than the previous ones, which required constant grading and checking.

My feeling is that this is a more sustainable approach but I will still look for ways to streamline the amount of work that is required to be submitted.

The next step was to map the various elements of competency to assessment items. The elements for both units of competency are written differently enough to need to be considered separately (unfortunately) but they both still broadly sit within the ADDIE (Analyse, Design, Develop, Implement, Evaluate) framework. ADDIE seems like a useful way to structure both the course and the assessments so I have mapped the elements to this. I have also highlighted particular elements that are more indicative of outputs that might be assessed. Working through the analysis process will be quite dry (and could potentially come across as slightly patronising) so finding an engaging approach to this will be important.

[Photo: elements of competency mapped to ADDIE phases] (I’m also quite keen to bring digital badges into this process somehow, though that’s a lower priority at the moment)

Finally, I had a few ideas come to me as I worked through this process today that I might just add without further comment.

DDLR / DDeLR ideas

Get the class to design and develop a (print-based?) checklist / questionnaire resource that might be used to address the DDLR 1 and DDeLR 1 UoCs. Get someone else in the class to use it to complete their Analysis phase.

Can I provide a range of options for the forms the assessment/resource pieces might take?

Try to develop a comprehensive checklist that teachers can use on the resources that they produce, to raise the overall quality of resources at CIT. (Again, this could be a student-led tool – the benefit of this is that it makes them think much more about what a good resource requires – does this meet any UoCs??)

Convert the print-based Analysis document into a web resource – book tool or checklist maybe? Also possibly fix the print-based one first, starting from a deliberately badly designed, faulty version. (Lets me cover some readability / usability concepts early)

How much of this subject is leading the learners by the hand? How much is about teaching them how to use eLearn tools?

Could one of the learning resources be about developing something that teaches people how to use a particular eLearn tool???

Need to identify what kinds of resources teachers can make. Good brainstorm activity in week 1.

Think about the difference between creating a learning resource and finding one and adding it to your course. (Still important but tied to the UoC?)

If I give teachers the option to use previously developed resources (authenticity issues??), they should still provide some kind of explanatory document AND/OR edit the resource and discuss what changes they made and why.

Need to consider the relative strengths and weaknesses of the various types of tools.

In-class feedback of learning resources to better support the evaluation and implementation based elements of competency.

One activity (possible assessment) could be for learners to gather information needed to do an analysis from a partner in the group. (and vice versa) Might lead to a more critical examination of what information is being sought. Learner might even provide suggestions for design/development?

Sample resources?