Category Archives: mooc

Ed Tech must reads – column #20

First published in Campus Morning Mail 1st Feb 2022

How not to write about HyFlex or online learning from Bryan Alexander

While most academic discourse follows intellectually rigorous conventions, there is one area that seems resistant to them. Commentary about technology enhanced and online learning, particularly from those who are new to it, often reveals a lack of understanding of the field and dwells instead on anecdata and laments for the good old days. Bryan Alexander steps through some of the most common flaws in these kinds of pieces in this entertaining post that calls for better conversations about this space. 

Reverse engineering the multiple-choice question from The Effortful Educator

Multiple-choice questions (MCQs) are invaluable for making assessment at scale manageable and providing learners with quick feedback about their understanding of material. As learning tools though, they can be superficial and rarely reflect authentic uses of knowledge. The alternate approach to MCQs laid out in this post asks students to craft questions that use provided answers instead – the Jeopardy! approach to quizzing perhaps. While it may be more labour intensive to assess, this adds a richness to these kinds of questions.

Framework for Ethical Learning Technology from ALT

As the education technology market has grown and usage has become the norm, valid questions have been raised about factors beyond learning and teaching benefits. What are the drivers for businesses and university leadership in using them? How do we ensure that the focus stays on what learners need? The UK’s Association for Learning Technology (ALT) is developing a framework in four quadrants – Awareness, Professionalism, Care and Community, and Values – to help guide thinking in this brave new world.

Contemporary Approaches to University Teaching MOOC 2022 from CAULLT

Many universities offer some form of educational development to their teachers, but if yours doesn’t or you would like to supplement it, this MOOC developed by 10 Australian universities under the auspices of the Council of Australasian University Leaders in Learning and Teaching is a particularly rich free course to consider. Enrolments for the 2022 offering (28/2 to 29/7) are now open. It covers everything from Teaching your first class to Collaborative learning and The politics of Australian Higher Education.

Best puzzle games // 10 indie puzzle games you need to try from Cutie Indie Recs

I’ve long believed that education can learn a lot from game design in terms of creating engaging and enriching learning experiences. This nine minute video from Cutie Indie Recs showcases some of the incredible variety and creativity that can be found in PC and mobile games now. I’m not entirely sure how to convert these to teaching but maybe inspiration will strike.

Ed Tech must reads – Column 5

First published in Campus Morning Mail 14th Sept 2021

Helpful tips for Hybrid teaching from Dr. Jenae Cohn (Twitter)

Hybrid or hyflex is one of those new modes of teaching that have seemingly materialised fully-formed in the last 18 months. It involves concurrently teaching face-to-face and online students, creating opportunities for them to learn and work together synchronously. This short thread from Dr. Jenae Cohn on Twitter offers some useful practical tips for teachers newly working in this space, including not referring to online participants as “people who are not here”.   

Australian Educational Podcasting Conference – October 6th and 7th

Audio offers an accessible and oftentimes more intimate way to connect with information. With lower technical barriers to entry for podcast creators than video, educators are embracing this format as a way to share and discuss ideas in a range of disciplines. The free Australian Educational Podcasting Conference returns on October 6th and 7th, with discussions about how people are using podcasts in teaching and practical workshops.

CAUDIT Higher Education Reference Models from CAUDIT

CAUDIT is the Council of Australasian University Directors of Information Technology. If you work at a member university, you can log in to access a number of standard models that show how IT departments understand the many business and data aspects of a university ecosystem. At first glance this may seem a little niche, but for anyone with an interest in truly understanding how all the pieces fit together in a university, this is an invaluable resource.

The edX Aftermath from eLiterate

A couple of months ago, the open Harvard/MIT-led MOOC platform edX announced that it was merging with the giant OPM (Online Program Management) business 2U. This represented a fairly significant swing to a more commercial orientation for a platform with lofty aims. Michael Feldstein from eLiterate has some strong feelings about this, in this informative article taking us through what has happened in the MOOC space since the big MOOC hype cycle of the early 2010s, and discussing what the next moves could and should be.

Is your smart fridge judging you? From Dan Hon (Twitter)

Finally, this amusing thread from @hondanhon on Twitter details some strange feedback he recently received from his Internet connected smart fridge.

SOCRMx Week #8: The End

Well I probably said all that I needed to say on my general feelings about this MOOC in my last post so this is largely for the sake of completion. The final week of this course is a peer assessed piece of writing analysing the methods used in a sample paper. Turns out that I missed the deadline to write that – I may even have been working on my Week 7 post when that deadline fell – so this appears to be the end of the road for me. I could still go through and do the work but I found the supplied paper unrelated to my research and using methodologies that I have little interest in. The overall questions raised and things to be mindful of in the assessment instructions are enough.

  • What method of analysis was used?
  • How was the chosen method of analysis appropriate to the data?
  • What other kinds of analysis might have been used?
  • How was the analysis designed? Is the design clearly described? What were its strengths and weaknesses?
  • What kind of issues or problems might one identify with the analysis?
  • What are the key findings and conclusions, and how are they justified through the chosen analysis techniques?

And so with that, I guess I’m done with SOCRMx. In spite of my disengagement with the community, the resources and the structure really have been of a high standard and, more importantly, incredibly timely for me. As someone returning to study after some time away, without ever really having had a formal research focus, I found there was a lot of assumed knowledge about research methodology, and having this opportunity to get a bird’s-eye view of the various options was ideal. I know I still have a long way to go but this has been a nice push in the right direction.

 

Week #6 SOCRMx – Quantitative analysis

This section of the SOCRMx MOOC offers a fair introduction to statistics and the analysis of quantitative data. At least, enough to get a grasp on what is needed to get meaningful data and what it looks like when statistics are misused or misrepresented. (This bit in particular should be a core unit in the mandatory media and information literacy training that everyone has to take in my imaginary ideal world.)

The more I think about my research, the more likely I think it is to be primarily qualitative but I can still see the value in proper methodology for processing the quant data that will help to contextualise the rest. I took some scattered notes that I’ll leave here to refer back to down the road.

Good books to consider – Charles Wheelan: Naked Statistics: Stripping the dread from data (2014) & Daniel Levitin: A Field Guide to Lies and Statistics: A Neuroscientist on How to Make Sense of a Complex World (2016)

Mean / Median / Mode

Mean – straightforward average.

Median – put all the results in a line and choose the one in the middle. (Better for average incomes as high-earners distort the figures)

Mode – the most frequently occurring value; in a histogram, which bin has the most hits in it
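
To make those three concrete, here is a minimal sketch in Python (my own illustration with invented figures, not something from the course) showing why the median is the better “average” for incomes:

```python
# A quick illustration (made-up numbers) of mean vs median vs mode:
# a single high earner drags the mean up but barely moves the median.
from statistics import mean, median, mode

incomes = [45_000, 48_000, 52_000, 55_000, 60_000, 60_000, 1_200_000]

print(mean(incomes))    # ~217,143 - distorted by the one very high earner
print(median(incomes))  # 55,000 - the middle value; closer to a "typical" income
print(mode(incomes))    # 60,000 - the most frequently occurring value
```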

Student’s t-test – a method for interpreting what can be extrapolated from a small sample of data. It is the primary way to understand the likely error of an estimate depending on your sample size.

It is the source of the concept of “statistical significance.”

A P-value is a probability. It summarises the incompatibility between a particular set of data and a proposed model for the data (the null hypothesis). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366529/

“a significance level is an indication of the probability of an observed result occurring by chance under the null hypothesis; so the more you repeat an experiment, the higher the probability you will see a statistically significant result.”
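
To ground the t-test and p-value ideas above, here is a small hedged example using SciPy; the two samples are invented, and the 0.05 threshold is just the usual convention rather than anything magical:

```python
# An illustrative sketch (invented numbers, not course material) of where a
# p-value comes from: a two-sample Student's t-test comparing the means of two
# small samples. The p-value is the probability of seeing a difference at least
# this large if the null hypothesis (no real difference) were true.
from scipy import stats

group_a = [72, 75, 78, 71, 74, 77, 73]
group_b = [68, 70, 66, 71, 69, 67, 70]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(t_stat, p_value)  # a p-value below the conventional 0.05 is labelled "statistically significant"
```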

Overall this entire domain is one where I think I’m only really going to appreciate the core concepts when I have a specific need for it. The idea of a distribution curve where the mean of all data points represents the high point and standard deviations (determined by a formula) show us the majority of the other data points seems potentially useful but, again, until I can practically apply it to a problem, just tantalisingly beyond my grasp.
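
For future me, a small simulation (again my own sketch, with made-up parameters) of that distribution-curve idea: the standard deviation is just the square root of the average squared distance from the mean, and for roughly bell-shaped data about 68% of points sit within one standard deviation and about 95% within two.

```python
# A rough sketch of the bell-curve rule of thumb using simulated data.
import random
from statistics import mean, stdev

random.seed(1)
data = [random.gauss(100, 15) for _ in range(10_000)]  # simulated scores: mean 100, SD 15

m, sd = mean(data), stdev(data)
within_1sd = sum(abs(x - m) <= sd for x in data) / len(data)
within_2sd = sum(abs(x - m) <= 2 * sd for x in data) / len(data)
print(round(within_1sd, 3), round(within_2sd, 3))  # roughly 0.68 and 0.95
```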

Can I get a method: The EdinburghX SOCRMx Social Research Methods MOOC Week #1

MOOC Week #1 question responses

Making a blog post is part of the participation in the MOOC. I’m just going to put my answers here at the top so people don’t need to read the rest of my post about the MOOC and methods etc.

I’ve been working on this PhD for a little under two years now, so most of these questions I’ve covered in previous posts but will answer for the sake of the exercise.

  • What kind of topics are you interested in researching?

The relationships between edvisors (academic developers, education designers, learning technologists etc) and academics and institutional management

  • What initial research questions might be starting to emerge for you?

What strategies are used in HE to promote understanding of the roles and value of edvisors among academic staff, and more broadly within the institution? Which among these strategies are effective and why?

How do edvisors see their role and value in Higher Education institutions?

How are edvisor roles understood and valued by academics and HE management?

  • What are you interested in researching – people, groups, communities, documents, images, organisations?

People, groups/organisations, documents

  • Do you have any initial ideas for the kinds of methods that might help you to gather useful knowledge in your area of interest?

Currently leaning towards survey/interview and document analysis – job advertisements and organisational structures

  • What initial questions do you have about those methods? What don’t you understand yet?

Is this the best way to do what I want to do? Are there better alternatives?

  • Do you perceive any potential challenges in your initial ideas: either practical challenges, such as gaining access to the area you want to research or the time it might take to gather data; or conceptual challenges, such as how the method you are interested in can produce ‘facts’, ‘truths’, or ‘valuable knowledge’ in your chosen area?

Not sure yet. I’m conscious that there might be sensitivities and politics to deal with.

Ok, so that’s the end of the ‘homework’ part of the blog. This next bit is what I’d already started writing about why I’m here and so on. 

One of the nice things that comes up from time to time when I discuss my research with academics is that they’ll excitedly start telling me about the methods and methodology that might be helpful. It’s a shame that no two suggested approaches to data collection or analysis have been the same, and that I don’t yet have a rich enough understanding of all the options to be able to compare them. It absolutely all gets noted down though and I will give all of the options extra attention as I come to some conclusion about what I plan to do.

A couple of things strike me about this variety of opinions – chief among them that it can seem almost ideological in some ways. I’ve had people that I’ve barely finished describing the broad research question to swear up and down that their magic potion is the only one that will possibly cure my ailments. This is before I’ve even gotten down to what kind of data I think will be helpful or what my underpinning theories are.

Now I don’t question the sincerity of these people for a second and I even find it slightly touching that they can be so supportive and encouraging to a complete stranger. I’m sure that they’ve worked through any number of methods and learnt hard lessons about what works and what doesn’t and are keen to see other people spared some of those difficulties. It does seem though overall that once you’ve landed on a methodological tribe, that’s where you live. (But honestly, this is definitely supposition, I’m sure there’s more nuance than that – or at least I hope so).

If this is the way that things work, I can see positives and negatives. On the positive side, I would hope that pretty well any method or methodology can be valid if you can make a strong enough case for it. On the negative side, if there is an ingrained tribalism to a method and your reviewer lives in a different tribe, will you get the fairest hearing? Scholarship is meant to be grounded in objectivity, but if a scholar has sunk part of their soul into a particular theory or a particular approach to scholarship, might you not have to work a little harder if you choose a different angle?

Working out the angle in the first place is my real challenge. I have some ideas about where I’m going and what I want to explore, and I think there are some theories that will inform this, but I still feel that I’m very much in unknown-unknowns territory when it comes to methods. There was a mandatory research methods unit when I did my Masters way back when, but at the time I had no intention of moving into further research so I left it until last. Without seeing any particular application for the unit, I did the base level of work needed to finish it – actually I’m being ungenerous there, I still managed a Distinction – and promptly forgot everything.

There are research training opportunities available at my current uni but they are almost entirely catered to on-campus, full-time students, so it’s up to me to find my own way. It’s only recently that I’ve felt that I had a reasonable grasp on my topic, so I’ve been happy to stay focused on the literature and put the how-to-research part on the back-burner. Which is all a very long-winded way of talking about why I’ve started the edX EdinburghX SOCRMx Social Research Methods MOOC. From what I can see, this offers the overview of options that I need – they seem to favour creating one’s own bespoke set of methods, which suits my personal approach – and I’m hopeful that this will give me the focus that I’ve been lacking. I’ll obviously be keeping an eye out for the approaches that have already been commended to me; hopefully I’ll get a better picture of where exactly they sit on the map.

There’s a couple of other things that I’m already liking about this MOOC – there seems to be a manageable number of participants (~94 posts in the introduce yourself forum) and the MOOC moderators seem quite keen on the use of our own blogs for reflections and communication.

Oh and now I’m completely sold – I know this is a pretty basic tool but this is essentially exactly what I’ve been looking for. They’ve used a multi-choice quiz to provide detailed feedback about methods that might suit particular research preferences. (Kind of like a BuzzFeed quiz that isn’t harvesting your personal data for fun and profit). (All the answers are right)

[Image: MOOC methods questions]

There was also a nice video explainer of Epistemology – which I kind of knew was essentially about ways of knowing, but wasn’t clear on why it mattered, or on the nature of the different ways of knowing (e.g. getting information from an authority figure vs experience/logic/science/tradition etc.).

So yes, pretty happy with what I’ve seen so far here

Quick thoughts on two articles from 2013 about MOOCs

MOOC hype was really hitting its straps in 2013 and after a while it became pretty easy to predict what you would be reading in think pieces such as the two that I’ve just been reading from The Conversation. (Reading these in 2016 offers some opportunities to evaluate the claims on both sides with a little more perspective)

The articles are:

http://theconversation.com/from-moocs-to-harvards-will-online-go-mainstream-18093

http://theconversation.com/the-failure-of-udacity-lessons-on-quality-for-future-moocs-20416

While there are elements of nuance, they fall, respectively, mostly at the hyper and hater ends of the spectrum that is common in discussions of this space.

Things present that you’d expect to find:

  • Reference to the Gartner Hype cycle
  • Discussion of the high dropout rates of MOOCs
  • Assumptions that online delivery = cost savings (still? Really?)
  • Students are going to collectively abandon conventional universities any second now
  • Concerns from academics about job losses
  • Criticisms of simplistic pedagogy in MOOCs
  • Generalised criticisms of eLearning
  • Questioning the educational credentials of MOOC entrepreneurs

I won’t say that some of these questions and points raised aren’t valid, but nowhere in either of these thinkpieces (and you know they’re thinkpieces because they don’t reference any other literature) are there any objective facts. To be fair, a lot was unknown at that time – though it kind of still is, which raises another set of questions – about where MOOCs might go, and speculation loves a vacuum.

It’s also very easy (and fun) for me to sit here and make fun of MOOC hypers and haters. For the record, I think MOOCs have a role to play in fostering interest in further learning but in my experiences of them over the last few years, I haven’t seen one that answers my questions about what they offer that a trip to the library or a graze over YouTube doesn’t.

Ok, that’s actually not true – Kevin Werbach’s Coursera MOOC on Gamification did some nice things with peer assessment that lifted it above the crowd and MITx’s MOOC on Implementing Education Technology was also valuable. It’s worth noting that both of these sit in discipline areas where you would expect something more from people teaching in the online space.

The rest of the MOOCs I’ve dabbled in, supported or seen hyped to the high heavens by higher ed high flyers have either missed the point or taken a 180 degree turn away from the initial – and far more interesting – philosophical approach underpinning MOOCs in the work done by Downes and Siemens in what are now referred to as cMOOCs.

The one new idea that I found in these posts was an attempt by the author of the first to coin a new acronym – HARVARD  (Highly Accessible (and Rigorous), Very Affordable (and Recognised) Degrees). I guess there was/is a gap in the literature for this concept and maybe its time simply hasn’t arrived yet but it feels like an overreach.

(I think it was an attempt to coin it anyway – I was curious whether it was just something that I’d previously missed or whether it has taken off anywhere and found exactly one hit for it in my searching)

MOOCs have somewhat disappeared from the conversation (small c) these days, with data analytics sweeping in to enjoy a spot as flavour of the month for now. (Given the recent furore over privacy and the Australian Census, and this kerfuffle today though, it might also be in for some interesting times.)

Lodge, J. M. (n.d.). The failure of Udacity: lessons on quality for future MOOCs. The Conversation. Retrieved August 14, 2016, from http://theconversation.com/the-failure-of-udacity-lessons-on-quality-for-future-moocs-20416
Sharrock, G. (n.d.). From MOOCs to HARVARDs: will online go mainstream? The Conversation. Retrieved August 14, 2016, from http://theconversation.com/from-moocs-to-harvards-will-online-go-mainstream-18093

Week 4 of the 11.133x MOOC – Bringing it on home

The final week (ok 2 weeks) of the MITx – Implementation and Evaluation of Educational Technology MOOC – is now done and dusted and it’s time for that slight feeling of “what do I do now?” to kick in.

This final section focused on sound evaluation processes – both formative and summative – during and after your ed tech implementation. This whole MOOC has had a very smooth, organic kind of flow and this brought it to a very comfortable conclusion.

Ilona Holland shared some particularly useful ideas about areas to emphasise in the evaluation stage: appeal (engagement), interest (sparking a desire to go further), comprehension, pace and usability. She and David Reider clarified the difference between evaluation and research – largely that in an evaluation you go in without a hypothesis and just note what you are seeing.

In keeping with the rest of these posts, I’ll add the assignment work that I did for this final unit as well as my overall reflections. Spoiler alert though, if you work with educational technology (and I assume you do if you are reading this blog), this is one of the best online courses that I’ve ever done and I highly recommend signing up for the next one.


 

Assessment 4 – Evaluation process.

  1. Decide why you are evaluating. Is it just to determine if your intervention is improving learners’ skills and/or performance? Is it because certain stakeholders require you to?

We will evaluate this project because it is an important part of the process of implementing any educational technology. We need to be confident that this project is worth proceeding with at a larger scale. It will also provide supporting evidence to use when approaching other colleges in the university to share the cost of a site-wide license.

  2. Tell us about your vision of success for the implementation. This step is useful for purposes of the course. Be specific. Instead of saying “All students will now be experts at quadratic equations,” consider if you would like to see a certain percentage of students be able to move more quickly through material or successfully complete more challenging problems.

Our goal in using PollEverywhere in lectures is to increase student engagement and understanding and to reduce the number of questions that students need to ask the lecturer after the lecture.

A secondary goal would be to increase the number of students attending lectures.

Engagement seems like a difficult thing to quantify but we could aim for a 10% increase in average student grades in assessments based on lecture content. We could also aim for lecturers receiving 10% fewer student questions during the week about lecture content. A 20% increase in attendance also would be a success.

  3. Generate questions that will guide the evaluation. What do you need and want to know regarding the efficacy of your implementation? Are there questions that other stakeholders care about that should also be included? Think about your desired goals and outcomes for the implementation.

Questions for students:

  • I find lectures engaging
  • I am more likely to attend lectures now because of the use of PollEverywhere
  • I find PollEverywhere easy to use
  • PollEverywhere works reliably for me
  • The use of PollEverywhere feedback in lectures has helped deepen my understanding of the content

Questions for lecturers:

  • I have found PollEverywhere easy to use
  • PollEverywhere works reliably for me in lectures
  • PollEverywhere has helped me evaluate and adjust my lectures
  • Fewer students ask me questions between lectures since I started using PollEverywhere
  • Students seem more engaged now

  4. Determine what data and information you need to address the questions and how you will collect it. This could be qualitative or quantitative. You might consider observing teachers and students in action or conducting surveys and interviews. You might look at test performance, participation rates, completion rates, etc. It will depend on what is appropriate for your context.

  • Pre-use survey of students relating to engagement in lectures and their attitudes towards lectures
  • Observation of classes using PollEverywhere in lectures and student activity/engagement
  • Lecture attendance numbers?
  • Use of PollEverywhere near the end of lectures to gather student feedback
  • Comparison of assessment grade averages
  • Feedback from students in tutorials
  • University SELS (Student Experience of Learning Support) and SET (Student Experience of Teaching) surveys
  • Data derived directly from Poll results

  5. Consider how you would communicate results and explain if certain results would cause you to modify the implementation. In a real evaluation, you would analyze information and draw conclusions. Since your course project is a plan, we will skip to this step.

The quantitative data (changes in grades, results from polls in lectures, student surveys, attendance estimates) could be collated and presented in a report for circulation around the college. We could also make a presentation at our annual teaching and learning day – which could incorporate use of the tool.

Qualitative data could be built into case studies and a guide to the practical use of the tool.

Evidence emerging during the trial period could be acted on quickly by discussing alternatives with the pilot group and making changes to the way that the tool is used. This might include changing the phrasing of questions, requesting that students with Twitter access use this option for responding to the poll, or exploring alternative methods of displaying the PollEverywhere results (if PowerPoint is problematic).

Part 5: Reflection

What was difficult about creating your plan? What was easy?

Generally speaking, coming up with the plan overall was a fairly painless experience. The most complicated part was developing tools to identify and evaluate the most appropriate options. This was because the guest speakers gave me so many ideas that it took a while to frame them in a way that made sense to me and which offered a comprehensive process to work through. (This ended up being 3-4 separate documents but I’m fairly happy with all of them as a starting point).

As with all of the activities, once I had discovered the approach that worked for me and was able to see how everyone else was approaching the question, things seemed to fall into place fairly smoothly.

What parts of the course were the most helpful? Why? Did you find certain course materials to be especially useful?

I think I have a fairly process oriented way of thinking – I like seeing how things fit together and how they relate to the things that come before and after. So the sections that dug down into the detail of processes – section 2 and section 4 with the evaluation plans – appealed the most to me.

I can understand that the majority of people working with education technology are in the K-12 area, and so it makes sense that this is where many of the guest experts came from, but this did sometimes seem slightly removed from my own experiences. I had to do a certain amount of “translating” of ideas to spark my own ideas.

What about peer feedback? How did your experiences in the 11.133x learning community help shape your project?

Peer feedback was particularly rewarding. A few people were able to help me think about things in new ways and many were just very encouraging. I really enjoyed being able to share my ideas about other people’s projects as well, and to see a range of different approaches to this work.

General observations

I’ve started (and quit) a few MOOCs now and this was easily the most rewarding. No doubt partially because it has direct relevance to my existing work and because I was able to apply it in a meaningful way to an actual work task that happened to come up at the same time.

I had certain expectations of how my project was going to go and I was pleased that I ended up heading in a different direction as a result of the work that we did. This work has also helped equip me with the skills and knowledge that I need to explain to a teacher why their preferred option isn’t the best one – and provide a more feasible alternative.

While it may not necessarily work for your edX stats, I also appreciated the fact that this was a relatively intimate MOOC – it made dealing with the forum posts feel manageable. (I’ve been in MOOCs where the first time you log in you can see 100+ pages of Intro posts and this just seems insurmountable). It felt more like a community.

I liked the idea of the interest groups in the forum (and the working groups) but their purpose seemed unclear (beyond broad ideals of communities of practice) and after a short time I stopped visiting. (I also have a personal preference for individual rather than group work, so that was no doubt a part of this)

I also stopped watching the videos after a while and just read the transcripts as this was much faster. I’d think about shorter, more tightly edited videos – or perhaps shorter videos for conceptual essentials mixed with more conversational case-study videos (marked optional)

Most of the events didn’t really suit my timezone (Eastern Australia) but I liked that they were happening. The final hangout did work for me but I hadn’t had a chance to work on the relevant topic and was also a little caught up with work at the time.

All in all though, great work MOOC team and thanks.

(I also really appreciated having some of my posts highlighted – it’s a real motivator)

Week 2 of the 11.133x MOOC – getting things done. (Gradually)

The second week (well fortnight really) of the 11.133x MOOC moved us on to developing some resources that will help us to evaluate education technology and make a selection.

Because I’m applying this to an actual project (two birds with one stone and all) at work, it took me a little longer than I’d hoped but I’m still keeping up. It was actually fairly enlightening because the tool that I had assumed we would end up using wasn’t the one that was shown to be the most appropriate for our needs. I was also able to develop a set of resources (and the start of a really horribly messy flowchart) that my team will be able to use for evaluating technology down the road.

I’m just going to copy/paste the posts that I made in the MOOC – with the tools – as I think they explain what I did better than glibly trying to rehash it here on the fly.


 

Four tools for identifying and evaluating educational technology

I’ve been caught up with work things this week so it’s taken me a little while to get back to this assignment, but I’m glad as it has enabled me to see the approaches that other people have taken and clarify my ideas a little.

My biggest challenge is that I started this MOOC with a fairly specific Ed Tech project in mind – identifying the best option in student lecture instant response systems. The assignment however asks us to consider tools that might support evaluating Ed Tech in broader terms and I can definitely see the value in this as well. This has started me thinking that there are actually several stages in this process that would probably be best supported by very different tools.

One thing that I have noticed (and disagreed with) in the approaches that some people have taken has been that the tools seem to begin with the assumption that the type of technology is selected and then the educational /pedagogical strengths of this tool are assessed. This seems completely backwards to me as I would argue that we need to look at the educational need first and then try to map it to a type of technology.

In my case, the need/problem is that student engagement in lectures is low and a possible solution is that the lecturer/teacher would like to get better feedback about how much the students are understanding in real time so that she can adjust the content/delivery if needed.

Matching the educational need to the right tool

When I started working on this I thought that this process required three separate steps – a flowchart to point to suitable types of technology, a checklist to see whether it would be suitable and then a rubric to compare products.

As I developed these, I realised that we also need to clearly identify the teacher’s educational needs for the technology, so I have added a short survey about this here, at the beginning of this stage.

I also think that a flowchart (ideally interactive) could be a helpful tool in this stage of identifying technology. (There is a link to the beginning of the flowchart below)

I have been working on a model that covers 6 key areas of teaching and learning activity that I think could act as the starting point for this flowchart but I recognise that such a tool would require a huge amount of work so I have just started with an example of how this might look. (Given that I have already identified the type of tool that I’m looking at for my project, I’m going to focus more on the tool to select the specific application)

I also recognise that even for my scenario, the starting point could be Communication or Reflection/Feedback, so this could be a very messy and large tool.

The key activities of teaching/learning are:
• Sharing content
• Communication
• Managing students
• Assessment tasks
• Practical activities
• Reflection / Feedback

I have created a Padlet at http://padlet.com/gamerlearner/edTechFlowchart and a LucidChart at https://www.lucidchart.com/invitations/accept/6645af78-85fd-4dcd-92fe-998149cf68b2 if you are interested in sharing ideas for types of tools or questions, or feel like helping me to build this flowchart.

I haven’t built many flowcharts (as my example surely demonstrates) but I think that if it was possible to remove irrelevant options by clicking on sections, this could be achievable.

Is the technology worthwhile?

The second phase of this evaluation lets us look more closely at the features of a type of technology to determine whether it is worth pursuing. I would say that there are general criteria that will apply to any type of technology and there would also need to be specific criteria for the use case. (E.g. for my lecture clicker use case, it will need to support 350+ users – not all platforms/apps will do this but as long as some can, it should be considered suitable)

Within this there are also essential criteria and nice-to-have criteria. If a tool can’t meet the essential criteria then it isn’t fit for purpose, so I would say that a simple checklist should be sufficient as a tool will either meet a need or it won’t. (This stage may require some research and understanding of the available options first). This stage should also make it possible to compare different types of platforms/tools that could address the same educational needs. (In my case, for example, providing physical hardware based “clickers” vs using mobile/web based apps)

This checklist should address general needs – which I have broken down into student, teacher and organisational needs – that could be applied to any educational scenario. It should also include scenario-specific criteria.

Evaluating products
It’s hard to know exactly what the quality of the tool or the learning experiences will be. We need to make assumptions based on the information that is available. I would recommend some initial testing wherever possible.
I’m not convinced that it is possible to determine the quality of the learning outcomes from using the tool so I have excluded these from the rubric.
Some of the criteria could be applied to any educational technology and some are specifically relevant to the student response / clicker tool that I am investigating.


 

Lecture Response System pitch

This was slightly rushed but it does reflect the results of the actual evaluation that I have carried out into this technology so far. (I’m still waiting to have some questions answered from one of the products)

I have emphasised the learning needs that we identified, looked quickly at the key factors in the evaluation and then presented the main selling points of the particular tool. From there I would encourage the teacher/lecturer to speak further to me about the finer details of the tool and our implementation plan.

Any thoughts or feedback on this would be most welcome.

Edit: I’ve realised that I missed some of the questions – well kind of.
The biggest challenge will be how our network copes with 350+ people trying to connect to something at once. The internet and phone texting options are among the appealing parts of the tool in this regard, as they will hopefully reduce this number.

Awesome would look like large numbers of responses to poll questions and the lecturer being able to adjust their teaching style – either re-explaining a concept or moving to a new one – based on the student responses.


 

These are the documents from these two assignments.

  • Lecture Response Systems Pitch
  • Clickers EdTech Evaluation Rubric
  • Educational Technology Needs Survey
  • Colin Education Technology Checklist 2

Week 1 of the 11.133x MOOC – tick.

MOOCs often take a little while to ramp up but the MITx 11.133x Implementation yada yada – I’m going to stick with 11.133x for now, that’s a long title – MOOC feels like it’s just about right.

There has been a fair whack of the standard common sense in the videos so far – have a purpose, don’t choose the technology before you know what you want to do with it, stakeholders matter, etc. – but it has been well presented and by a range of different people.

There has probably been more emphasis than I would like on looking at ed tech for K-12 schools rather than higher education, but I guess that is a larger chunk of the audience. The ability to form/join affinity groups in the forums has at least let me connect with other uni people around the world.

In terms of practical activities, it has really REALLY helped to come to this MOOC with a project in mind. I’m looking for a live student response/feedback tool (most likely web/app based) that can be used in lectures (large lectures 350+) to poll students about their understanding of content.

This fed well into our first two activities, which involved looking at the context that this project will occur in and considering whether it sits in a general or specific domain and whether it will change procedure or instruction. (I’ll post my responses to both below)

Responding to other posts – you need to respond to at least three to complete the module – helps to clarify some of the concepts. I have a feeling that this isn’t a huge MOOC either – there aren’t hundreds of pages of responses in the forums to each question which is often kind of hellish to process.

Profile your implementation context

Target environment
I work in the College of Business and Economics in a leading Australian university. We’re relatively well resourced, so buying new tech generally isn’t an issue within reason, which allows me to focus on the suitability of the tool. We have large numbers of international students in our undergraduate cohort. The majority of students are comfortable with mobile and online technology. At an undergraduate level, the students tend to be young adults.

The college is comparatively conservative in some ways – although fortunately our leadership understands and supports the value of innovation. There is an emphasis placed on the seriousness and prestige of our brand that I need to factor into the look and feel of college associated tools.
There is a range of acceptance and engagement with learning technology from academics in the college, from enthusiasm to resistance to change. (Last week I had a long conversation with someone about why he still needs an overhead projector – we’re getting him one)
Our largest lecture theatres can hold up to 600 people (which is big for us) and the college wi-fi has recently been upgraded.

Key stakeholder

Recently one of our finance lecturers contacted me – I’m the learning technology person for the college – and asked what we have in the way of live student response/feedback systems. Tools that will enable her to post survey/understanding questions on screen during lectures and get real-time responses from students via mobile/web apps.

She is relatively new to the college and lectures to a group of 350+ students. (This is relatively large for us although some of our foundation subjects have 800+ students). She is keen to enhance the interactivity of her lectures but is also concerned about finding the right tool. She really doesn’t want any technology failures during her lectures as she believes that this will kill student trust in this kind of technology. She would also prefer not to trial multiple tools on her students as she is concerned that experimenting on students may diminish their educational experience.

Potential for the technology
There has been a lot of ongoing discussion at the university in recent years about the effectiveness of lectures. Attendance rates are around 30% in many disciplines, due to student work/life commitments, recording of lectures and a host of other reasons.

The lecture format itself is questioned; however, it is deeply ingrained in many parts of the culture, so finding ways to augment and enhance the lecture experience seems like a more effective approach.
Student response/feedback apps can be a powerful way to instantly track understanding and engagement of material in lectures and I am keen to see what we can do with it. While some students may feel confident to ask questions in a lecture, others may feel uncomfortable with this from cultural perspectives or due to English being a second language.

The lecturer has already been in contact with a supplier of a particular platform, however I have some reservations as on a preliminary investigation, their product appears to provide much more functionality than might be needed and may be unnecessarily complicated. However, I’m hoping that this MOOC will help me to work through this process.

Domain / Approach Chart

Socrative

This seems like a bit of a cop-out given that the example given was PollEverywhere, but if you check my previous post, you’ll see that I’m looking for a tool to use for live student feedback in lectures.

Socrative is one of several tools that I am considering to meet this need. It is a basic, online tool that enables a teacher to create a quiz/survey question, show it to the class through a data projector and then get the students to respond (generally via multiple choice) using an app on their phone or a web browser.

Of the ones that I’ve seen so far, it’s easy to set up and seems to work fairly well. (I blogged a comparison between it and Kahoot a while ago)

I’d say that it is Domain General because it can be used widely and it is more about changing an approach to procedure, because without it, a teacher could just ask for a show of hands instead. (This I think will get a better response though because it is less embarrassing)

My main concern with Socrative for my project is that the website says that it is best used with classes of 50 or less and I am looking for something that supports 350+

Wrapping up week 0 of the 11.133x MOOC

So far so good in the MITx 11.133x Implementation and Evaluation of Educational Technology MOOC – I’ve just finished Week 0 which I guess is like the O (Orientation) Week of MOOCs.

A bit of “here’s how to navigate the platform”, a bit of “introduce yourself” but also a dip of the toe into reflective practice with a small forum post about an experience we have had with educational technology. I dipped into a recent story about me designing a course in Moodle that I thought was cool but quickly realised that I had designed it for me and not the needs of my learners. We then made 3 responses to other posters.

The three required responses were simple enough but I couldn’t resist responding to an additional post that may not have made me so many friends. Tell me what you think – was I unreasonable?

This is the original post:

In 1989 I was teaching ethics at a liberal arts college in the U.S. A friend and colleague was teaching ethics the same semester at one of the military service academies in the U.S. We decided to create a Usenet newsgroup for our 2 sets of students and require them to interact with each other to collaborate on assignments and discuss readings.

It had mixed success. Many of the students had not previously used a computer and, worse, believed that they were entitled to take an ethics class without having to use a computer. So a lot of the students were grumpy and resentful about what they considered to be a frivolous, extraneous and irrelevant requirement (you can probably imagine what the teaching evaluations looked like). Some students really enjoyed it and became avid collaborators and participants, but others just groused all the way through it and gave us really bad teaching evaluations.

Our intention was to have our respective students explore various ethics topics with other students who were very different demographically from their classmates. It was also to get them to use computers to interact with others at a distance (remember — this pre-dated the World Wide web or Listserv, and Windows did not have much of a GUI, as I recall).

For those students who jumped in and ran with it, I think it went quite well. Unfortunately there was also a large number of students at both institutions who resisted it all the way, and that made for a difficult classroom dynamic for both of us. Many students were downright angry that they were required to use computers in an ethics class (they believed computers were only appropriate in math and science classes) and gave us really devastating teaching evaluations that, in part, led to both of our departures from our respective institutions.

There was a selection of short, sympathetic responses to this and then mine:

Thanks for sharing your story – sorry to hear that it pushed you away from that teaching job.

I’m going to go against the flow here a tiny bit and sympathise a little with your students. Was it clear in the course outline that there would be an online component? Did you explain to the students how using Usenet would enhance their learning experience? (Did it provide a better experience than could have been provided if the course was wholly face-to-face?)

Obviously I’m here in this MOOC because I believe that technology and digital literacy are vitally important in education but I also believe that the education part has to be the prime focus.

If students (as autonomous adult learners) were signing up for an ethics class and then were suddenly told that they needed to learn computer skills, I’m not surprised that they were unhappy. They were suddenly taken to a place where their ignorance was on public display and they had lost a degree of control of their education.

I have no doubt that you acted from the best of intentions but this story speaks to me a lot about the need to bring our students (or the teachers that we support in my case) along with us on the journey and that they have to believe that we are meeting their needs/interests foremost.

I think it’s all well and good to use technology in teaching and learning (obviously) but we need to be mindful about how much we are designing a course for ourselves vs our students.

Was I wrong?