Category Archives: education

Week 2 of the 11.133x MOOC – getting things done. (Gradually)

The second week (well fortnight really) of the 11.133x MOOC moved us on to developing some resources that will help us to evaluate education technology and make a selection.

Because I’m applying this to an actual project (two birds with one stone and all) at work, it took me a little longer than I’d hoped but I’m still keeping up. It was actually fairly enlightening because the tool that I had assumed we would end up using wasn’t the one that was shown to be the most appropriate for our needs. I was also able to develop a set of resources (and the start of a really horribly messy flowchart) that my team will be able to use for evaluating technology down the road.

I’m just going to copy/paste the posts that I made in the MOOC – with the tools – as I think they explain what I did better than glibly trying to rehash it here on the fly.


 

Four tools for identifying and evaluating educational technology

I’ve been caught up with work things this week so it’s taken me a little while to get back to this assignment, but I’m glad as it has enabled me to see the approaches that other people have taken and clarify my ideas a little.

My biggest challenge is that I started this MOOC with a fairly specific Ed Tech project in mind – identifying the best option in student lecture instant response systems. The assignment however asks us to consider tools that might support evaluating Ed Tech in broader terms and I can definitely see the value in this as well. This has started me thinking that there are actually several stages in this process that would probably be best supported by very different tools.

One thing that I have noticed (and disagreed with) in the approaches that some people have taken is that their tools seem to begin with the assumption that the type of technology has already been selected and then assess the educational/pedagogical strengths of that tool. This seems completely backwards to me as I would argue that we need to look at the educational need first and then try to map it to a type of technology.

In my case, the need/problem is that student engagement in lectures is low and a possible solution is that the lecturer/teacher would like to get better feedback about how much the students are understanding in real time so that she can adjust the content/delivery if needed.

Matching the educational need to the right tool

When I started working on this I thought that this process required three separate steps – a flowchart to point to suitable types of technology, a checklist to see whether it would be suitable and then a rubric to compare products.

As I developed these, I realised that we also need to clearly identify the teacher’s educational needs for the technology, so I have added a short survey about this here, at the beginning of this stage.

I also think that a flowchart (ideally interactive) could be a helpful tool in this stage of identifying technology. (There is a link to the beginning of the flowchart below)

I have been working on a model that covers 6 key areas of teaching and learning activity that I think could act as the starting point for this flowchart but I recognise that such a tool would require a huge amount of work so I have just started with an example of how this might look. (Given that I have already identified the type of tool that I’m looking at for my project, I’m going to focus more on the tool to select the specific application)

I also recognise that even for my scenario, the starting point could be Communication or Reflection/Feedback, so this could be a very messy and large tool.

The key activities of teaching/learning are:
• Sharing content
• Communication
• Managing students
• Assessment tasks
• Practical activities
• Reflection / Feedback

I have created a Padlet at http://padlet.com/gamerlearner/edTechFlowchart and a LucidChart at https://www.lucidchart.com/invitations/accept/6645af78-85fd-4dcd-92fe-998149cf68b2 if you are interested in sharing ideas for types of tools or questions, or feel like helping me to build this flowchart.

I haven’t built many flowcharts (as my example surely demonstrates) but I think that if it was possible to remove irrelevant options by clicking on sections, this could be achievable.
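Just to make the pruning idea more concrete, here is a minimal sketch (in Python, purely for illustration) of how the flowchart could work as a small decision tree that discards branches as questions are answered. The questions, thresholds and suggested tool types are placeholders I have invented for the example, not the content of my actual flowchart.

# A tiny decision-tree sketch of the interactive flowchart idea.
# All questions and suggestions below are hypothetical examples.

class Node:
    def __init__(self, question=None, yes=None, no=None, suggestion=None):
        self.question = question      # question to put to the teacher
        self.yes = yes                # branch to follow if the answer is yes
        self.no = no                  # branch to follow if the answer is no
        self.suggestion = suggestion  # leaf node: a suggested type of technology

# Example fragment starting from the Reflection / Feedback activity area
flowchart = Node(
    question="Do you need responses from students in real time?",
    yes=Node(
        question="Will more than 100 students respond at once?",
        yes=Node(suggestion="Large-scale student response system (clickers / web polling)"),
        no=Node(suggestion="Small-group polling or quiz tool"),
    ),
    no=Node(suggestion="Asynchronous feedback tool (survey, journal, forum)"),
)

def walk(node):
    # Ask each question in turn, discarding the branches that no longer apply
    while node.suggestion is None:
        answer = input(node.question + " (y/n) ").strip().lower()
        node = node.yes if answer.startswith("y") else node.no
    print("Suggested type of technology:", node.suggestion)

walk(flowchart)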

Is the technology worthwhile?

The second phase of this evaluation lets us look more closely at the features of a type of technology to determine whether it is worth pursuing. I would say that there are general criteria that will apply to any type of technology and there would also need to be specific criteria for the use case. (E.g. for my lecture clicker use case, it will need to support 350+ users – not all platforms/apps will do this but as long as some can, it should be considered suitable)

Within this there are also essential criteria and nice-to-have criteria. If a tool can’t meet the essential criteria then it isn’t fit for purpose, so I would say that a simple checklist should be sufficient as a tool will either meet a need or it won’t. (This stage may require some research and understanding of the available options first). This stage should also make it possible to compare different types of platforms/tools that could address the same educational needs. (In my case, for example, providing physical hardware based “clickers” vs using mobile/web based apps)

This checklist should address general needs which I have broken down by student, teacher and organisational needs that could be applied to any educational need. It should also include scenario specific criteria.
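As a rough illustration of how the checklist logic could be applied, here is a small Python sketch. The criteria are invented examples loosely based on my clicker scenario rather than the full checklist, grouped along the same student / teacher / organisational lines.

# Sketch of the checklist logic: a tool must meet every essential criterion
# to stay in contention; nice-to-have criteria only help to separate the
# tools that survive. The criteria below are illustrative examples only.

ESSENTIAL = {
    "student": ["Works on devices students already own"],
    "teacher": ["Shows responses in real time during the lecture"],
    "organisational": ["Supports 350+ simultaneous users"],
}

NICE_TO_HAVE = {
    "student": ["Allows anonymous responses"],
    "teacher": ["Exports results for later analysis"],
    "organisational": ["Integrates with the existing LMS"],
}

def check(tool_name, meets):
    # 'meets' is the set of criteria the tool satisfies, based on research or testing
    missing = [c for group in ESSENTIAL.values() for c in group if c not in meets]
    if missing:
        print(tool_name, "- not fit for purpose, missing essential criteria:", missing)
        return False
    extras = [c for group in NICE_TO_HAVE.values() for c in group if c in meets]
    print(tool_name, "- meets all essential criteria; nice-to-haves met:", extras)
    return True

# Hypothetical example tool
check("Tool A", {
    "Works on devices students already own",
    "Shows responses in real time during the lecture",
    "Supports 350+ simultaneous users",
    "Allows anonymous responses",
})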

Evaluating products

It’s hard to know exactly what the quality of the tool or the learning experiences will be. We need to make assumptions based on the information that is available. I would recommend some initial testing wherever possible.

I’m not convinced that it is possible to determine the quality of the learning outcomes from using the tool, so I have excluded these from the rubric.

Some of the criteria could be applied to any educational technology and some are specifically relevant to the student response / clicker tool that I am investigating.
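For what it’s worth, here is a small Python sketch of how the rubric ratings could be rolled up into a weighted score so that products can be compared side by side. The criteria, weights and ratings are placeholders, not the actual rubric.

# Sketch of turning rubric ratings into a weighted score for comparison.
# Criteria, weights and ratings below are illustrative placeholders.

RUBRIC_WEIGHTS = {
    "Ease of use for students": 3,
    "Ease of use for the lecturer": 3,
    "Cost per student": 2,
    "Question types supported": 2,
    "Reliability with 350+ users": 4,   # scenario-specific criterion
}

def score(ratings):
    # 'ratings' maps each criterion to a 1-4 rating from the evaluation
    return sum(RUBRIC_WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

# Hypothetical comparison of two products
product_a = {criterion: 3 for criterion in RUBRIC_WEIGHTS}
product_b = {**{criterion: 2 for criterion in RUBRIC_WEIGHTS}, "Reliability with 350+ users": 4}

print("Product A:", score(product_a))
print("Product B:", score(product_b))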


 

Lecture Response System pitch

This was slightly rushed but it does reflect the results of the actual evaluation that I have carried out into this technology so far. (I’m still waiting on answers to some questions about one of the products)

I have emphasised the learning needs that we identified, looked quickly at the key factors in the evaluation and then presented the main selling points of the particular tool. From there I would encourage the teacher/lecturer to speak further to me about the finer details of the tool and our implementation plan.

Any thoughts or feedback on this would be most welcome.

Edit: I’ve realised that I missed some of the questions – well kind of.
The biggest challenge will be how our network copes with 350+ people trying to connect to something at once. The internet and phone texting options were among the appealing aspects of the tool in this regard, as they will hopefully reduce this number.

Awesome would look like large numbers of responses to poll questions and the lecturer being able to adjust their teaching style – either re-explaining a concept or moving to a new one – based on the student responses.


 

These are the documents from these two assignments.

• Lecture Response Systems Pitch
• Clickers EdTech Evaluation Rubric
• Educational Technology Needs Survey
• Education Technology Checklist

Two great presentations from iMoot – Jim Judges and Gina Veliotis

The iMoot online Moodle conference was held recently and while the overall standard of presentations was quite high, there were two stand-outs for me.

Jim Judges from the University of Warwick in the UK was probably the presenter of the conference for me – he was eminently comfortable in the online presentation space and kept the audience engaged with his Confessions of a Moodle Trainer. It is a 50 minute recording but well worth the time.

Gina Veliotis from Sydney covered material that was fresher to me but gave me a lot to think about in her presentation on Design Thinking to create Moodle Magic. (Again, around 50 minutes but worth the time)

Try-a-tool challenge Week 3 – LessonPaths and Blendspace

This challenge is about a couple more content curation tools – LessonPaths and Blendspace.

At first glance they don’t seem as rich or interactive as TedEd but I’ll see what I’m able to put together with them.

Here is the explanatory video from EmergingEdTech.com about LessonPaths

In practice, LessonPaths was simple enough to use but not educationally inspiring. It lets you create – or rather curate – a playlist of online resources including weblinks, your own documents, HTML pages you create yourself and a basic true/false or multiple-choice quiz.

The weblinks are embedded within the tool (which I thought was frowned upon in web design terms), the documents are also embedded but sit quite nicely, the HTML editor is basic (allowing text and images), and the quiz has a nice interface but only provides the most basic of feedback (with no option for custom feedback).

There is an option provided to embed the lesson elsewhere but this just provides a sliderbox with links to each section that open a new window in the LessonPaths site.

In fairness, LessonPaths seems targeted more at a primary school level user and I’m looking at this from a higher ed perspective. While it is easy enough to use and visually acceptable, I don’t think it offers a particularly rich learning experience.

The lesson that I created can be found at http://www.lessonpaths.com/learn/i/adding-video-to-moodle/test-your-video-knowledge

This is the Blendspace overview from EmergingEdTech.com

Blendspace appears to come from more educationally minded developers – they are mindful of grading and tracking student progress and provide options to search a range of education focused sites in the tool. Ultimately it is still a content curation tool.

It does have some other nice features. You can add HTML source code to the webpages you create (I was able to embed the LessonPaths lesson that I just created into one section), you can link your Dropbox and Google Drive accounts to the tool, making it easy to import content from there, and Blendspace also provides a browser plugin that lets you bookmark URLs directly to your Blendspace account. You can also search Flickr, Educreations and Gooru (I’m not familiar with the last two) directly from the tool.

The interface is elegantly simple and very drag-and-drop oriented.

Users can create accounts either as teachers or students and teachers can generate course codes so that student progress (comments and likes/dislikes on resources and answers to quiz questions) can be tracked.

The resource that I created using Blendspace can be found at https://www.blendspace.com/lessons/-qyOYJGqrPOwiQ/uploading-grades-to-moodle

As I mentioned, both are relatively simple tools lacking deep interactivity but might be useful in creating more stimulating resource collections than a typical LMS file repository. In terms of understanding and supporting educators, Blendspace is streets ahead of LessonPaths.

 

Try-a-tool challenge Week 2 – ed.ted.com

The last couple of months have been a little hectic, with wrapping up one job and starting another (I’m now in the College of Business and Economics (CBE) at the Australian National University (ANU)), so I have some catching up to do with this challenge but I think I’m up to the task. (Even if they are currently on around Week 9?)

This challenge – from the EmergingEdTech.com blog – is about using the TedEd tools on the ed.ted.com website. (This is the same ted.com that hosts the TED talks)

Here is a quick 3 minute overview from Emerging Ed Tech that sums up the TedEd web tool quite nicely.

In a nutshell though, it’s an easy-to-use, web-based tool that enables teachers to create a small lesson driven by a YouTube video that can also include reflection/understanding questions, further resources and a discussion forum.

Students need to register to participate in activities (questions and discussion forum) but this means that the teacher is able to give them feedback and respond to their discussion posts.

The teacher is able to choose which of the Think / Dig Deeper / Discuss / And finally sections to include (the ability to reorder them might be nice but this is a minor quibble) and the whole lesson creation process only took me around 5 minutes.

(You can find the lesson that I created at http://ed.ted.com/on/4VwXnIwo )

The Think section supports either open-answer text or multiple-choice questions (up to 15), Dig Deeper offers a basic text editor with support for weblinks, and the Discuss forum is simple but cleanly designed and easy to use. It has no text formatting or options for attaching files – however, I was able to use HTML tags to format text and add an image. Entering a URL does automatically create a link, which is nice, and there are options to flag or upvote other posts.

TedEd also provides the requisite social media links, and lessons can be set to either public or privately listed (accessible only if you have the direct URL).

All in all this is a very nice, easy-to-use tool and I could see a range of uses for it. It would be possible to replicate this kind of resource using the existing tools in Moodle, though not as simply or cleanly. I would seriously consider having students use it to create their own resources for formative peer-teaching activities in a seminar-based approach.

Rejigging professional development training for teachers – Sarah Thorneycroft

Sarah Thorneycroft (@sthcrft) from the University of New England (Australia) often impresses with her thoughtful presentations about dragging academics and teaching staff into the 21st century when it comes to professional development in online teaching and learning.

This paper that she presented at Ascilite 2014 showcases her work in shifting from conventional workshops to webinars and “coffeecourses”. Teachers being teachers, the results were mixed (why can’t we learn about online teaching in a face-to-face workshop?) but the signs are encouraging nonetheless.

The video runs to 19:23 but is well worth checking out.

Final thoughts on DDLR / DDeLR

It feels like I’ve been banging on about this subject forever now but with assessments now finalised, it seems like a good time for a final wrap up.

In broad terms, I was a little disappointed with my students. It might have been a bad time of year to run this subject, with its demanding workload, but the majority of them seem to have only put in the absolute barest effort needed to pass. Assessment instructions which I thought were pretty clear weren’t followed and most of the reflections lacked any great insight. I had to ask many of them to rework and resubmit their assessments just to meet the minimum requirements.

What this does make me ask is whether this is the fault of my students or me.

As I haven’t taught formal classes for more than a decade, there are a lot of things that I haven’t had to deal with in teaching an ongoing subject with rigorous formal assessment. I also have a tendency at times to over-complicate things because it just seems like it makes them better. This probably also extends to my communication with my students and my expectations of them.

Fortunately, I am still keen to try this again.

Even during the marking process, as I had to walk away from the computer and swear at the walls, I was constantly reshaping the course structure, the assessments and the class activities in my mind to help avoid some of the issues that were arising. The fact that a handful of the “good” students were able to understand and follow my instructions suggests that I’m on the right track at least and am not entirely to blame but the fact that more than a few got things quite wrong does tell me that there is more work to be done.

I need to make it clearer that when students are creating draft learning resources, they actually need to be resources – things, objects – rather than broad and loose activity plans for a class. I need to explain clearly that the final learning resources should be the same as the draft learning resources but improved based on testing and feedback. To be honest, these things seemed so self-evident to me that I couldn’t conceive of anyone not getting it but there we are.

I tried to put into practice a number of ideas that I’ve encountered in the education design community about getting students more involved in designing parts of their own assessments but this really just confused more people than it helped. (Which was a shame as I do believe that it is a valid and valuable approach)

I tried to give my learners freedom to follow their particular learning needs and interests but for the most part this ended up just giving them the opportunity to follow the path of least resistance and allowed for some fairly lazy work. I also should’ve factored into my thinking that the first week of a class is often going to be plagued by technical (logins not working) and administrative hassles, and made allowances for this by not expecting too much work to be achieved in the first week. (That said, we have a strong need to demonstrate engagement in class activities to receive funding for students that later drop out, and I was certainly able to prove that)

I think next time around there will need to be a little less freedom, a bit more structure and a lot more clarity and simplicity.

On the whole I am happy that I have managed to get these teachers doing things they haven’t done before and I think they have developed useful skills and knowledge. I’d just like to do more.

Delivering DDLR & DDeLR: Reflections

Life got quite busy in the last few weeks, so screenface had to go on the back-burner for a little while. I think it’s worth taking a look at what happened with the DDLR & DDeLR (Design & Develop Learning/eLearning Resources) subjects and what I might do with them next time.

What happened?

The majority of teachers taking the DDLR subjects have a reasonable expectation that this is a class where they will be able to develop some rich skills in using our eLearning platform to make new things for their students.

The units and elements of competency however are heavily focused on a design and development process for learning resources. The assessments for the teachers (who are the students in this case) hinge on providing evidence that they have considered the characteristics of their cohorts and mapped out a plan for whatever resource they are building. (This should include documenting necessary materials, sources of support and risk planning for possible contingencies that may arise). They then need to create the resource, test it with peers or students and make refinements to it before final implementation.

All in all, sensible practice and (I assume) something that most teachers already do as a matter of course in their teaching practice. (Whether or not they formally name the steps in the process is another matter)

What the units and elements of competency don’t particularly care about is what the teachers learn about in terms of usability, readability, general design principles and, of course, the use of a range of new technological tools to get it all done. (Which is what they are most interested in addressing)

So we already have tensions built into the subject in the conflict between what the teachers want and need and what they have to demonstrate and be assessed on.

While we started with a full house of 14 people on the first day, numbers quickly dwindled to a dedicated core of 6–7. (A number of factors came into play here, including personal issues for a couple of the cohort and running this subject at the very end of semester, when these teachers are themselves inundated with their own grading and teaching responsibilities)

For those that remained, we were able to provide what I hope was an engaging range of activities and training in design principles for usability, copyright and the use of our eLearning platform. (I was well supported by a member of my team – Jo – who also kindly filled in for me when I was away).

Assessment items have been slow to come in – possibly due to the onerous nature of evidence requirements for the subjects. Learners are required to provide 4 draft learning resources (with accompanying design documents and student group profiles) of which 2 are then tested and refined into final learning resources.

I tried to streamline this process by having the class work on a draft learning resource in the first week – a checklist that might be used to test the quality of their other learning resources. There has been a fair amount of confusion about this and I need to consider whether it is worth trying again, and also how I go about explaining the concept.

The idea was to get the class thinking about important qualities in their learning resources and also to get some more buy-in in their own assessments, by effectively designing part of their grading tool. (This is not a graded subject but my intention was that by having them use their learning resource checker on their other resources, they would be more mindful of issues relating to pedagogy, content and technology.)

What have I learned?

I need to lower my expectations of what can be achieved in the first lesson. We were beset with technical and enrolment questions that disrupted my carefully planned series of tasks and activities.

I had also put too much faith in the technical skills of the cohort and their ability to effectively use our LMS. I tried to do too many clever things – setting up conditional release on activities so that the learners could only access certain activities or resources after completing others.

I didn’t provide sufficient information about how the class might submit assessment items that were built within their own development courses in our LMS. The assessments were set up as an assignment dropbox to receive files. I ended up telling people to create a Word document with some screenshots and a link to the resources that they had created, but this should have been explicitly stated in the assessment instructions.

I am happy that I was able to be flexible enough with the course to ask the learners what tools they were most interested in learning about and to reshape the course to accommodate this. A core principle of adult learning is that adults need to see the value in what they are being taught, and this was an easy way to achieve that.

I’ve been able to speak to the previous teacher of this subject and she also struggled with a number of these issues – hopefully input from a wider group of colleagues might offer some solutions.

 

 

 

Designing DDLR & DDeLR – (Over)Structuring activities

I’ve been a bit caught up preparing for this course and consequently this post has been sitting in the draft section for a while now. I ran the first class last Friday (17/10) and it seems like a good idea to share some reflections.

I’m going to leave the pre-class post up as an interesting contrast.

(Before running the class)

As I continue to work on the Design and Develop Learning Resources and Design and Develop eLearning Resources subject (can anyone explain why an eLearning resource should not just be folded into an expanded definition of Learning resource?), I am now at the point where I need to work out what we will do each week.

Previous work on this has led to the development – well, adaptation really – of an assessment structure that should hopefully work well. I’m trying to incorporate as much assessment into in-class activities as possible and also get the learners to take ownership of some of their assessment by having them design the assessment criteria (while still ensuring that all the necessary assessment items are addressed). This also lets us get a flying start on the process of learning about designing and developing resources by working together on one in class. I’m thinking that using a TPACK (Technology, Pedagogy and Content Knowledge) framework to evaluate learning resources seems like a solid base at this point.

The course as it has been delivered previously seems like a very rich opportunity for our teachers to learn about using our LMS (Moodle – called eLearn here) but the more I look at the elements of competency, the more I have to wonder how relevant some of the material really is. Refocusing the course on designing and developing learning resources will have to be a priority. Topics on designing assessments and forum activities and using our learning object repository are undoubtedly valuable but not relevant in this specific instance.

(After the class)

One of the things about having a more theoretical approach to teaching is that it can be very easy to get excited about trying a load of new things and using a lot of ed tech (Moodle to be precise) without really thinking through the limitations of the class.

I spend a lot of time researching approaches to teaching with technology and providing 1-to-1 support for teachers at their desks. I also run semi-regular workshops for small groups of teachers about using specific tools. What I haven’t done is taught a full subject in a proper class setting over a number of weeks – well not in the last ten years anyway.

The first week is always going to be a little bumpy – learners turning up to class who haven’t enrolled yet (or properly) and thus have no access to our eLearning platform. The other thing I sometimes forget – but really shouldn’t – is that few teachers have the same level of skill, enthusiasm or experience in using our LMS as I do. So designing the Week 1 lesson as primarily a series of sequential activities in Moodle is probably not the ideal approach. Actually, there’s no probably about it.

Furthermore, getting learners to use new online tools that seem perfectly straightforward (Padlet) can and will take much longer than anticipated.

On top of this, I decided that it would be fun to try to gamify the course. Not hugely but using a 12 sided die to randomise the process of calling on learners to answer questions and making use of the activity restriction function in Moodle (you can’t see one activity until you complete the previous one) really does complicate an already messy session unnecessarily.

Something else that I’d decided (based on sound pedagogical principles) was that getting the students to create a resource that can be used to identify criteria in their assessment would be a useful way to engage them with the content and get them to think more meaningfully about what is important in designing and developing learning resources. On reflection, I guess creating a resource that can be used to measure the quality of other created resources gets a little meta and might be overly complicated. I should’ve also considered that these teachers would be far more interested in developing workable resources for their own students and not for themselves and their classmates.

All in all, I think I tried to do too much, too cleverly and expected far more of the students than I should’ve. I should’ve made more allowances for lower levels of e-learning and digital literacy and factored in the necessary messiness of getting everyone started.

So now I need to simplify and streamline this course. Several of the activities were successful and we did have a reasonably meaningful and deep discussion about what is important to consider in the process of designing learning resources, so I don’t consider the class to be a total wash. We also were able to identify specific learning resources that the students are interested in learning about – several of which (marking rubrics) were nowhere on my list of things to cover in this course.

So it’s back to it, I guess.

 

Designing DDLR – More work on assessment

The focus of this project on designing the Design & Develop Learning Resources course is now on pinning down the assessments. J’s assessments for DDLR 3&4 seem strong but I just want to see whether it’s possible to streamline them slightly – largely to allow learners to knock over the analysis (and design) components quickly. (Given that they should presumably have a decent idea of what their students are already like and already design resources with this in mind)

After a couple of hours of looking over this, I’m wondering whether it mightn’t have been better to try to write up my own assessment ideas first and then look at J’s for additional inspiration. It’s quite difficult to look past the solid work that has already been done. I’m still mindful of the fact that the amount of documenting and reporting seems a little high and am trying to find ways to reduce this while still ensuring that the learner addresses all of the elements of competency.

One of the bigger hurdles I face with this combined subject is that the elements of the units of competency are similar but not the same. For the analysis and design sections, they match up fairly well, with only mild changes in phrasing but the development, implementation and evaluation components start to differ more significantly. Broadly speaking, both of these units of competency appear to be targeted more at freelance education designers than practicing teachers – the emphasis on talking to the client and checking designs with the client (when the teacher would clearly be their own client) requires some potentially unnecessary busy work for the teacher wanting to be deemed competent here.

I’ve tried to address the differences between the elements of competency by clustering them with loosely matching ones from the other unit of competency in this fairly scrappy looking document. I’ve also highlighted phrases that look more like deliverable items.

[Image: document listing elements of competency]

This made it much easier to look over the existing assessment documents and resources to firstly check that all of the elements were addressed and secondly to feel confident that I am sufficiently across what is required in this subject.

Broadly speaking, the existing assessment items cover these elements of competency pretty well; I only needed to add a few extra questions to the design document template to address some aspects that learners might otherwise overlook.

These questions are:

  • How does the learning resource address the element or unit of competency?
  • What equipment, time and materials will you need to develop your learning resource?
  • Where will you source content for your learning resource?
  • Who can/will you contact for support in developing your resource?
  • How will you review your work as it progresses?
  • Describe the type of learning design that your learning resource uses

So as it stands, I think I’ll be largely sticking to the existing assessment plan with only a few minor changes. (Largely because my predecessor knows her stuff, which has been tremendously helpful). I am still keen to find ways to address as much of this assessment as possible in class activities – being mindful of the fact that learners may not make every class and there needs to be a certain amount of flexibility.

Overall though – and clearly the dates will need to be changed – this is what the assessments look like.

[Image: assessment document]

The next step is to update the subject guide and add my amendments to the existing documents. I do also need to devise a marking guide for the learning resources themselves – there is something appealing in the idea of having the learners create this as one of their draft resources as the unit of competency document does stretch to define learning resources as including assessment resources too. This seems like a great opportunity to get the learners thinking more critically about what makes a good learning resource.