Thoughts on: Project Management in Instructional Design (Allen, 2020)

Reading this thesis was valuable because it showed how stark the difference can be between two theses' approaches. Where Amparo took a very qualitative, narrative-driven approach, going narrow but deep with three people and case studies, Allen goes shallow but broad with quantitative research into the most valued project management competencies for instructional designers (IDs).

Allen takes a deep dive into a comparatively rich pool of literature relating to the project management skills that best serve IDs, generating a comprehensive literature review that sums up the last decade or so of research in this space very effectively. She builds on it by conducting her own two-stage survey of 86 IDs across a range of sectors (Higher Ed, corporate, ID project team leaders) to gather some rich quantitative data.

Nothing overly surprising emerges in terms of the favoured competencies, with the data largely aligning with the studies that had come before (but at a larger scale), but it did spark a few thoughts for me about my own work. Some of the competencies across the literature I personally found a little nebulous – things like “attention to detail”, which are certainly valuable professional attributes but, coming from a competency-based education background, I was curious about how they might be meaningfully measured or taught. This brought me back to realising that I need to think carefully about what practices, attributes and competencies mean in the data I am gathering.

The painstaking detail in the writing about the work undertaken, from the lit review to the data collection and analysis, offered a useful benchmark for my own future writing.

It was also useful to scan the references and find a few promising leads that I’ve previously missed. These include:

Kenny, J. (2004). A study of educational technology project management in Australian universities. Australasian Journal of Educational Technology, 20(3), 388–404.

Allen, M. (1996). A profile of instructional designers in Australia. Distance Education, 17(1), 7–32.

I don’t think I had thought to search for IDs in the Australian literature.

Allen, S. A. (2020). Project management in instructional design. Franklin University.

Thoughts on: Three Blended Librarians’ Narratives on Developing Professional Identity (Amparo, 2020)

When I started my PhD, a common piece of advice was to read some other people’s theses to better understand how they work and what might be expected. I glanced at a couple but couldn’t find any that seemed particularly relevant so I moved on to other things. I wish I’d searched a little harder because recently I’ve come across quite a few that have been immensely helpful. In the four that I’ve looked at, I’ve found new theoretical frameworks and ideas and some descriptions of methodology that have helped a few things click into place.

The first of these is from Adonis Amparo, of the University of South Florida. While it focuses on a group of people that I’m not covering in my own research (blended librarians), the challenges they face and the work they do align nicely with the edvisors that I’m looking at. To paraphrase, Blended Librarians are librarians whose work includes the role of instructional technologists. Based on the description in the dissertation, I take this to equate to educational technologists in the Australian context.

Amparo, also a blended librarian, uses a mixture of autoethnography and ethnography in three case studies of himself and two others working in these roles. Additionally, he uses a Narrative Research approach, which makes use of something called “wonderments” instead of conventional research questions to create a little extra space to play.

“Wonderments allow for exploration in research, whereas research questions provide a more limited frame. In narrative research, narrativists design their questions around one or several “wonders” or “wonderments” rather than devise ‘A priori’ research questions (Clandinin, 2016). This allows for “a sense of a search, a ‘re-search’, a searching again,”…”a sense of continual reformulation” (Clandinin & Connelly, 2000, p. 124).”

Amparo, 2020

I’m not altogether sure what the difference is or how this works or even if it suits the direction that I’m currently taking but I do like the broad idea of it.

There were a few other ideas that grabbed my attention. Apparently there is a concept of “identity stretch” in Celia Whitchurch’s seminal 2008 work on the Third Space in Higher Ed that I have completely missed until now, and Amparo has a nice line when he says:

As with any new position, the role must be created from the institutional space provided

Amparo, 2020

Another potentially valuable find was the use of Role Theory. According to Amparo, “Researchers use Role Theory to explain social interactions built on behavioral expectations and social positions defined by these behaviors (Biddle, 1986)”. This is a concept dating back to the 1950s in social psychology, meaning that there has been plenty of time for a backlash, but Amparo seems to navigate the criticisms of Role Theory well enough to extract some useful insights. Given that my work leans heavily on status and perceptions in institutions that seem tied to roles, I have to wonder whether there is something here of value to me as well. It may be that my own use of Social Practice theory might knit with some of these ideas. At the very least, it seems to have some potential. The terminology alone, which includes role strain and role ambiguity, seems relevant.

The second lens employed by Amparo is Identity and Social Identity Theory, which again is something new to me but which seems to offer some promise in terms of considering how edvisors develop confidence in their ability to ‘edvise’ academics.

A final point of interest in this work (spoilers) is that the three Blended Librarians examined all seem to develop or arrive at their professional identities from relatively different perspectives: one from the objects they create, another from the work relationships they develop, and the last from the service they provide to students.

Definitely worth a read if you’re working or researching in this space.

Amparo, A. (2020). Three Blended Librarians’ Narratives on Developing Professional Identities [Ph.D., University of South Florida]. http://search.proquest.com/docview/2470896827/abstract/C115552888794A40PQ/1

Research update #60: Previously on Colin’s PhD

Looking back, it appears that it’s been 6 months since my last confession – um, post.

A few things have happened since then – less tangible progress than I would’ve liked, but at the same time I feel that I’ve unravelled a few knots that had been troubling me and set the stage to get things done in 2021. (Sure, why not tempt fate, what could possibly go wrong with that?)

It’s basically impossible at this point not to talk about COVID-19, as much as I’m sick of the sound of that term. It’s had a handful of notable impacts on my research, in that one of my underlying assumptions – that people don’t know much about edvisors and what we do – has shifted, as more academics than ever have been forced to engage with us in the rapid shift to online teaching and learning. This still doesn’t necessarily mean that they have a full or correct understanding of what we do, but overall awareness at least has changed. At the same time, the shift has also had an impact on the way edvisors work with academics at scale, so my ideas about (collaborative) working relationships need to be reframed and reconsidered.

The loss of the international students that underpinned university finances has also had a significant impact on budgets and staffing levels. People in edvisor roles have perhaps been safer than some, but I still have a number of friends and colleagues who have borne the brunt of cuts and restructures, and there remains a nervous instability in many institutions. Speaking more selfishly, I think this will mean that institutions will likely be less willing to share information about edvisor numbers, roles and unit structures than they might previously have been.

On a more positive note, I was fortunate to add an additional supervisor to my team, Dr Jess Frawley, who also works in an edvisor role and has been invaluable in providing some new insights into this work. One big breakthrough discussion with the whole supervision team has led to me rescoping this project from my initial “boil the ocean” idea of getting insights from edvisors, academics and institutional leaders to a somewhat more manageable and realistic focus just on edvisors. (I can save the latter two for post-doc work maybe – but one step at a time.)

I also learnt a lot about going through the ethics process last year – doing so another two times for minor changes to my methodology. I suspect that the less detail about the nuts and bolts that is included, the better. In my case, I’d initially said that I’d put out a call for survey participants and that they would need to contact me before I’d give them a link to the survey. On reflection, I felt sure that this would have significantly reduced engagement, so I needed to submit a modification to fix this. Live and learn.

I’m now reading a bunch of highly relevant theses that all seemed to hit Google Scholar within a few days of each other and trying to fight off the urge to radically upend all my plans for something completely new. Stay tuned for a post or two about what I’ve taken from these soon.

I finally also got around to engaging more with my PhD peers in the lab at my school. COVID-19 has probably been a blessing in that regard as it has meant that there has been much more web based activity to support this group that I’ve been able to participate in. This can be a very lonely endeavour and I really do value the conversations I have both with my TELedvisors friends and study peers.

Thoughts on: 2020 vision: What happens next in education technology research in Australia (Thompson & Lodge, 2020)

The latest issue of AJET (Australasian Journal of Educational Technology) opens with an editorial from two people whose work in the space of TEL I’ve found of interest over the years – Kate Thompson (QUT) and Jason Lodge (UQ).

My entry to this editorial was via a local Higher Ed daily newsletter, the Campus Morning Mail. The title of the entry for it was “For on-line to work, ask the ed-tech experts”. Leaving aside the strange hyphenation of online, this headline led me down the page to see exactly who these ‘ed-tech experts’ are. Apparently the only experts are ed tech researchers. (There is a passing reference to education technologists in the abstract, but just one.)

I tweeted a few first-glance responses – looking back, I think they were relatively innocuous.

This was enough to spark some wide-ranging discussions. I think the main issue ultimately was my suggestion that researchers often don’t take a wide or holistic enough view of ed tech and the ed tech ecosystem in institutions (as far as practical implementation goes) and that much of this research is relatively abstract and lab based. Maybe this is slightly unfair but, as someone whose job it is to stay current on ed tech and TEL, I stand by it overall, while recognising that my tweets may have lacked the nuance I intended.

So let me explain what my concerns are and what I mean.

I believe that discussions and decisions around technologies with a pedagogical focus need to address practical questions of how they can actually be implemented in a contemporary institution in a way that has significance and meaningful impact.

This is often (not always, but frequently) where the thought about the intervention ends. We end up with conclusions along the lines of ‘within the confines of the theoretical framework and recognising that further research is necessary, it appears that ePortfolios benefit learning because of x, y and z. More institutions should implement ePortfolios in contexts a, b and c’. This, to me, is abstract because while it is important to have this understanding, it almost never offers a path towards this imagined implementation. There’s a big gap between “should” and “will”.

The process of making meaningful change happen at scale in a Higher Education institution can be an arduous one, shaped by many valid and real factors that seem to be waved away as the domain of uninformed “decision-makers”, “policymakers”, “economists”, “self-promoters” and “aspiring international keynoters”. The lack of regard in this editorial for anyone who is not an educational researcher clangs loudly against the repeated question of why education researchers don’t play a larger part in the decision-making process.

As an education technologist, I recognise myself as one of these ‘others’. My colleagues in learning (etc.) design and academic development areas are, I would suggest, in the same position. We possess significant expertise that comes from the varied pathways we took into this field, as well as from the practical work we do day in and day out supporting teaching and learning in practice across many educators, disciplines and situations. We are frequently the bridge between many parts of the organisation – teaching and non-teaching – which gives us rare insights into the bigger picture. As professional staff, however, we tend to be excluded from undertaking research and contributing to the literature.

What I’d love to see are three things:

  • Meaningful, respectful conversations between education researchers and edvisors to foster understanding of what each other does and contributes
  • Genuine research collaborations between education researchers and edvisors
  • Greater use of relevant, evidence-based research in institutional operations.

I bump into the frustrations of people in institutions about the pace of change or progress on implementations on a daily basis. I know how easy it can be to attribute these to personal motives rather than deal with the reality of complex systems – I’ve done it myself in the past, to my embarrassment now. The best way forward in my view is with more mutual understanding and respect.

Finding common ground, a small Rant

Photo by Andrea Piacquadio on Pexels.com

Those who know me will know that the edvisor community is a big deal for me. (If you don’t know the term, I mean, collectively, education technologists, learning designers, academic developers and people in those kinds of Third Space roles.)

We face a number of challenges on a daily basis in being heard and having our experience and expertise recognised by those people that we try to help to do teaching and learning better. I caught up with a number of colleagues for a semi-informal chat recently about ways that we might collaborate more effectively in terms of the resources and training that we provide in our different faculties and centrally.

I’d like to make clear that individually I like and respect the people that were in the conversation. It was a combination of learning/education designers (instructional designers, whatever – insert your preferred term here) and education technologists. Mostly learning designers though. And that’s where the fun started.

Now, these are some of my theories about how universities work and their problems. They are a bit untested, and hopefully some testing will come out of my PhD research. I don’t actually think they are particularly controversial. Essentially there is a prestige hierarchy of knowledge in higher ed: Discipline > Pedagogy > Technology. People may downplay this, but at times there can be a deep-seated belief amongst learning/education designers that people on the technology side only ever talk about which buttons to push. This can occasionally come across as an attitude that unless you are a real education/learning designer, your pedagogical understanding is minimal. And if an academic should happen to come to you with a technological question rather than a purely pedagogical one, they might as well have defiled the graves of your ancestors*.

Let me divert for a moment to my primitive understanding of practice theory, where a practice is composed of three elements – the material (the things you need to do the practice), competencies (the knowledge you need) and the cultural (the social context in which it occurs). These may not be the official terms, but let’s roll with the broad concept because that is more important right now. I would argue that if you don’t have an understanding of all three, you probably don’t know enough about the practice to advise others about it well.

My second theory about higher ed is that many academics feel that they are expected to have pedagogical expertise (alongside their discipline knowledge) because they are working in a role where they are (usually) expected to be able to teach. One of our challenges as edvisors, then, is that we, as people who are not working in teaching roles, are not seen as people to go to for pedagogical advice. (Also, asking for pedagogical advice is to admit to a lack of knowledge, and higher ed is a place where your knowledge is your power and your currency.) This does vary between disciplines, depending on how confident people feel in their identity as a discipline expert. (Medical educators seem to be more open than many academics to receiving advice about pedagogy.) This isn’t a universal rule and some academics are perfectly comfortable trying to develop themselves as educators, but, anecdotally at least, many academics engaging in pedagogically oriented professional development do so mostly because it is a mandated part of promotion or career progression.

Asking for technological advice, however, is easier because nobody will judge you for that. My personal experience is that academics are more open and honest about their skill gaps in these kinds of workshops, even their pedagogical gaps, because expectations of them are lower. Maybe this is just my approach, but as an educational technologist I see an opportunity to bundle pedagogical thinking with discussion of the technology. They are all part of the one practice, after all.

What works for me doesn’t work for everyone, of course, and might not even be the right solution. (Assuming there is only one right answer to the question of how edvisors can lead educators to the water of better learning and teaching and get them to drink.)

I mentioned the word ‘training’ earlier. In our wide-ranging discussion about how we (education technologists and learning designers) collectively educate educators, I referred to this work as training. One of the learning designers leapt upon this to point out that the work I do is basically a behaviourist, push-this-button push-that-button, pedagogy-free zone, whereas their ‘workshops’ are richer. Rather than focus on the idea, they fixated on the semantics – the specific presentation of a form of the idea. (I have a separate post coming about form vs content.) When I pointed out that I felt there was a certain amount of snobbishness in the way technology vs pedagogy is seen and discussed in our work, there was a defensive bustle of ‘no, we love technology’, but I don’t think I got my point across.

I do also recognise that sometimes we have emotional reactions alongside rational ones. Both are a part of life, but it can take a bit of sifting to know whether you are in the right. Then again, being factually right isn’t always the only thing that matters. As a community of practitioners who struggle to be heard and recognised, it’s important that we can also hear and recognise our colleagues in the different roles of our discipline. Feeling disrespected, I believe, underpins many of the dumb, unproductive tensions and simmering conflicts in our environment.

Ultimately, I would say that collectively our job is to improve learning and teaching, by whatever means necessary. Putting ourselves into tiny silos and refusing to engage with an educator when they come to us with a question because ‘that’s not my job’ is bad practice, IMHO. If you legitimately can’t answer the question, sure, help them by directing them to someone that can but don’t miss the opportunity to build a relationship of trust with someone because you feel that they didn’t respect your primary focus. Also, for the love of God, let’s not set up an ‘us and them’ culture between pedagogists and technologists – that doesn’t help anybody.

Anyway, maybe we need to start by considering what our common ground is and working our way out from there. Remembering that we are all messy and complex and see a range of paths to the promised land is probably a good first step.

Thank you for indulging in my therapy session.

* I do want to acknowledge that I think it is more of a philosophical approach than anything else. There can be valid reasons, it’s just not my personal style.

Thoughts on: Five misunderstandings about case-study research (Flyvbjerg, 2006)

One of the things that I’ve noticed as I explore the scholarly world is that there appear to be as many different ways to do research as there are researchers. Every time I’ve discussed my research with someone, they seem to have had a different take on the best way to do it. This, I guess, comes down to their experiences and how they would do it if it was their project, based on their way of seeing the world and the knowledge within it. It shouldn’t surprise me then that, as people make these approaches and paradigms part of their identity, they can get strangely passionate and maybe even political about ‘the right way to do things’. (Not everyone, mind you, but more than a few.)

Which brings us to Flyvbjerg and his take on the value of case studies in qualitative research. Rather than simply talking through the nature and merits of the case study as a way of understanding something, the author positions it against common criticisms of this form of research. Kind of a Mythbusters for qualitative research, I guess.

To be frank, I’m still getting my head around what research is, so rather than follow him down this rabbit-hole in depth, I’m just going to share the parts that stood out the most and got me thinking about what I want to do. A significant part of the thrust of the paper seems to lie in whether we can be confident that a case study tells us something meaningful about the world. He comes back several times to a larger philosophical tension between case studies and larger-scale quantitative research that seeks to prove a hypothesis or demonstrate the existence of things that in combination add up to something meaningful.

In addition, from both an understanding-oriented and an action-oriented perspective, it is often more important to clarify the deeper causes behind a given problem and its consequences than to describe the symptoms of the problem and how frequently they occur. Random samples emphasizing representativeness will seldom be able to produce this kind of insight; it is more appropriate to select some few cases chosen for their validity. (p.229)

For me, the main points of contention are: Is this simply a one-off outlier that you are describing, or is this a situation that is likely to be seen repeatedly? (Generalisability.) What does the fact that the researcher chose this particular case to study mean in terms of its independence or representativeness? (Verification bias.) Is it possible to extract meaningful truths from this story? (Ability to summarise findings.)

Generalisability

Flyvbjerg contends that looking at one case can indeed tell us a lot. The idea of falsification is, in essence, that it only takes one example that contradicts a stated belief to change that idea.

The case study is ideal for generalizing using the type of test that Karl Popper (1959) called “falsification,” which in social science forms part of critical reflexivity. Falsification is one of the most rigorous tests to which a scientific proposition can be subjected: If just one observation does not fit with the proposition, it is considered not valid generally and must therefore be either revised or rejected. Popper himself used the now famous example “all swans are white” and proposed that just one observation of a single black swan would falsify this proposition and in this way have general significance and stimulate further investigations and theory building. The case study is well suited for identifying “black swans” because of its in-depth approach: What appears to be “white” often turns out on closer examination to be “black.” (pp. 227–228)

Verification bias

In some ways, the other side of this is what we learn when the things that we didn’t expect to happen, do. Flyvbjerg seems to feel that this is a fairly compelling counter to the idea that researchers conducting case studies choose the cases that are most likely to match their hypotheses, noting that we learn much more when the unexpected occurs.

A model example of a “least likely” case is Robert Michels’s (1962) classical study of oligarchy in organizations. By choosing a horizontally structured grassroots organization with strong democratic ideals—that is, a type of organization with an especially low probability of being oligarchical—Michels could test the universality of the oligarchy thesis; that is, “If this organization is oligarchic, so are most others.” A corresponding model example of a “most likely” case is W. F. Whyte’s (1943) study of a Boston slum neighborhood, which according to existing theory, should have exhibited social disorganization but in fact, showed quite the opposite. (p. 231)

Summarising findings

Life is complex and not everything can necessarily be boiled down to basic truths. Flyvbjerg largely rejects the position that this is a weakness of case studies, instead valuing ambiguity:

The goal is not to make the case study be all things to all people. The goal is to allow the study to be different things to different people. I try to achieve this by describing the case with so many facets—like life itself—that different readers may be attracted, or repelled, by different things in the case. Readers are not pointed down any one theoretical path or given the impression that truth might lie at the end of such a path. Readers will have to discover their own path and truth inside the case. Thus, in addition to the interpretations of case actors and case narrators, readers are invited to decide the meaning of the case and to interrogate actors’ and narrators’ interpretations to answer that categorical question of any case study, “What is this case a case of?” (p. 238)

I’m not sure that this level of ambiguity sits comfortably with me but I can see value in the case study as a whole. In terms of my own work, there’s a final additional quote that I like that speaks to the idea of research undertaken by practitioners – something I have noticed as somewhat of a gap when it comes to research about edvisors.

Here, too, this difference between large samples and single cases can be understood in terms of the phenomenology for human learning discussed above. If one, thus, assumes that the goal of the researcher’s work is to understand and learn about the phenomena being studied, then research is simply a form of learning. If one assumes that research, like other learning processes, can be described by the phenomenology for human learning, it then becomes clear that the most advanced form of understanding is achieved when researchers place themselves within the context being studied. Only in this way can researchers understand the viewpoints and the behavior, which characterizes social actors. Relevant to this point, Giddens (1982) stated that valid descriptions of social activities presume that researchers possess those skills necessary to participate in the activities described:

“I have accepted that it is right to say that the condition of generating descriptions of social activity is being able in principle to participate in it. It involves “mutual knowledge,” shared by observer and participants whose action constitutes and reconstitutes the social world.” (Giddens, 1982, p. 15)

(p. 236)

Flyvbjerg, B. (2006). Five Misunderstandings About Case-Study Research. Qualitative Inquiry, 12(2), 219–245. https://doi.org/10.1177/1077800405284363

Research update #59: I’m back – what did I miss?

Photo by Andrea Piacquadio from Pexels

I took a little time off – as it appears many of my fellow candidates are – due to the plague and the impact it is having on, well, everything. Work in the online education space has been frantic and it seemed like a good time not to try to do too much.

One thing that I’m very conscious of now is the fact that the role and value (or at least, hopefully, perceptions of the value) of edvisors have changed. I know this will impact what I’m looking at, but it’s not really clear yet how. Academics are absolutely far more aware that we exist and largely seem to be appreciative of this fact. What does this mean for my main research question?

What strategies are used in HE to promote understanding of the roles and demonstrate the value of edvisors among academic staff and more broadly within the institution?

To be honest, I’ve been thinking for a while now that this isn’t the right question anyway. It doesn’t explain why I’m doing this research (the problem), and it moves straight into looking for a narrow set of solutions to an assumed problem – that academics and management don’t know what edvisors do or what they contribute. It also assumes that edvisors and edvisor units have the time, energy, skill or political capital to develop and implement formal strategies to address this.

The heart of the issue is really, to put it plainly: why don’t people respect our skills, experience and knowledge and take our advice seriously? Which seems possibly a bit pointed or needy as a research question, but that’s not hard to tweak. So this is something that I’m thinking seriously about at the moment.

Something else is the fact that I’ve never been entirely happy with my methodology. Unfortunately, as someone who hasn’t done a lot of research before – at least at this scale – I’m dealing with a lot of unknown unknowns. How much data do you need for a good thesis? People have said to me recently that the best PhD is a done one, so maybe the question is just how much data do you need for a thesis – but I feel like if I’m putting in the time, it needs to be good.

Generally my approach when faced with a big project is to gather up everything that seems to have some value and throw it at the wall to see what sticks. Then it is just a gradual process of filtering and refining. The problem is that the scope of “everything” has expanded to cover edvisors across three roles, academics and leaders in potentially 40+ universities around Australia, as well as policy documents, job ads and position descriptions, organisational structures and whatever else crops up along the way. Given my ties to the TELedvisors community, I’d hope that this group will also play a substantial part in what I’m doing.

But maybe this can be done more cleverly.

Could there be enough material just in the edvisor community? Even in the TELedvisors community? (486 members and counting.) I’d long felt that case studies were an interesting way to tell a story but lacked something authoritative. But I’ve been reading Five misunderstandings about case-study research by Flyvbjerg (2006) and I’m starting to see the possibilities. (I think I’ll do a separate post about this.)

If the world’s going to change, I might as well join in.

Flyvbjerg, B. (2006). Five Misunderstandings About Case-Study Research. Qualitative Inquiry, 12(2), 219–245. https://doi.org/10.1177/1077800405284363

Research update #58: I’m ethical

There’s a lot about the PhD experience that I find quite daunting – I don’t think I’m alone in this – but the administration side seems particularly nerve-wracking. Getting accepted, getting my thesis proposal accepted, getting ethics clearance: they all speak directly to imposter syndrome. Most of the time this just burbles away happily in a small dark corner in the back of my mind, tempered largely by knowing that it is just part of scholarly culture and pretty much universal. Hearing the many, many stories of others in my position through online communities and blogs like The Thesis Whisperer has been hugely helpful in understanding that this is just part of the process.

All the same, having to pass these institutional hurdles for the first time still brings it to the fore. This is when the faint nagging doubt comes into the light, because there will be proof, one way or the other, that I belong or I don’t.

Happily, I do. (At least for now)

HREC came back to me on Thursday to say that they are happy with the extra information that I provided in response to their questions and it is time to move forward.

Given the current flurry of activity in universities in responding to the challenges of the COVID-19 coronavirus, I have a feeling that this may not be the optimal time for me to be asking for the time of people in roles like mine. It’s a fascinating time to be supporting learning and teaching in Higher Education in Australia, particularly given how many of our students now come from (and are still stuck in) China. Being on the ground in institutions that are sometimes seen as slow movers when it comes to learning and teaching change, and seeing them take rapid and decisive action at scale in seriously embracing TEL, is pretty exciting. There will be a lot to say and learn when the dust finally settles – whenever that might be.

Suffice to say, I’m mentally factoring in longer than normal response times for surveys and interviews. (Not that I know what the norms are anyway, but you know.) At least contacting uni HR teams for more generic data about numbers and titles should hopefully be less dramatic.

I can’t remember what I’ve mentioned before about my methodology, but in this first phase I’ll be seeking to survey and interview edvisors about a range of things relating to professional identity and perceptions of their place and value in institutions. I know essentially nothing about what to do in terms of wrangling this data and turning it into a story – I know there will be coding and NVivo involved for the qualitative responses – but I’m looking forward to learning it.

It also occurred to me the other day that there is a great deal to be learned about what people think edvisors are and do from successful and failed job applications. The ethics around accessing the latter in particular seems like a massive swamp but it’s something I’ll think about for later.

Image by Peggy und Marco Lachmann-Anke from Pixabay

Research update #57: Curly questions in ethics

I heard back about my ethics application a few weeks back – it’s mostly fine but there is a big question that I need to respond to before I can go ahead. It’s essentially to do with whether the institution or individuals in the institution are the real participants.

I want to work with key informants in edvisor roles in most (ideally all) of the universities in Australia to learn about their perceptions and experiences in these roles. That’s the easy bit. I also want to gather some rich empirical data about the numbers of people in these roles, both in central and faculty (and other?) teams, and how these teams are structured. That’s the hard part.

The ethics committee wants to know what I am going to do in terms of getting permission from the institution to collect this data. In hindsight, this is clearly something I should have given more thought to in the research design. While to me, this data doesn’t seem particularly sensitive, there’s all manner of university politics and other sensitivities surrounding this, apparently.

My feeling is that for this data to be truly meaningful, it needs to reflect all the universities. Otherwise it is just an average or an estimate. (Which is what most of the existing research I’ve found provides.) So what happens if some institutions don’t want to share? (I don’t really expect that to be the case but people being people, who can say?)

The logistics of obtaining permission are another challenge. Am I looking at one person in the institution (maybe a DVCA – but really I have no idea), or do I need to clear this with them and with leaders in each individual faculty? Assuming six faculties per institution on average across 40-odd universities, that’s something like 280 people. Clearly this isn’t practical.

There are a few things I’m going to follow up that will hopefully shed light on this. The Council of Australasian Leaders of Learning and Teaching (CAULLT) recently released a very useful environmental scan of professional learning in HE that captured exactly some of this data – though only in central teams, from what I can tell. Hopefully the report’s author, Kym Fraser, can offer some advice on what they did in terms of permissions.

There are also statutory reporting requirements relating to staffing numbers that HE institutions in Australia must report to the government, which might demonstrate that permission isn’t needed. From what I’ve seen so far, though, this data doesn’t go into the level of detail that I need and probably doesn’t cover organisational structures either. Most unis have Business Intelligence units that manage this kind of data – more so for internal use – so I’m also going to chat to them. I don’t think they will be able to make a call on permission, but they may have a better idea of where to go next.

Another significant question that the ethics committee has thrown up is whether universities will have issues with their staff spending a few hours as a key informant on work that is outside their ordinary duties. I really have no answer to this – though I do wonder whether this question would have been asked if it was academic staff that I was planning to work with. (I probably won’t say that in my response.) It does bring me back to the seeking-permission question/dilemma.

Have you had any experience with these kinds of questions? Got any tips?

Research update #56: Tying theory to methodology

While I’m waiting for faculty approval to submit to university ethics, I have time to consider some of my bigger questions sitting in the ‘later pile’. A big one relates to how (if?) my theoretical framework relates to my methodology in a meaningful way. There are a couple of theories that I’m drawing on for this research, though to be honest I’m not sure how officially ‘theoretical’ they are.

There’s work by Whitchurch and others about the Third Space as it relates to Higher Education, the liminal space between admin and academic that edvisors occupy. And there’s work relating to Social Practice theory by Shove and others that I feel may be helpful in defining the different kinds of edvisors by the work they do. It may also reveal something about how we/they work with academics and management in terms of the ways practices are disseminated and evolve. This seems to cross over into the realm of change management, which I seem to be hearing a lot about recently in this space and which perhaps seems like a useful angle to take, strategically. (Truth be told though, I think that too much weight is probably given to change and not enough to the maintenance and sustainability of existing good learning and teaching practices, so who knows where I’ll land on that.)

There are a couple of concerns that I have: are the theories that I’m looking at robust enough to inform the research that I’m doing? Are they even really theories, as such? Shouldn’t they be providing me with some ideas about how I should be designing my research data collection? To date, I’ve largely assumed that they will come to the fore when I eventually get onto data analysis and trying to make some meaning from the things I’ve collected.

Nobody seems to be jumping up and down about this though – which has become my default indicator of whether I’m going horribly wrong – so I guess I’ll just keep meandering along. I have now reached out to a couple of academics in business faculties with an interest in the way organisations work, because I have a strong feeling that this is an important factor in successful edvisor/academic/management collaboration, but I have no idea what language I need to describe this or what models or frameworks will best help to understand it. I’ve mentioned before that one of the things I like about doing this PhD is the opportunities it creates to reach out to people who have done interesting work and who, for the most part, seem willing to share their expertise.

This stands in sharp contrast to a comment yesterday from one of the academics on my progress review panel. I asked whether my blogging here, as a way of getting my ideas straight, might prove problematic down the road with my thesis – i.e. are there risks of being pinged for self-plagiarism or something? I’m pretty sure that my writing style here is far more casual than my academic writing style, but we do also have go-to turns of phrase and words that we favour. (I know I really overuse ‘particularly’, ‘however’, ‘interesting’, and a few others, but I struggle to find replacements that feel as much like me.) Anyway, the academic seemed just as concerned about people stealing my ideas. I guess it was nice that someone thinks I might have ideas worth stealing but, given that my entire aim with this research (as far as I know currently) is to improve and change practices and relationships with edvisors, I’m mostly of the opinion that I want my ideas to circulate and evolve. But maybe I’m naive.

Anyway, more things to think about.