SOCRMx Week #8: The End

Well, I probably said all that I needed to say about my general feelings on this MOOC in my last post, so this is largely for the sake of completion. The final week of this course is a peer-assessed piece of writing analysing the methods used in a sample paper. Turns out that I missed the deadline to write that – I may even have been working on my Week 7 post when that deadline fell – so this appears to be the end of the road for me. I could still go through and do the work, but I found the supplied paper unrelated to my research and it used methodologies that I have little interest in. The overall questions raised and the things to be mindful of in the assessment instructions are enough:

  • What method of analysis was used?
  • How was the chosen method of analysis appropriate to the data?
  • What other kinds of analysis might have been used?
  • How was the analysis designed? Is the design clearly described? What were its strengths and weaknesses?
  • What kind of issues or problems might one identify with the analysis?
  • What are the key findings and conclusions, and how are they justified through the chosen analysis techniques?

And so with that, I guess I’m done with SOCRMx. In spite of my disengagement with the community, the resources and the structure really have been of a high standard and, more importantly, incredibly timely for me. As someone returning to study after some time away, without ever really having had a formal research focus, I’ve found there’s a lot of assumed knowledge about research methodology, so having this opportunity to get a bird’s-eye view of the various options was ideal. I know I still have a long way to go but this has been a nice push in the right direction.

 

SOCRMx Week #7: Qualitative analysis

I’m nearly at the end of Week #8 in the Social Research Methods MOOC and while I’m still finding it informative, I’ve kind of stopped caring. The lack of community, and particularly of engagement from the teachers, has really sucked the joy out of this one for me. If the content wasn’t highly relevant, I’d have left long ago. And I’ll admit, I haven’t been posting the wonderfully detailed and thoughtful kind of posts on the forum or in the assigned work that the other 5 or so active participants have been doing, but I’ve been contributing in a way that supports my own learning. I suspect the issue is that this is being run as a formal unit in a degree program and I’m not one of those students. Maybe it’s that I chose not to fork over the money for a verified certificate. Either way, it’s been an unwelcoming experience overall. When I compare it to the MITx MOOC I did a couple of years ago on Implementing Education Technology, it’s chalk and cheese. Maybe it’s a question of having a critical mass of active participants, who knows. But as I say, at least the content has been exactly what I’ve needed at this juncture of my journey in learning to be a researcher.

This week the focus was on Qualitative Analysis, which is where I suspect I’ll be spending a good amount of my time in the future. One of my interesting realisations early on in this, though, was that I’ve already tried to ‘cross the streams’ of qual and quant analysis this year when I had my first attempt at conducting a thematic analysis of job ads for edvisors. I was trying to identify specific practices and tie them to particular job titles in an attempt to clarify what these roles were largely seen to be doing. So there was coding, because clearly not every ad was going to say research; some might say ‘stay abreast of current and emerging trends’ and others might ask the edvisor to ‘evaluate current platforms’. Whether or not that sat in “research” perfectly is a matter for discussion but I guess that’s a plus of the fuzzy nature of qualitative data, where data is more free to be about the vibe.

But then I somehow ended up applying numbers to the practices as they sat in each job ad more holistically, in an attempt to place them on a spectrum between pedagogical (1) and technological (10). Which kind of worked, in that it gave me some richer data that I could use to plot the roles on a scattergraph, but I wouldn’t be confident that this methodology would stand up to great scrutiny yet. Now, just because I was using numbers doesn’t mean that it was quantitative, but it still feels like some kind of weird fusion of the two (there’s a rough sketch of what I mean below). I’m sure that I’ll find any number of examples of this in practice but I haven’t seen much of it so far. I guess it was mainly nice to be able to put a name to what I’d done. To be honest, as I was initially doing it, I assumed that there was probably a name for what I was doing and appropriate academic language surrounding it; I just didn’t happen to know what that was.
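
Purely to make that concrete for future-me, here’s a rough sketch of the kind of coding-plus-scoring I mean. The titles, coded practices and scores are all invented for illustration – this isn’t my actual data or a settled method, just the general shape of it in Python:

```python
# Hypothetical sketch: code practices per ad, score each ad on a pedagogical(1)-technological(10)
# spectrum, then plot the roles. All values invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

ads = pd.DataFrame([
    {"title": "Learning Designer",     "practices": ["research", "design", "advise"],     "ped_tech": 3},
    {"title": "Learning Technologist", "practices": ["support", "evaluate platforms"],    "ped_tech": 8},
    {"title": "Academic Developer",    "practices": ["research", "training", "advocate"], "ped_tech": 2},
])

ads["n_practices"] = ads["practices"].apply(len)  # a crude second dimension for the scatter

ax = ads.plot.scatter(x="ped_tech", y="n_practices")
for _, row in ads.iterrows():
    ax.annotate(row["title"], (row["ped_tech"], row["n_practices"]))
ax.set_xlabel("pedagogical (1) - technological (10)")
ax.set_ylabel("number of coded practices")
plt.show()
```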

I mentioned earlier that qualitative analysis can be somewhat ‘fuzzier’ than quantitative and there was a significant chunk of discussion at the beginning of this week’s resources about that. Overall I got the feeling that there was a degree of defensiveness, with the main issue being that the language and ideas used in quantitative research are far more positivist in nature – epistemologically speaking (I totally just added that because I like that I know this now) – and are perhaps easier to justify and use to validate the data. You get cold hard figures and if you did this the right way, someone else should be able to do exactly the same thing.

An attempt to map some of those quantitative qualities to the qualitative domain was somewhat pooh-poohed because it was seen as missing the added nuance present in qualitative research, or something – it was a little unclear really but I guess I’ll need to learn to at least talk the talk. It partly felt like tribalism or a turf war but I’m sure that there’s more to it than that. I guess it’s grounded in a fairly profoundly different way of seeing the world and particularly of seeing ‘knowing’. On the one side we have a pretty straightforward set of questions dealing with objective, measurable reality and on the other we have people digging into perspectives and perceptions of that reality and questioning whether we can ever know or say if any of them are absolutely right.

Long story short, there’s probably much more contextualisation/framing involved in the way you analyse qual data and how you share the story that you think it tells. Your own perceptions and how they may have shaped this story also play a far more substantial part. The processes that you undertook – including member checking, where you ask your subjects to evaluate your analysis of their interviews to ensure that your take reflects theirs – also play a significant role in making your work defensible.

The section on coding seemed particularly relevant so I’ll quote it directly:

Codes, in qualitative data analysis, are tags that are applied to sections of data. Often done using qualitative data analysis software such as Nvivo or Dedoose.

Codes can overlap, and a section of an interview transcript (for example) can be labeled with more than one code. A code is usually a keyword or words that represent the content of the section in some way: a concept, an emotion, a type of language use (like a metaphor), a theme.

Coding is always, inevitably, an interpretive process, and the researcher has to decide what is relevant, what constitutes a theme and how it connects to relevant ideas or theories, and discuss their implications.

Here’s an example provided by Jen Ross, of a list of codes for a project of hers about online reflective practice in higher education. These codes all relate to the idea of reflection as “discipline” – a core idea in the research:

  • academic discourse
  • developing boundaries
  • ensuring standards
  • flexibility
  • habit
  • how professionals practice
  • institutional factors
  • self assessment

Jen says: These codes, like many in qualitative projects, emerged and were refined during the process of reading the data closely. However, as the codes emerged, I also used the theoretical concepts I was working with to organise and categorise them. The overall theme of “discipline”, therefore, came from a combination of the data and the theory.

https://courses.edx.org/courses/course-v1:EdinburghX+SOCRMx+3T2017/courseware/f41baffef9c14ff488165814baeffdbb/23bec3f689e24100964f23aa3ca6ee03/?child=last

I already mentioned that I undertook a thematic analysis of a range of job ads, which could be considered “across-case” coding. This is in comparison to “within-case” coding, where one undertakes narrative analysis by digging down into one particular resource or story. This involves “tagging each part of the narrative to show how it unfolds, or coding certain kinds of language use”, while thematic analysis is about coding common elements that emerge while looking at many things. In the practical exercise – I didn’t do it because time is getting away from me, but I read the blog posts of those who did – a repeated observation was that in this thematic analysis, they would often create/discover a new code halfway through and then have to go back to the start to see if and where it appeared in the preceding resources.

On a side note, the practical activity did look quite interesting: it involved looking over a collection of hypothetical future reflections from school leavers in the UK in the late 1970s. They were asked to write a brief story from the perspective of themselves 40 years in the future, on the cusp of retirement, describing the life they had lived. Purely as a snapshot into the past, it is really worth a look for a revealing exploration of how some people saw life and success back in the day. Most of the stories are only a paragraph or two.

https://discover.ukdataservice.ac.uk/QualiBank/?f=CollectionTitle_School%20Leavers%20Study

And once again, there were a bunch of useful-looking resources for further reading about qualitative analysis:

  • Baptiste, I. (2001). Qualitative Data Analysis: Common Phases, Strategic Differences. Forum: Qualitative Social Research, 2/3. http://www.qualitative-research.net/index.php/fqs/article/view/917/2002
  • Markham, A. (2017). Reflexivity for interpretive researchers http://annettemarkham.com/2017/02/reflexivity-for-interpretive-researchers/
  • ModU (2016). How to Know You Are Coding Correctly: Qualitative Research Methods. Duke University’s Social Science Research Unit. https://www.youtube.com/watch?v=iL7Ww5kpnIM
  • Riessman, C.K. (2008). ‘Thematic Analysis’ [Chapter 3 preview] in Narrative Methods for the Human Sciences. SAGE Publishing https://uk.sagepub.com/en-gb/eur/narrative-methods-for-the-human-sciences/book226139#preview Sage Research Methods Database
  • Sandelowski, M. and Barroso, J. (2002). Reading Qualitative Studies. International Journal of Qualitative Methods, 1/1. https://journals.library.ualberta.ca/ijqm/index.php/IJQM/article/view/4615
  • Samsi, K. (2012). Critical appraisal of qualitative research. Kings College London. https://www.kcl.ac.uk/sspp/policy-institute/scwru/pubs/2012/conf/samsi26jul12.pdf
  • Taylor, C and Gibbs, G R (2010) How and what to code. Online QDA Web Site, http://onlineqda.hud.ac.uk/Intro_QDA/how_what_to_code.php
  • Trochim, W. (2006). Qualitative Validity. https://www.socialresearchmethods.net/kb/qualval.php

Research update #36: Playing well with others

The nature of my research topic, with a focus on the status of professional staff in an academic world, feels risky at times. While I know that academic staff occupy edvisor roles as well, I have a feeling that I’ll be digging into sensitive areas around the academic/professional divide that often seem to be swept under the carpet because they raise uncomfortable questions about privilege and class in the academy and some entrenched beliefs about what makes academics special. It would be incredibly presumptuous for me to think that my ideas are all necessarily right, and the point of research is to put them to the test and see where they take me, but there’s a fair chance that some of what I’m going to have to say won’t always be well received by some of the people that I work with and who pay me. The other big issue is whether, if my findings demonstrate a blind spot among academics towards professional staff, those same academics responsible for assessing my research will see the value in my work.

Fortunately at this stage I don’t have my heart set on a career as an academic – I really do like doing what I do – but it seems imprudent to prematurely cut one’s options. I am conscious that I need to be more researcherly or scholarly in the language that I use in this space. I sent out a slightly provocative tweet yesterday, prompted by a separate (joke) tweet that I saw which said that the fastest way to assemble a bibliography was to publicly bemoan the lack of research in topic x. 

After 36 hours I’ve had no literature recommended but a university Pro Vice-Chancellor replied suggesting a collaboration on this area of mutual interest. Which surprised and flattered me greatly, considering that I was concerned that I’d come across as a little bolshie in my questions. Maybe it’s wrong of me to see academics as some kind of monolithic whole.

Maybe the trick is to just worry less and be honest. You can’t please everyone and if you can stand behind your work, maybe that’s enough.

I’m not sure. We seem to live in incredibly sensitive times.

 

 

Week #6 SOCRMx – Quantitative analysis

This section of the SOCRMx MOOC offers a fair introduction to statistics and the analysis of quantitative data. At least, enough to get a grasp on what is needed to get meaningful data and what it looks like when statistics are misused or misrepresented. (This bit in particular should be a core unit in the mandatory media and information literacy training that everyone has to take in my imaginary ideal world.)

The more I think about my research, the more likely I think it is to be primarily qualitative but I can still see the value in proper methodology for processing the quant data that will help to contextualise the rest. I took some scattered notes that I’ll leave here to refer back to down the road.

Good books to consider – Charles Wheelan: Naked Statistics: Stripping the Dread from the Data (2014) & Daniel Levitin: A Field Guide to Lies and Statistics: A Neuroscientist on How to Make Sense of a Complex World (2016)

Mean / Median / Mode

Mean – straightforward average.

Median – put all the results in a line and choose the one in the middle. (Better for average incomes as high-earners distort the figures)

Mode – the value (or bin, for grouped data) that occurs most often
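
Just so future-me remembers what these actually do, here’s a tiny sketch in Python with made-up incomes (nothing to do with any real dataset):

```python
# Made-up incomes to show how the three 'averages' differ.
from statistics import mean, median, mode

incomes = [32_000, 38_000, 41_000, 41_000, 45_000, 52_000, 300_000]

print(mean(incomes))    # ~78,428 - dragged way up by the single high earner
print(median(incomes))  # 41,000 - the middle value, a better 'typical' income here
print(mode(incomes))    # 41,000 - the value that appears most often
```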

Student’s t-test – a method for interpreting what can be extrapolated from a small sample of data. It is the primary way to understand the likely error of an estimate, depending on your sample size.

It is the source of the concept of “statistical significance.”

A p-value is a probability. It is a measure summarising the incompatibility between a particular set of data and a proposed model for the data (the null hypothesis). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366529/

“a significance level is an indication of the probability of an observed result occurring by chance under the null hypothesis; so the more you repeat an experiment, the higher the probability you will see a statistically significant result.”
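
To make that a bit less abstract for myself, here’s a minimal sketch (invented numbers, not from the course) of running a t-test in Python – scipy does the formula and hands back the p-value:

```python
# Do two small groups differ on some score? Entirely invented data.
from scipy import stats

group_a = [72, 75, 78, 71, 74, 77, 73]
group_b = [70, 69, 74, 68, 72, 71, 70]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(t_stat, p_value)  # by convention, p < 0.05 gets called 'statistically significant'
```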

Overall, this entire domain is one where I think I’m only really going to appreciate the core concepts when I have a specific need for them. The idea of a distribution curve where the mean of all data points sits at the high point and standard deviations (determined by a formula) show where the majority of the other data points fall seems potentially useful but, again, until I can practically apply it to a problem it stays just tantalisingly beyond my grasp.
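
That said, one thing I could check for myself with simulated data is the claim that roughly 68% of normally distributed data sits within one standard deviation of the mean (and about 95% within two). A throwaway sketch, not anything from the course:

```python
# Simulate a normal distribution and count how much of it falls within 1 and 2 SDs of the mean.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=100, scale=15, size=100_000)  # invented: mean 100, SD 15

mean, sd = data.mean(), data.std()
print(np.mean(np.abs(data - mean) < sd))      # ~0.68
print(np.mean(np.abs(data - mean) < 2 * sd))  # ~0.95
```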

Thoughts on: Agency and stewardship in academic development: the problem of speaking truth to power (Peseta, 2014)

In some ways this is a ‘thoughts on thoughts on’ as I’m writing about Tai Peseta’s summary reflection at the end of a special issue of the International Journal of Academic Development focusing on the politics of academic development. Specifically, it asked writers to respond to this theme:

amid the array of contested and politically difficult agendas, how do academic developers enact and imagine a future for themselves (and the profession) in ways that recognise and take seriously the business of their own political power, and in particular, their responsibility to speak truth to power (p.65)

I’ve been going to IJAD a lot in my reading because, of the three roles that I consider to be the main edvisor roles – academic developer, education designer and learning technologist – it is academic developers who appear to dominate the research space. Which does make me wonder whether it is a role that is more dominated by people in academic (rather than professional) positions than the other two. Something I’ll be keeping an eye on.

The more time I spend looking at this particular role-type, the more I’m seeing the terms academic developer and educational developer used interchangeably, which doesn’t help my current line of thinking about education designers/developers primarily as people working with academics to design and build learning resources and online course sites. However, it does fortunately still work with my other idea that titles in the edvisor domain are all over the shop.

Anyway, much of this is by the by. Peseta elegantly ties together the core ideas of five papers about academic developer practice across Europe, Canada and Australia into a wider discussion about how much power or influence ADs can or should exert in their institutions. The broad tone is that this power is far more than I have personally seen but she does note that there can often be a tendency in these kinds of papers to be slightly celebratory and overstate things. 

A second reading however is that while the collaboration portrayed in this account contains all the hallmarks of a cautious victory narrative, there remains an underlying question about the possible kinds of representation of academic development initiatives. In reflecting on our modes of justification, I find myself asking who is offering this story? How is the discursive field organised to enable this particular account of it? My goal is not to be cynical but rather to open up the spaces and meanings that illustrate the spectacle of academic development’s political power (p.67)

This mention of cynicism in particular brings me to what I found to be one of the most interesting parts of the author’s reflection. I must confess that in working in an environment where cynicism seemingly abounds, it is easy to travel down the same path. When mystifying decisions are handed down from on high with minimal or laughable consultation, information is fearfully hoarded by people that lack the capacity to use it well and there is a generally pervasive belief that most people don’t care about teaching and learning (vs research), it can seem like a natural progression to simply go with the cynical flow. Fortunately my job leads me more often than not to those people who do care about education and who are capable, so this at least tempers those inclinations.

It was revealing to see today, in the results of the National Tertiary Education Union survey of 13,500 university workers, that only 27% expressed confidence in the people who run their various institutions. Sadly, cynicism is clearly the dominant culture. When we get to this state, I suspect that our ability to understand and empathise with the people that we work with suffers, and the cycle only worsens. Peseta discusses the Polish study in this issue, where educational reform leaders described three institutional responses to change and characterised academics variously as:

…traditionalists, individualists, unaware, in pain, irrational, lazy, or inert. Each of these three logics permeates the policies of academic development in different ways with different reasons and leads to any number of reactions about the merits of institutional initiatives: pernicious, naive, neutral, welcome, celebratory and necessary. What is to be (or has been) our response to the contradictory reactions about our work as academic developers? What conceptual tools are at our disposal to understand the origins of these perceptions and to see arguments about them as a necessary part of an academic developer’s political repertoire. (p.67-68) 

 

There are some big ideas to unpack in this. The educational reform leaders in this study may well be right in their summary of many of the academics that they have tried to work with but they may equally have misunderstood what has led to these behaviours. They may be grossly oversimplifying the nature of their academics, which is a human thing to do when we find ourselves in opposition to someone who doesn’t share our vision. Their rejection of this vision then calls our own abilities into question and so rather than interrogate those, it’s far more comforting to attribute resistance to lesser personal qualities. (Which isn’t to say that they can’t be present as well, just to complicate matters).

At the heart of these issues (for ADs), I would suggest, is the triangular relationship between institutional management, academics and academic developers. ADs are routinely forced into a position where they are tasked with effectively driving compliance with institutional policies and initiatives by offering training in ‘doing things the new/right way’, or with trying to advocate best practices to the powers that be. This, to me, seems to be the issue of where and whether ADs should assert their political power. When things take the former route:

Too heavy an emphasis on compliance without critical engagement leads to dull, bureaucratic box-ticking, and effectively hollows out academic development of its intellectual contribution. Similarly, accepting and lamenting resistance without considered debate or challenges entrenches tradition unthinkingly. Although both positions are productive and necessary for academic development to flourish as a critical encounter, they each contain an uneasy energy characteristic of Di Napoli’s (2014) agonistic spaces. Yet it is in precisely these spaces that academic developers realise and grasp the power they have to form and practise their judgement, developing a feel for the game and what it means to be in it. In these spaces, the question which usually lurks is ‘what do I do with the power and influence I have?’ (p.66)

This is also perhaps where Peseta and I diverge a little – and I’ll readily accept that my experience in Higher Ed is limited to one institution – but, as a professional staff member, I’ve never had a feeling of any political power. This may simply be a reflection of my particular context or my lack of experience in politicking, but the fact that the author and most of the authors of the papers in the special issue do feel that they have some degree of power has to make me wonder if ‘it’s not you, it’s me’. So this in itself has been something of a breakthrough in some ways and is giving me a lot to consider.

The author and the authors of the papers in the special issue spell out a number of strategic approaches to developing and exercising their power that are worth exploring. Many of them seem highly valuable but a handful I’d question.

From them we learn something about how teaching and learning issues unfold into urgent institutional problems; we develop an insight into the different ways academic developers read the rhythms of their contexts, draw on research, assemble arguments, and galvanise people and resources to reformulate and address the challenges before them. Most importantly, we get a sense of how a particular course of action is justified and argued for over others (p.67)

This to me positions ADs as providers of frank and fearless advice that draws on scholarly practices that senior academics and institutional management (generally the same thing) are more likely to respond to. It puts advocacy front and centre (alongside research) as a key practice of ADs. This is something that I’ve rarely seen specifically listed in job advertisements and position descriptions for these kinds of roles, although maybe it sits under ‘advise’. This certainly lends weight to my feeling that Peseta and the other authors largely see AD roles as being occupied by academics. This is extended in the discussion of the Norwegian paper

… we are privy to the insights of a very experienced group of academic developers and this shows in several ways: in their description of the political context and their participation in it; in their deployment of expertise (institutional know-how and educational research); their sense of what to argue for and what to withdraw from; and more generally, in the way they understand the possibilities and limits of academic development (through their choice of a sense-making framework: discursive institutionalism). This piece really shines when the sense-making apparatus kicks in: levels of ideas (policy, programme and philosophy); types of discourses (coordinative and communicative); and types of ideas (cognitive and normative)… It seems to me that one of the compelling lessons from this paper is about inducting academic developers into the scholarship of the field as an opportunity to debate and defend a set of views about higher education (p.68) (emphasis mine)

This quote leaves me a little unclear as to whether Peseta is suggesting that ADs should be inducted into the scholarship of the discipline being taught or into broader scholarship about teaching and learning. (That’ll teach me to only read a summary of a paper and not the paper itself. Fear not, it’s on the long list). One question or idea that has come up a number of times in discussions within the TELedvisor community is whether academics need to better understand what edvisors do, but I can see a strong case for going the other way. (Even when we assume that we know). If it is about delving into disciplinary scholarship (e.g. microeconomics) I’m less convinced, not least because of the sheer feasibility of it all. Maybe being able to ask questions about approaches to teaching and learning that align better with disciplinary practices and scholarship is a practical middle ground.

Moving on to the study in the special issue by Debowski, Peseta notes a different strategic approach being taken by Australian ADs.

We find an Australian academic development scene keen on a model of partnership with its political allies: from external quality agencies to teaching and learning funding bodies. The politicisation is plausible enough but the distributed nature of the political game carries noteworthy and worrying epistemological effects. The first is that the job of academic development shifts to one of ‘translation’ and ‘implementation’, suggesting in part that the intellectual puzzles of learning and teaching in higher education have more or less been settled. Moreover the thorny and substantial issue of what (and whose) knowledge is being ‘translated’ and ‘implemented’ is left unattended. A second effect of tying oneself too closely to the external political game is that it can divert attention away from a commitment to the project of knowledge-making. (p.68)

Part of me has to wonder whether this different approach – between Norway and Australia – is reflective of national cultural characteristics or if it is simply a matter of the specific examples being examined. If my feeling that ADs don’t carry a lot of power in Australia is widely true, it would make more sense to lean on other authorities to help get things done.

Peseta draws her reflection to a close by reasonably asking

whether academic developers are eager to imagine themselves in the role of steward, where there is a job to be done in caring for the field – its history, ethics and politics – in ways that are future looking. It does seem to me that a condition of scholarship lies in academic developers’ disposition to scholarliness and scholarship, as well as a desire to know and immerse themselves in the peculiarities that comprise the field. If we are to better support academic developers in navigating the messy politics of the agency game, then we need more occasions to dispute, debate and deliberate on what it is that we offer learning and teaching in higher education. We need occasions to test our politics with others in and outside of the field. (p.69)

I would love to see this happening but having had a taste of institutional and academic culture where this absolutely does not happen, I can completely understand ADs wanting this but choosing to spare themselves from banging their heads against a brick wall. (And I thought I was going to be less cynical in this post). Maybe banging our heads against walls is a necessary part of a practice though.

I’ll wrap this post up with one more quote that I want to include but couldn’t find a way to fit into the discussion. I’ll certainly be reading more of this special issue as it clearly speaks directly to my research and hopefully I can also use it to spark wider discussion in the TELedvisor community.

What feels fresh and thrilling to me is that the lens of political ontology unlocks two important aspects of the work. First, it draws attention to the matter of justificatory politics, inviting us to interrupt the discourses that structure the accounts of our work as academic developers. While institutional capture provides academic development with much sought-after leverage and profile, it has the uncanny effect too of infantilising academic developers’ professional imagination such that our identities, values and actions can appear to outsiders as inseparable from what an institution requires. Second, the focus on ontology locates these interruptions as individual and collective acts of political agency, inciting us to lead more public conversations about our values at exactly the time when higher education’s purpose has multiplied. Without these conversations, there may be a temptation to position academic developers as flexible and enterprising operators advocating on behalf of greedy institutions (Sullivan, 2003) regardless of their own professional and personal values. Many of us would baulk at this suggestion while reflecting on its distinct likelihood (p.66)

No punches pulled there.

 

Research update #35 – Writing like a proper academic

My writing style in this blog is intended to be conversational and focused on using the act of writing to help me to give form to my ideas. So sometimes it can be insightful and sometimes it can be somewhat more rambling. I’ve been very conscious the whole way through that this is not the style that I will need to employ when I’m actually writing my thesis.

Interestingly (perhaps) I had a bit of a mental to-and-fro in that last sentence between using ’employ’ or ‘use’. Nine times out of ten I would’ve gone with ‘use’, as I believe in simple and concise language but maybe because I’m thinking about how I will need to write in the future, I went with the more formal ’employ’. Or maybe the rhythm of the words worked better with ’employ’ as there is something strangely musical in language that seems important when I write. Anyway, I did mention that I can sometimes be rambly.

This self-consciousness about my writing style has risen up a little lately as I’ve been reading some of the blog posts of my SOCRMx colleagues. Many of them are doing the MOOC for course credit, so it could simply be that they are writing as they believe they are expected to or perhaps have gotten into the habit of doing, but it is still a style that I feel somewhat removed from.

Which is why I was happy to come across this post from one of my two favourite PhD gurus, Inger “Thesis Whisperer” Mewburn. With a title like “Academic writing is like a painful upper-class dinner party” you can probably work out where she is going with it. In a nutshell, her argument is that to be taken seriously in academia, you need to write like an “uptight white person”.

Meaning essentially that caution, nuance and form rule the day, with the choice of words offering worlds of hidden meaning about your actual, never to be expressed feelings. Using ‘assert’ rather than ‘argue’ is effectively a headbutt to the credibility of the author that you are discussing as it suggests that they are incapable of rationally supporting their idea and instead need to resort to an appeal to authority to make their point. (I have a feeling that I’ve probably used ‘assert’ at some point when I simply felt that I’d been overusing ‘argue’ so I’ll be paying particular attention here)

All of which brings me back to something that I’ve previously reflected on here, which is that your reader’s – and more importantly your reviewer’s and assessor’s – personal tastes can carry far more weight in how your work is received than your ideas. I can appreciate that forms of communication evolve over time and become significant because they demonstrate an understanding of certain key concepts of scholarship, but overall I find it a shame that vital ideas might be disregarded because they aren’t expressed in the appropriate fashion. A few commenters at the end of the post were outraged that Inger was reinforcing this dominant paradigm and vowed never to buy her book, but I think they missed the point. Inger was talking about what is and they are focused on what should be. Her core idea was that communication should still be clear and accessible where possible, but that it will be read in particular ways by an audience and it is important to be mindful of how that audience reads if you want to communicate with them.

She also includes a link to an incredibly handy verb cheat sheet divided by whether you think the work that you are describing is awesome, neutral or poor. She makes the point that this is written for research in her domain – part social sciences and part education – and people need to find their own but given that her domain is mine, I’m pretty happy to have it as a starting point.

Thanks Thesis Whisperer

Week #5: SOCRMx – moving into analysis

Maybe I simply don’t have enough experience in this area but I have to say that I’m struggling at the moment. I’m still pushing through the MOOC – alongside probably 3 or 4 other people still responding to the activities and posting in the discussion forum – but the lecturers seem to have gone MIA. There is no feedback from them on anything and I think that the rest of the people participating are mainly here because it’s a formal course-credit unit that they are undertaking.

(This is why their posts are so much better written and more deeply considered than mine but that’s ok)

There was a nice discussion early on, though, of how data gets filtered, which I’ll quote:

Hardy and Bryman (2004) argue that some key dimensions of analysis apply across qualitative/quantitative approaches (pp.4-12) – including a focus on answering research questions and relating analysis to the literature; and a commitment to avoiding deliberate distortion, and being transparent about how findings were arrived at. They also discuss data reduction as a core element of analysis:

“to analyze or to provide an analysis will always involve a notion of reducing the amount of data we have collected so that capsule statements about the data can be provided.” (p.4)

So we’re starting to tap into the analysis side of things and have been asked to re-read the papers examined last week with an eye for how they approached analysis. The first is qual and the second is quant.

For what it’s worth, these are my responses.

Questions for discussion:

Why do you think Paddock chose narratives as a way of conveying the main themes in her research?

The research is about lived experiences – “a case study research strategy suits the imperative to explore the dynamic relationships between these sites”

What is the impact for you of the way the interview talk is presented? What is the point of the researcher noting points of laughter, for example? What about filler sounds like ‘erm’?

Helps to convey the voice of the subject and humanise them.

How does Paddock go about building a case for the interpretations she is making? How does she compel you, as a reader, to take her findings seriously? Share a specific example of how you think this is done in this article.

Ties it to theoretical concepts. For example, an interview excerpt like “They’re very uncritical about that sort of things I’m criticising in terms of the consumerist culture, cheap food, not worrying about where the stuff comes from, how far it’s come or how it’s produced” is linked directly to Bourdieu’s cultural capital.

Interviewees use many emotive words in the excerpts presented here, but Paddock has focused in on the use of the word ‘disgusting’, and developed this through her analysis. How does this concept help her link the data with her theoretical perspective?

Used to differentiate class values

Paddock’s main argument is that food is an expression of social class. Looking just at the interview excerpts presented here, what other ideas or research questions do you think a researcher could explore?

Education, privilege, consumer culture

 

Overall I struggled with this paper because the author didn’t explicitly describe her analysis process in the paper. She just seemed to dive into discussing the findings and how the quotes tied in to the theory.

Paper 2: Kan, M-Y., Laurie, H. 2016. Who Is Doing the Housework in Multicultural Britain? Sociology. Available: https://doi.org/10.1177/0038038516674674

 

The researchers here conducted secondary analysis of an existing dataset (the UK Household Longitudinal Study, https://www.understandingsociety.ac.uk). What are some advantages and disadvantages of secondary analysis for exploring this topic? (hint: there are some noted at various points in the paper)

Advantages – Practicality, addressing issues not previously covered by the original researchers,

Disadvantages – data hasn’t been collected to respond specifically to the research questions,

How does the concept of intersectionality allow the researchers to build on previous research in this area?

Offers a new lens to examine relationships in the data

 

Choose a term you aren’t familiar with from the Analysis Approach section of the article on page 8 and do some reading online to find out more about what it means (for example: cross-sectional analysis; multivariate OLS regressions; interaction effects). Can you learn enough about this to explain it in the discussion forum? (if you are already very familiar with statistical analysis, take an opportunity to comment on some other participants’ definitions)

A cross-sectional analysis looks at a broad selection of subjects at a single point in time, while a longitudinal study follows subjects over a significantly longer period.
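
While I was reading up on the other terms, I also sketched out what a regression with an interaction effect looks like in Python using statsmodels. The data here is completely made up and has nothing to do with the actual study; the ‘gender * group’ bit fits both main effects and the interaction between them:

```python
# Invented example: does the gap in housework hours between two genders differ across groups?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "hours":  [16, 14, 15, 6, 7, 5, 20, 18, 19, 9, 8, 10],
    "gender": ["f", "f", "f", "m", "m", "m", "f", "f", "f", "m", "m", "m"],
    "group":  ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"],
})

model = smf.ols("hours ~ gender * group", data=df).fit()
print(model.summary())  # the gender:group term is the interaction effect
```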

How do Kan and Laurie go about building a case for the interpretations they are making? How do they compel you, as a reader, to take their findings seriously? Share a specific example of how you think this is done in this article.

I was concerned that correlation was tied too much to causation. In explaining some of the possible reasons for differences by ethnicity, broad claims were made about the nature of entire cultures that – while perhaps reflective of the quant data – seemed to have no other supporting evidence beyond assertion.

Research update #34: Learning little things

It’s been way too long since I’ve posted about my research and that’s not great. I seem to be finding a lot of legitimate-seeming reasons to do other things – also getting sick – and things feel a little out of control.

There is an overall plan – I’ve booked in December to write the first draft of my research proposal, including most importantly the lit review and I think I’ll spend most of November getting ducks in a row for that. The SOCRMx MOOC is helping me to understand research concepts and language a little better and I feel like it will give me enough to get through.

One thing that I have been excited to find is what p means in statistical tables. As I’ve surely mentioned, I’ve not spent a lot of time studying research methodology and pretty well no time at all working on stats. I managed to work out that n is the number of participants in a study – however it was only this afternoon that I learnt that this is referred to as the frequency. But I’ve always been baffled by what the p column meant.

Turns out that it relates to the null hypothesis – the assumption that the two associated variables in the table have no relationship. So a p value of 0.623 means that, if there really were no relationship, you’d expect to see an association at least as strong as the one between the thing in the column and the thing in the row about 62.3% of the time – in other words, it could very easily just be coincidence. I still have no idea how that score is calculated by hand, but little steps (there’s a tiny sketch below of getting a computer to do it for you).
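
For the record, here’s that tiny sketch – a completely made-up two-by-two table of counts, just to see a computer produce a p value:

```python
# Made-up 2x2 table of counts: is the row category related to the column category?
from scipy.stats import chi2_contingency

table = [[30, 20],   # e.g. group 1: yes / no
         [25, 25]]   # e.g. group 2: yes / no

chi2, p, dof, expected = chi2_contingency(table)
print(p)  # a large p means this pattern is easy to get by chance if there is really no relationship
```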

The explanations of p and n came from a particularly user-friendly guide to research concepts at https://saylordotorg.github.io/text_principles-of-sociological-inquiry-qualitative-and-quantitative-methods/index.html (Chapter 14)

This I found through the SOCRMx MOOC, which is still kicking goals. I’m getting into some of the assessable work now. Given that I’m not formally taking the MOOC – as in not being accredited for it – I’m toing and froing about the next piece of work: reading a 17-page paper about food and culture and then taking a quiz about my understanding of the research methodologies. But I’m here now, so I guess I might as well.

Reading some of the other student work – several people here are doing this as part of their coursework – I’m highly conscious of the fact that I still really don’t write in an ‘academic’ style and I worry that this makes me look dumb. For the most part it’s a political or ideological decision – I read something by John Ralston Saul a long time ago about how language is used by technocrats to exclude people and it’s been important to me ever since to be accessible in my communication. I realise that there is a need to be mindful of one’s audience as well and I know that when I do formal academic writing, I will be more ‘proper’ but for the here and now, I really just prefer natural language.

Week #4 SOCRMx – Reflecting on methods

This week in the Social Research Methods MOOC we pause to take a breath and consider the approaches that we currently favour.

One of the activities is to reflect in our blog – so I guess this is that. I’m looking at surveys because I still need to get my head around discourse analysis, not having really used it before.

Reflecting on your chosen methods

Choose one of the approaches you’ve explored in previous weeks, and write a reflective post in your blog that answers the following questions. Work though these questions systematically, and try to write a paragraph or two for each:

What three (good) research questions could be answered using this approach?

I’m fairly focused on my current research questions at the moment and I would say that using surveys will help me to start answering them, but I certainly wouldn’t rely solely on surveys. The questions are: How do education advisors see their role and value in Tertiary Education? How are education advisor roles understood and valued by teachers and institutional management? What strategies are used in tertiary education to promote understanding of the roles of education advisors among teaching staff and more broadly within the institution?

What assumptions about the nature of knowledge (epistemology) seem to be associated with this approach?

The main assumption is that subjective or experience-based knowledge is sufficient. I don’t believe that this is the case. Clearly, a survey can be useful in collecting broad data about the attitudes that people claim or even believe that they hold; however, people can have a tendency to want to see themselves in the best possible light – the heroes of their own story – and responses might be more indicative of what people would like to think they believe than what their actions show them to believe.

What kinds of ethical issues arise?

This would depend on the design of the research. Assuming there is no need for participants to be subsequently identifiable, anonymity should enable respondents to express their opinions freely and without concern for consequences. Questions should be designed in a way that is not unnecessarily intrusive or likely to influence the way that respondents answer. I’d also assume that good research design would ensure that the demographics of survey participants are reflective of the community being studied.

What would “validity” imply in a project that used this approach?

I would say that ‘validity’ would require addressing some of the issues that I’ve already raised. Primarily that the survey itself could be relied upon to collect data that accurately reflects the opinions of the survey respondents without influencing these opinions or asking ambiguous questions that could be interpreted in different ways. My overall preference would be for the survey to be one part of a larger research project that provides data from different sources that can be used to provide greater ‘validity’.

What are some of the practical or ethical issues that would need to be considered?

The survey would need to be anonymous and the data kept securely. Questions should be designed to be as clear and neutral as possible and a sufficiently representative sample of participants obtained. Given the number of surveys that people get asked to complete these days, ensuring that people have a clear understanding of the purpose and value of the research would be vital. For the same reason, I’d suggest that we have a responsibility to ask people only for the information that we need and nothing more.

And finally, find and reference at least two published articles that have used this approach (aside from the examples given in this course). Make some notes about how the approach is described and used in each paper, linking to your reflections above.

McInnis, C. (1998). Academics and Professional Administrators in Australian Universities: dissolving boundaries and new tensions. Journal of Higher Education Policy and Management, 20(2), 161–173.

Comparison of two surveys, one of academic staff (1993) and one of administrative/professional staff (1996). Analysis of results, some additional questions were added to the second survey

Wohlmuther, S. (2008). “Sleeping with the enemy”: how far are you prepared to go to make a difference? A look at the divide between academic and allied staff. Journal of Higher Education Policy and Management, 30(4), 325–337.

Based on an anonymous online survey of 29% of all staff – academic and professional – at her institution, which included questions about demographics, perceptions of the nature of their roles, the ‘divide’ and the value of different types of staff in relation to strategic priorities.

Both surveys related to workplace issues and attitudes, which meant that privacy was a significant factor. I was less impressed with the approach taken by Wohlmuther, which I felt was overly ambiguous in parts.

“Survey respondents were asked what percentage of their time they spent on allied work and what percentage of their time they should spend on allied work. The term ‘allied work’ was not defined. It was left to the respondent to interpret what they meant by allied work” (p.330)

I do still think that I’ll use surveys as a starting point but expect to then take this information and use it to help design interviews and also to inform analysis of other sources of data.

Week #3 SOCRMx – Discourse Analysis

When I first stumbled across Foucault in some paper since cast to the depths of my mind, my immediate response was that it was wanky and unhelpful theoretical tosh. I’ll admit that I struggled to get my head around it but my broad takeaway was that it sat too far in the whole post-modern create your own reality school that has since brought us ‘fake news’ and Donald Trump.

Imagine my surprise then as I worked through the resources relating to Discourse Analysis – and in particular five different theoretical approaches to doing it – only to find that Foucauldian Discourse Analysis might in fact be the closest thing to what I need in exploring the language used around edvisors to see if and how it shapes their status and identity in tertiary education institutions. The other option is Critical Discourse Analysis, which kind of works in the same way but seems slightly angrier about it. Maybe not angrier, but you seem to need to start from the position that there is an existing problem (which there probably is) and then dig into what you’re going to do about it. Both are on the table for now anyway.

The great news is that from what I knew of this a week ago – that it existed and a couple of people had mentioned that it sounded like what I wanted to do – I now think that I can see why and how it might be valuable. Not that I know how to do it yet, but that will come with time.

So once again the EdinburghX SOCRMx MOOC is coming through for me. I had hoped to have explored 2-3 additional topics by now but came down horribly sick late last week and am barely just functional again now.

For what it’s worth, here are my other scratch notes on Discourse Analysis taken from the course so far:

Qualitative approach to the study of language in use – spoken or text.

Covers diverse sources from interviews/focus groups to secondary material such as archival material, policy documents, social media and so on.

Various ways of doing it from the micro (sentence by sentence) to the macro (overall impact of how language is used) depending on the theoretical framework chosen.

References: Discourse – David Howarth and Analysing Discourse – Norman Fairclough (more practical)

Common criticisms of DA – it’s idealist (the world is just a product of our minds) and relativist (anything goes). Also that Discourse Analysts confuse changing the way that we talk about a thing with actually changing the thing itself. Maybe, maybe not.

“Critical discourse analysis is actually really interested in the ways in which systems of representation have actual material effects and asymmetrical effects on the distribution of burdens and benefits on particular social groups, access to resources and so on and so forth” (MOOC video introduction)

There are many different types of discourse analysis, including conversation analysis, which analyses talk in detail (see Charles Antaki’s excellent web site for a good introduction to conversation analysis), and critical discourse analysis, which pays particular attention to how relations of power and domination are enacted through discourse.

An important aspect of discourse analysis, for our purposes, is that it treats language as action. As Gee puts it, language “allows us to do things and be things… saying things in language never goes without also doing things and being things” (Gee, 2011, p.1). It also places importance on context: “to understand anything fully you need to know who is saying it and what the person saying it is trying to do” (ibid, p.2).

Not Conversation Analysis for my work

Critical Discourse Analysis – about power relationships and social issues. Almost seems too loaded? Documents that seek to present particular political positions

Foucauldian Discourse Analysis might be relevant – how language shapes identity

There was also an assignment for us to try it out with. One of my major interests is job advertisements, which is perhaps not the best place to start given how formalised the structures of these things are but I did it all the same. Outlaw Country!

This is the sample text:

This is a new open-ended, part-time (0.5 FTE) post in the E-Learning Development Team, which has been created to support the development of the University’s online distance learning provision. The role holder will provide application management support to academic programme teams for the delivery of fully online courses. In the performance of these duties the role holder will coordinate the registration of courses, students and staff on the University’s Canvas learning management system (LMS).

The post will provide first-line user support to staff and second-line support to students, responding to queries on the Canvas LMS. The post requires a combination of good technological skills, awareness of course and user administration processes and expertise in delivering training and support services. Creative approaches to problem solving and the ability to learn and apply new skills quickly will be necessary, as well as good organisational skills, excellent interpersonal skills and above all, a strong commitment to customer service.

The role forms part of a small team working to the highest standards and best practices for online learning. You will be expected to work on your own initiative, leading staff training and user support services, as well as working effectively within a team.

These are my responses.

1. Significance: The nature of the text is highly specific and directive. The requirements expected of the reader are made explicit with the use of terms like “The post requires”, “will be necessary” and “you will be expected”. As a job advertisement this is fairly standard language. The use of “and above all” gives extra weighting to the need for a “strong commitment to customer service”.

2. Practices: This text is being used to describe a recruitment process.

3. Identities: This text describes in detail the characteristics that the (suitable) reader should possess and explicitly states their relationships with other people and groups described. This positions the writer very much as the person holding the power.

4. Relationships: The text defines the relationship between the reader (if successful) and stakeholders in the university, and also the relationship between the reader and writer (employee/employer).

5. Politics: The nature of a job advertisement is to describe ‘how things should be’. It broadly pushes a line that the institution cares about quality teaching and learning and also quality customer support.

6. Connections: Everything is relevant to everything else in this piece of text because it has a singular focus on the specific goal of recruiting the right person.

7. Sign systems and knowledge: Some of the language used assumes that a certain type of knowledge relating to technology enhanced learning is possessed by the reader. It is heavily factual and not supportive of different interpretations of what is written.

 

I don’t know if I’m ‘doing it right’ particularly but it did make me think a little more about the nature of the power relationships expressed in job ads and the claims that they make to reflect an absolute truth in reality. So that seems like a thing.

I haven’t taken a look at the discussion posts for the other topics, but the fact that there are only 3 other posts about Discourse Analysis in this MOOC after 3 weeks makes me wonder whether it’s simply a topic that people aren’t engaging with or whether people aren’t really engaging with the MOOC overall. Hopefully it’s the former, because I’m getting a lot out of this.