The reusability paradox – WTF?


The reusability paradox.  How can reusability be bad?

When first presented with this concept last year, I must admit I really did struggle with it.  As a techie, every fibre of my being compels me to focus on reuse.  Hence, the paradox.  After some weeks of struggling with the reusability paradox, it did start to make some sense, emphasis on ‘some’.

I have recently revisited this concept, both in discussion with my (to-be) PhD supervisor and in my day-to-day work as an Educational Developer/Lecturer/Educational Technologist.  My revisit has prompted this blog post as a way of recording some connections I have made to real-world examples of this phenomenon, and how this impacts my thinking about technology (re)use.  This thinking is far from crystallised.

David Wiley explains the reusability paradox in the context of reusable learning objects, and more broadly, the open content movement.  When this concept was initially presented to me, it was already positioned in terms of technology.  I find it easier to start with the original context in learning design.

What is the reusability paradox?

David explains it quite succinctly as:

A content module’s stand-alone pedagogical effectiveness is inversely proportional to its reusability.

He explains that the more contextualised a learning object is made, the more meaningful it becomes to that context.  However, it also means the learning object becomes less reusable to other contexts.  We have a trade-off situation – effectiveness (in learning) vs. efficiency (in scalability). David concludes:

It turns out that reusability and pedagogical effectiveness are completely orthogonal to each other. Therefore, pedagogical effectiveness and potential for reuse are completely at odds with one another, unless the end user is permitted to edit the learning object. The application of an open license to a learning object resolves the paradox.

I don’t think an open licence alone will resolve the paradox, but that is a discussion for another post.

The reusability paradox in the wild

So enough of abstract concepts – how does the reusability paradox play out in the wild and in other ways besides learning objects?

“I see ~~dead people~~ the reusability paradox.”

I often see the reusability paradox when working with lecturers – conceptually the same as David Wiley explains, but at a higher level.  My particular experience relates to the contention around reusing units of study between different awards/degrees.  This is pretty typical in the STEM areas – in my institution we refer to them as service courses (units).  I work with a science school, and a key foundation unit of study taught from the school is anatomy and physiology.  There would be a dozen or more degrees that require students to have a sound knowledge in this area.

Conventional management wisdom seeks to reuse anatomy and physiology units for health related-degrees.  This is efficient use of resources, right?  And “why re-invent the wheel?”

But before I explore those questions, let’s first take a step back for a moment.

The key criterion for reuse is applicability to other contexts.  If there is sufficient overlap or congruence with another context, then the reusability factor can be considered high, and the resource thus worthy of reuse.  Learning is very contextual, particularly when you factor in, as David does, the underpinning of constructivist learning theory.  Learners construct new knowledge upon their own existing knowledge.  This is very individualised, based on each learner’s past experiences and ways of thinking.

Learning designers have some tricks to help deal with such diversity, such as researching your cohort, conducting a needs analysis, and ultimately categorising learners and focusing on the majority.  Clearly, this is flawed – but this is how massification of education works.  For instance, if you are preparing a unit of study for nursing students, then you can make some reasonable assumptions about those students’ motivations (i.e. they want to become a nurse); their prior formal learning (i.e. previous units studied within a structured nursing curriculum); and even about smaller groups such as pathways to study (i.e. were they enrolled nurses – ENs – or school-leavers).  These assumptions of course aren’t always correct.  Nevertheless, the key point is that this unit of study is reused by all nursing students studying for the Bachelor of Nursing degree.  A more or less reasonable trade-off between effectiveness and efficiency.

So let’s return to the example of an anatomy and physiology unit of study.  In this instance, we see different discipline areas, albeit health related, attempting to reuse a unit of study.  Despite all being health related, a paramedic student’s needs aren’t the same as a physiotherapy student’s, or a medical science student’s.  And while some disciplines hail from within the same school, other disciplines are situated elsewhere within the organisational structure.  Now, consider the diversity of the cohort.

So to cope with this type of diversity, I typically see three approaches:

  1. Make the unit of study as abstract (decontextualised) as possible, making no assumptions about learners or their backgrounds, and “teach the facts”.
  2. Design the unit to cope with the highest represented context (i.e. the discipline with the most students).
  3. Design the unit of study to address multiple contexts, in an attempt to make it meaningful to multiple disciplinary groups.

In other words, make it meaningful for no-one; make it meaningful to the biggest group, and nobody else; or, try to make it meaningful for everyone.

Approach 1 is obviously ineffective, especially considering constructivist thinking.  You end up with students asking “why do I need to know this?”, or “that course was so dry and boring.”

Approach 2, while not quite as flawed as approach 1, can be less than ideal.  Particularly when the highest represented group is small compared to the entire cohort.  In such cases, the other groups feel marginalised: “I want to be a Paramed, not a Physiotherapist.”

Approach 3 can also be ineffective, because you can end up with a study unit that is incredibly complex.  This group of students does this, that group does that.  As the lecturer, you have to manage the mixture.  The students too can become confused about requirements.  You can also run into “equity” type policy constraints, such as “all students must do the same assessments.”  This is an important point.  If you end up with such complexity, you really have to ask the question, “why not just have separate units of study?”

But solving this challenge isn’t the focus of my blog post.

The Reusability Paradox as it Applies to Education Technology

So does the concept translate to technology?  Yes it does!  And similar issues arise as a result.

Recall the three approaches I see people use to deal with the challenges of reuse for multiple contexts:

  1. Make abstract
  2. Contextualise for the largest group
  3. Contextualise in multiple ways for multiple groups

Let’s consider approach 1.

David Wiley says of the reusability paradox:

The purpose of learning objects and their reality seem to be at odds with one another. On the one hand, the smaller designers create their learning objects, the more reusable those objects will be. On the other hand, the smaller learning objects are, the more likely it is that only humans will be able to assemble them into meaningful instruction.

I think this statement has some “translatability” to an education technology context as:

On the one hand, the smaller developers create their learning technology tools (e.g. programming libraries rather than complete systems), the more reusable those tools will be. On the other hand, the smaller learning technology tools are, the more likely it is that only developers (and not designers) will be able to assemble them into functional learning technologies.

David Wiley also says:

To make learning objects maximally reusable, learning objects should contain as little context as possible.

To remove context is to make something more abstract – to take away intrinsic meaning or specific function.  While this indeed makes things more reusable, it also requires re-contextualisation.  In the context of technology, abstraction leads to dependence on the developer.
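Wiley’s trade-off can be sketched in code.  The snippet below is a hypothetical illustration – the function names and the nursing grading rules are invented for the example, not drawn from any real system.  The generic helper is maximally reusable but carries no meaning on its own; the contextualised function is immediately meaningful to one course and nearly useless anywhere else.

```python
# A generic, decontextualised building block: highly reusable,
# but it means nothing until a developer assembles it into a course.
def weighted_score(marks, weights):
    """Combine assessment marks using arbitrary weights."""
    return sum(m * w for m, w in zip(marks, weights)) / sum(weights)


# A contextualised tool: instantly meaningful to one (invented)
# nursing unit, and reusable almost nowhere else.
def nursing_unit_grade(exam_mark, placement_passed, med_calc_mark):
    """Grade for a hypothetical nursing unit: the clinical placement
    is pass/fail, and the medication-calculation test must be 100%."""
    if not placement_passed or med_calc_mark < 100:
        return "Fail"
    return "Pass" if exam_mark >= 50 else "Fail"
```

Stripping the nursing rules out of `nursing_unit_grade` until only something like `weighted_score` remains is exactly the abstraction step described above – and re-adding a new context is work that, in technology, typically only a developer can do.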

Let’s skip to approach 3.  With this approach, we end up with technology that attempts to do everything for everyone.  These technologies become so complicated to use that people simply don’t use them.  My favourite example of approach 3 is the Moodle Workshop activity, billed as a “powerful peer assessment activity”.  I consider myself to have a reasonable grasp of technology, and yet after 45 minutes of tinkering with the Workshop activity in Moodle, I gave up.  I have only seen one person at my institution use it.  It has so many options – too many options – because it tries to account for all the different ways one might attempt to embed peer assessment into their course.

So what about approach 2?  We reuse a learning technology without change – meaning it is focused on the majority of requirements (however that might be determined).  This is typical of COTS (commercial off the shelf) solutions.  This inevitably leads to functional gaps – “the system does this, but I want to do that.”  If the gap is substantial, it can lead to workarounds.

Does technology need to be reusable?

This is where I struggled last year with the reusability paradox.  If you can’t reuse a technology, then isn’t that a serious limitation?  Management are constantly looking to replicate successes – “This worked so well, so how can we use this in other areas?”

When I am creating/adapting/augmenting technology for others, I have to demonstrate “bang for buck” in terms of my time invested.  Does what I create at least pay for itself in affordances?  I normally look for economies of scale, and the obvious way is through reuse – it is usable by X number of people.  Management/decision-makers get this – easy.  However, technology can offer other economies.  For instance, depending on the technology, it may instead allow a specific group of people to do something much better, quicker or cheaper, or if it’s very innovative, something they couldn’t do before.  But that something might be very specific – so specific that it isn’t very reusable, and limited to a small audience.  Yet, if it still yields a net gain, is that bad?

What if a technology is so specific, it’s designed for just one person – yourself?


At some stage in our lives, we have all had to engage with some form of workaround to get from A to B.  Not just in terms of technology but life in general.

If you create a workaround, does it need to be reusable?  Perhaps not.  But what if you want it to be?  How can you go about it?

This is where my time (and thinking) ends for now.


Week 6: OERs – Reuse…Revise…Remix…Redistribute…

This blog post relates to my study of Open Educational Resources as part of my Emerging Technologies for Learning Program of study at the University of Manitoba. Our instructor has asked “How does [internationalisation & localisation] apply to OERs? And how can you adapt your own OER content to address issues of local and foreign culture?”

As with the creation of any artefact, consideration of the intended audience is paramount.  What do you assume they already know?  What do they need to know?  What are their life experiences?  What is their cultural background?  How will they use the artefact?  In terms of OERs, one of their strengths is the licensing that enables you to repurpose, revise, remix, and redistribute, taking into account the context in which the body of work is to be used.  So I guess the trick when producing OERs is to design them such that they are as easy as possible to repurpose for different audiences, rather than trying to make your work accessible to everyone.

The localisation of work is not necessarily limited to a region or ethnic culture.  It can in fact include organisational cultures.  I am considering my final project for the course and what body of work to produce.  I’d like to create something applicable to my place of work.  This means ensuring it is localised to my workplace culture, and aligns with the organisational goals, language (what organisation doesn’t have its own acronyms and idioms, for instance?), facilities and so on.  So re-purposing an OER can mean combinations of reuse, revision, remixing, and redistribution such that the final product meets an organisational need.

Factors to consider when localising content can be obvious, such as language.  If an artefact was written in Spanish, for example, it would be completely inaccessible to the likes of myself, who can speak nothing other than English.  Other factors are far more subtle, yet still significant.  For instance, it is common to use an analogy (or example) to teach a new concept or idea by drawing a parallel between a known concept and a new one.  What if the concept you assume to be already known by the learner is not known at all?  Your choice of analogy must be localised to match the context of the learner, or else it becomes meaningless.  While there are a growing number of software tools available that will translate one language to another, the more subtle nuances, such as analogies embedded within bodies of work, are harder to address.  Returning to my initial point of designing OERs such that they are easy to repurpose for different audiences, it would be useful to be able to mark up, within an OER, elements that are contextual, such as analogies, so that they can be interchanged to meet the needs of a particular audience.  When translating a body of work from one context to another, these marked-up areas can then be replaced with something more meaningful for the intended audience.
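As a rough sketch of that mark-up idea – the `{{analogy:…}}` marker syntax and the locale codes here are invented for illustration, not an existing OER standard – contextual elements could be tagged in the source text and swapped per audience:

```python
import re

# Core OER text with a marked-up, swappable analogy.
SOURCE = "Memory is like {{analogy:memory}}: it fades unless revisited."

# Locale-specific analogy banks (purely illustrative content).
ANALOGIES = {
    "au": {"memory": "a footpath worn through the bush"},
    "ca": {"memory": "a snowed-over trail"},
}

def localise(text, locale):
    """Replace each {{analogy:key}} marker with the local equivalent."""
    return re.sub(
        r"\{\{analogy:(\w+)\}\}",
        lambda m: ANALOGIES[locale][m.group(1)],
        text,
    )
```

The core explanation is reused verbatim; only the marked analogy changes, so the Australian and Canadian versions of the resource differ only in the culturally loaded fragment.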

Week 5: OERs – information accuracy and integrity

This blog post relates to my study of Open Educational Resources as part of my Emerging Technologies for Learning Program of study at the University of Manitoba.  This week our instructor has given us free rein for our weekly blog topic, in part asking “What are some of the issues that bother you about OER?”

Reflecting on the past 5 weeks of my study, I recall my very first blog post and a comment by a colleague and good friend, David Jones.  David suggested one of the problems for OERs was that everyone has their own preferred method of introducing a topic, and so there is a predisposition towards creating anew rather than re-using or re-purposing resources.  This fundamentally undermines the principles of OERs.  While I did concur with David’s comments, and drew a connection between this idea and, more broadly, the ideas of George Siemens around Groups and Networks, I have another related issue that I see for the future of OERs.  The predisposition of the teacher is only one small (but significant) part of a broader collection of varying factors that influence the design of an OER.  These varying factors are, collectively, what I would call the learning context.  I have blogged quite a bit about the significance of learning context in the past.

I believe the context of the learner is a critical input to the design of any learning artefact.  You wouldn’t create an online course for learners with poor Internet connectivity.  When doing instructional design, a typical early step is to analyse the learners and their context.

There are just so many factors to consider when designing a resource, not least the learners themselves.  Learner demographics, their previous knowledge and experience, their motivations for study, their work and family commitments, their culture and nationality, their access to technology, their competence with technology – there are so many dimensions.  In most cases, you can only speculate on some of these matters, but they all have an influence on the outcomes for the students.  When you are looking for OERs, you may need to contextualise them for you and your students.  Depending on the variation in context, a considerable re-write may be necessary to make the OER accessible to your own students.

There are also institutional factors to consider including your institution’s attitude towards OERs, copyright policy, publishing platforms (mobile devices, hardcopy print, LMS and so on).

My concern is that variations in learning context may significantly limit re-use of OERs.  I have previously commented that to maintain a healthy learning environment, it is important to have a good balance of re-use and adaptation of OERs:

Too much re-use will result in assimilation of ideas which can stifle innovation and stalls evolution. Too much creation anew or even adaption to an extent will limit the benefits of OERs in terms of sharing the costs of development of such resources, as you are constantly re-inventing the wheel.

It is the latter that I am concerned will be the downfall of OERs.

Further questions put forward by our instructor for this week are “Should some OERs be ‘official’ and others ‘unofficial’? Why? Should this be a question to ask?”

This goes to the integrity of OERs, but perhaps the question should be “how do we educate educators to write professionally in an information age?”  My class-peer Leah has written an excellent blog post commenting on attitudes towards the citation of Wikipedia in scholarly articles.  The academy generally frowns on the use of Wikipedia.  Yet Leah quite rightly makes the point that many Wikipedia articles are critiqued by many more people than any “official” publication.  Even Google has a similar opinion.  Like any source of information, it needs to be evaluated in terms of accuracy, authenticity and integrity – all important information literacy skills of the 21st century, and something that we should all do when reading/viewing/listening online.  It is a self-publishing world.  Consider the authorship of the article, its history, and the citations on which the article’s content is based – all prominently available in Wikipedia.  Then make a reasoned judgement on the credibility of the information, and if it checks out, why not use it?

Learner Autonomy, Control and the Balance of Power

This blog post relates to my study of CCK11.

I have been struggling with how to express my view of the future role of educators in the 21st century.  I have had an idea that centres around learner-centredness, control and individualism, but simply haven’t been able to articulate it in my writing.

I have just listened to the facilitators’ Elluminate session for the 11th of March.  I am excited to say that after listening to this session, I think I have figured it out, with the help of the participants and the facilitators.  So this article is my first attempt at putting into writing what I believe is the future role of the educator.

Towards the end of the Elluminate session, discussion centred around learner empowerment.  The class was asked “What Can Educators Do to Empower learners?”  Many responses included the idea that learners should have choices and control over learning.  Stephen provided a quote from an article by Tony Bates that reviews an article by Sarah Guri-Rosenblit and Begoña Gros, where they state:

… the time seems ripe to acknowledge the fact that putting the students in the center of the learning process, and assuming that the information and communication technologies have the power of turning them into self-directed and autonomous learners have turned out to be quite naïve and unsubstantiated assumptions.

Stephen’s interpretation of the article is that in order to educate people properly, you have to exert power and control.  This then implies that the above idea of empowerment is incorrect.

So it would seem that there are two opposing positions.

  1. That learning should be learner-focused and learner-controlled.  Learners decide for themselves what they need to learn, and how to learn it.  Learners are self-sufficient and autonomous.
  2. That learners are incapable of managing their own learning and therefore must be managed and controlled by the teacher – by an expert.  Learning should produce consistent outcomes to assure competency.

Is this a dichotomy?  Funnily enough, a participant in the Elluminate session made the point: “its not either / or”.

I have this little philosophy that when faced with two extremes, often (but not always) the answer is somewhere in the middle.  In this case, neither extreme is ideal, so the hard part is finding that middle ground.  The middle is a compromise in gaining most of the benefits of each extreme, with the least of the drawbacks. In this vein, I can see benefits and drawbacks from both positions above.  Too much control and learners become stifled, constrained, inculcated – they become a cog in “the [education] system”.  Too little control and in some circumstances, the learner may be unable to manage their learning to achieve their goals.

So from my perspective, learning can be managed and controlled by a teacher to the extent that it is necessary.  Leading into adult education, teachers and learners should work together to determine when this is necessary and to what extent.  A partnership, if you will.  It is necessary when the learner does not yet know enough to make informed decisions about how they go about learning something.  The old adage “you don’t know what you don’t know” fits here, for example.  Think of this level of control as a bootstrapping process (if you are knowledgeable about computers).  Wikipedia describes bootstrapping (or booting a computer) as “a technique by which a simple computer program activates a more complicated system of programs.”  This is part of a computer’s startup process.  The teacher provides the simple (or not so simple) computer program that activates a more complicated system of programs – self-directed learning.  Put another way, the teacher provides the structure to assist the learner in making good decisions about how to learn what they wish to learn and achieve through the learning.  Depending on the context, this may range from little or no assistance through to continuous and comprehensive management and support of learning.
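The bootstrapping analogy can be made concrete with a toy sketch – all names here are invented for illustration.  A small, teacher-provided “loader” sets up just enough structure, then hands control over to the learner’s own, larger process:

```python
# The 'more complicated system of programs' the loader activates:
# the learner's own self-directed study process.
def self_directed_study(goals):
    return [f"explore: {g}" for g in goals] + [f"reflect: {g}" for g in goals]


# The teacher's simple 'program': minimal scaffolding that starts
# the process, then hands over to the learner.
def bootstrap(goals):
    orientation = [f"orient: {g}" for g in goals]  # teacher-provided structure
    return orientation + self_directed_study(goals)
```

As with a computer, the loader contributes only the first step; everything after the hand-over is driven by the bigger system it started.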

Guri-Rosenblit and Gros continue in their concluding remarks: “Most students, even digital natives that were born with a mouse in their hand, are unable and unwilling to control fully or largely their studies.”  I have blogged previously on the notion of learner management in the context of PLEs/PLNs, but I believe it also fits here.  The excerpt below from my article is in response to the suggestion by Educause that “… less experienced students may not be ready for the responsibility that comes with building and managing a PLE”:

Managing one’s own learning is not a trivial task – it’s a big responsibility.  Is it reasonable to expect that everyone be able to manage their own learning to this level of detail?  A noble vision, but is it practical or reasonably attainable, or simply a fairy-tale view of education? … I believe this downside is understated, and why I don’t believe this ideal [PLEs/PLNs] is realistic in a global way – a panacea.

Younger learners will require much more bootstrapping than more mature learners – generally. 🙂  Another trend relates to the motivation of our learners.  Why are they learning something?  Is it to satisfy a burning desire, or to attain a piece of paper to get a job?  Is it intrinsic or extrinsic motivation?  Consider the example used by John Biggs in his theory of Constructive Alignment.  He described two very different students, as I explain in my review of his book Teaching for Quality Learning at University:

Biggs introduces two student characters that represent two distinct groups of students that comprise a class.  They are also featured in a short film titled Teaching Teaching & Understanding Understanding.  Their names are Susan and Robert.  Susan is the typical academically minded student.  She comes to classes prepared, including pre-reading class materials, reflection on this material, and questions about her understanding of it.  Then there is Robert.  Robert is characterised as a student who is there out of necessity rather than desire.  He only wants to achieve sufficiently to be able to get a good job.  The course he is doing may not have been his first choice.  He comes to class with little preparation or prior reflection.  He hopes to rote learn and memorise to be able to pass his course.

Robert is not ready to manage his own learning – to be an autonomous learner, and requires considerably more bootstrapping than does Susan.  Susan is motivated to learn, rather than obtain a piece of paper (qualification).  Susan is better prepared and motivated to manage her learning and be autonomous.  She will require less bootstrapping because she is intrinsically motivated to take on the role of being an autonomous learner.

But bootstrapping only provides the contextual knowledge and structure required to support learners to the point that they can autonomously carry on and report back if necessary.  The skills to be autonomous and self-sufficient must also be learned.

This is where I believe our modern education system is letting down society.  The balance isn’t right.  In modern times it is becoming increasingly focused on control and measurement, particularly in K-12, to the detriment of broader skills such as learner autonomy.  The net effect of this focus is task corruption.  It’s no longer about the learning.  Teachers are focused on the measurement.  They are teaching to the test.  As learners move into higher education, they have been conditioned to do the same – learn to the test.  How many times have you been asked, “do I need to know this for the exam?”  So we have our measurement, the learner can do xyz in a classroom with an invigilator, pen and paper, and a wall-clock.  Rowntree said of exams, as quoted by Phillips:

The traditional three hour examination tests the student’s ability to write at abnormal speed, under unusual stress, on someone else’s topic without reference to his customary sources of information, and with a premium on question spotting, lucky memorisation, and often on readiness to attempt a cockshy at problems that would confound the subject’s experts

Is this how we perform in the real world?  Modern education is an assembly line – a sausage factory, churning out shrink-wrapped uniform graduates, with a GPA stamped on their forehead, in the name of quality and standards.  I acknowledge that graduates need to differentiate themselves and that employment is a competitive market, but when you are learning to a test, ultimately how meaningful is a GPA?  My point is that we are too focused on measurement.  We need to get the balance right.

I recently commented on Stephen Downes’ article 10 Things you really need to learn. With the exception of reading, none were integral components of my formal education.  Yet, they are sound in my view because they develop your ability to be self-sufficient – to be an autonomous learner.

So my hope for the future of education is that we can get the balance right.  That learners are sufficiently supported and encouraged to develop the life-long skills of learner autonomy and learning management.  Yet, there are also appropriate structures – a bootstrapping process – to help learners make their way and achieve their goals, whatever they happen to be (personal enlightenment, or a decent job).


Groups and Networks

This blog post relates to my study of CCK.

In the week 5 material for the course, I have watched a presentation by George Siemens relating to groups and networks.  I really enjoyed watching this presentation, as much of the content resonated with me and my context.  I am blogging some of the more fascinating concepts that George highlights in the presentation.

Connectives: autonomy of self (mosaic)

George talks about human nature.  While we like to be social and be part of things larger than ourselves, such as groups, networks and so on, we also have a desire to retain, in part, our own sense of self.  To have some level of autonomy and individualism, and recognition or ownership of our own contributions to the network.  When engaging with networks largely in this way, George describes these people as connectives.  George has used the analogy of a mosaic, which I like.  What comes to my mind is a patchwork quilt – connectives’ contributions aren’t always the same (different colours and textures), and don’t always neatly fit together (jagged edges), yet you still have a whole (the patchwork quilt – the network).

In networks comprising mostly connectives, there is greater diversity of views and ideas, and greater autonomy.  The network is less integrated and co-ordinated.

Connectives retain a sense of sovereignty within the larger group.

Collectives: subsumption of self (melting pot)

As connectedness grows stronger, the diversity of views and ideas normalises into collective views and ideas, with a loss of autonomy, but the network becomes more co-ordinated and integrated.  Co-ordinated in the sense that there is common understanding, common goals and common views.

When engaging in networks in this way, people are known as collectives.

So following on from the patchwork quilt analogy of connectives, a collective is a quilt that is uniform in colour and texture.  Focusing on the colour, it is derived from the colours of each individual contributor, but unlike connective quilts (patchwork), collective quilts converge to the one shade.

Achievement of the complex

George talks about coercion to the norm in group environments.  Connectives who express different views or ideas from the norm of the group are pressured to assimilate to the group views.

This presents challenges in the achievement of complex tasks that require groups to work together.  There needs to be a level of trust, and some level of common understanding and agreed goals amongst the group.  But at the same time, it is important to fulfil the needs of human nature and retain some level of autonomy and individualism.

I think an excellent example of this balance between connective and collective group engagement is the continent of Europe.  Europe comprises many different countries, all with their own cultures and attitudes, and yet Europe can also function as a whole through the European Union.  Take, for example, the adoption of the Euro as a continental currency.  There was great benefit to the individual countries of Europe in having a common currency (global strength compared to the $US and GBP).  However, if you take a look at the physical currency (i.e. coins and notes), they share the same size and shape, but the imprints are different – individual.

Innovation is deviation

This would be my favourite idea presented by George.  In a collective, where there are agreed views or ways of doing things, the suggestion of doing something different is often seen as a threat.  To innovate deviates from the norms of the group.  Yet innovation is a crucial part of any group – it is what keeps minds open, and possibilities possible. It also distinguishes individuals and groups from one another.

I see this in my workplace all the time.  My workplace I’m sure is not unique in this regard.  Those who deviate from commonly held beliefs or ways of doing things are shunned, or marginalised.  I have seen this happen to a former colleague.  Yet their contributions (as connectives) are incredibly valuable to the group or network.

Freedom vs. Control

Again, it’s about context.  The types of connections required to achieve certain outcomes are defined by the context in which they are to occur.  If you need to distinguish yourself from your competition, for example, then a certain level of freedom is necessary to operate outside of convention and discover new innovations.  However, if working to a specific goal that must be shared amongst a collective, then a level of control is necessary to ensure the goal is met.

This was a fascinating presentation, and it resonated with my life experiences considerably.

Damien Clark.

My position on Connectivism

This blog post relates to my study of CCK11, and is my submission for assignment 1 – my position on Connectivism.  As the word limit is quite low, I've linked to previous blog posts which provide greater depth of discussion and links supporting my assertions.

Clarify and state your position on connectivism

I was very excited to be doing this course. I was introduced to Connectivism in my instructional design course as part of my program with UManitoba back in 2009.  At that time, I was unsure about Connectivism and wanted to learn more before forming an opinion on its validity as a learning theory.

My current role with my employer is as an instructional designer.  My value system for learning theories centres mostly on usefulness, and at this stage, I'm not convinced of Connectivism's usefulness in terms of underpinning a learning design.  This isn't to say that it's not useful – I just haven't had enough experience with it to say that it is.  So I'm saddened to say that after 5 weeks studying Connectivism, I'm still largely a fence-sitter.  Hope this is okay, George. 🙂

For me, learning theories are not absolutes.  My view is that each learning theory is valid and useful for given contexts.  I have blogged extensively on this view over the past couple of years, increasingly so in recent weeks.  I found a real nugget in a video by Ian Robertson that provided concrete examples to illustrate my view about context and learning theories.  In this blog post, I reflected on what I thought were the right (and wrong) contexts for Connectivism.  A primary factor (at this point) is technological accessibility – where making connections is not easy, Connectivism struggles.  This is based on the importance George has placed on technological advancement as a primary driver for considering a new theory of learning.  Another significant factor is the discipline or focus of the learning, which I consider a weakness of the theory and discuss in greater detail later in this article.

Is it a new theory of learning?

For me at this stage, the stand-out elements of Connectivism that are novel are:

  1. Learning may reside in non-human appliances
  2. How can we continue to stay current in a rapidly evolving information ecology?
  3. Currency is the intent of all connectivist learning activities
  4. Decision-making is itself a learning process
  5. Capacity to know more is more critical than what is currently known
  6. How do learning theories address moments where performance is needed in the absence of complete understanding?

These aspects are the ones that resonate most with my life experiences as a learner.  However, these experiences have been very natural and organic.  This course as a MOOC is pseudo-organic.  Everybody has assembled to learn about Connectivism, but the learning is driven by a daily email digest, not purely by one’s own curiosity or need to solve a problem.  My reflections on this MOOC are detailed in a separate blog post.

Returning to the stand-out principles for me, I’d like to unpack these a little more…

Learning may reside in non-human appliances

For most of my adult life, I have used computers to organise my learning.  They have become an integral part of how I learn – whether it be storing information, finding information, reflecting on ideas, sharing ideas, receiving feedback and so on.  For many years, I have rarely bothered to commit knowledge to memory – instead, I have honed my skills in finding it when and where I need it.  If I need to remember the switches to a UNIX command, I access the online manual (using the man command).  If I want to recall my previous thoughts on a topic, I refer to my blog.  If I need to follow a policy for a task at work, I search the policy portal.  The technology becomes an extension of my learning.  It's more about learning to learn and self-sufficiency.  I recall George commenting that he would be lost if he were to lose the information on his computers, because it has become a fundamental element of how he learns.  I hope I have paraphrased that correctly, George. 🙂  I feel exactly the same way.
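As a small illustration of this look-it-up-on-demand habit, here is what consulting the UNIX online manual looks like in practice (a minimal sketch, assuming a typical Unix-like system with man pages installed; the exact output varies by platform):

```shell
# Recall the switches for the tar command by reading its manual page,
# rather than memorising them (output trimmed for brevity):
man tar | head -5

# When even the command name is forgotten, search manual page
# descriptions by keyword (equivalent to the apropos command):
man -k compress | head -3
```

The point is not the specific commands, but that the system itself stores the knowledge and I only need to retain the skill of retrieving it.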

How can we continue to stay current in a rapidly evolving information ecology?

I have been working in the IT and education industries for 15 years.  Both are constantly evolving and changing, so from the beginning of my working career, I have had to develop strategies for this challenge.

Currency is the intent of all connectivist learning activities

This links to the previous point – it's all about remaining current in an evolutionary environment.  How can I systematically remain current in a rapidly changing environment?

Decision-making is itself a learning process

Again, this links to the previous point.  Deciding what to learn, and how deeply to learn it, is a critical factor in an age of information abundance.  Is what I learn today going to be applicable in the near future?  You need to constantly reflect upon what you believe you know – challenge previously held assumptions in the light of perpetual change.  This also links with Dave Snowden's view that we are pattern-matching intelligences, rather than information-processing intelligences.

Capacity to know more is more critical than what is currently known

Again, a symptom of evolving contexts and related to decision-making.  What has worked in the past may no longer work due to changing context.

How do learning theories address moments where performance is needed in the absence of complete understanding?

This I can identify with again and again.  There are very few tasks or projects I have worked on where I knew everything I needed to produce a satisfactory output.  In my work history, there is very little repetitiveness – almost every day is a new challenge requiring me to develop new skills, ideas and ways of seeing the world.  I can only see this trend continuing.

What are the weaknesses of connectivism as formulated in this course?

Like all learning theories, Connectivism's application is contextual.  I don't think George considers Connectivism to be the silver bullet of learning theories, and really it's not.  It's a theory that incorporates the information era of the 21st century, responds to the challenges of learning in this era, and leverages the affordances of the technology of the time – global interconnectedness.

At times I wonder whether the discipline or topic area suits this style of learning design more so than another. Suifaijohnmak has written an article where he says:

… under a networked learning approach, where diversity of opinions are welcome in a MOOC, then tensions amongst different “voices” seem to be a natural emergence from the networks … This seems to be a natural opposite from the traditional “group” or “team”, or even the Community’s views where consensus and agreed goals are the norms rather than exception.

How do we know if diversity of opinions is the best way to learn under a networked learning ecology (or with internet)?

How do we know if diversity of opinions is the best way to learn, full stop?  Does learning and knowledge [always] rest in diversity of opinions?  Especially when you consider the traditional working environment is more about groups and teams working towards agreed goals.  Again, it depends on context.  Are we discussing facts or ideas, for instance?

What are your outstanding questions?

Continuing from the previous section, I'm curious as to what a connectivist learning design would look like for a course teaching a harder science, such as physics, chemistry or computer science.  I have asked George this question in an Elluminate session, but his response, at least for me, did not resolve my dilemma – how do I apply this theory to more diverse contexts?  Learning isn't always about sharing opinions.  Many of these disciplines are objective – a solution is either right or wrong.  The value of opinion (in my opinion) is significantly lower than in topic areas that are more culturally influenced, such as education – softer sciences, if I may, just as an example.