Docear on macOS: Navigating the Apple Java Nightmare

So this blog post will be brief, as I suppress my disdain for Apple’s attitude towards Java now that, thanks to their App Store ecosystem, they no longer need it to survive.

This blog post documents how I was able to get the Docear mind-mapping software working on OSX El Capitan (10.11).

Sadly, many Java applications which should ‘run anywhere’ simply don’t anymore on the Mac.  Apple abandoned their own internal version of the JRE bundled with the OS, deferring support to Java’s present owner, Oracle.  Oracle, too, has contributed to the nightmare by providing little assistance in making the transition seamless.

After installing Docear on my Mac, I attempted to start it via the icon in the Applications folder in the usual way, only to find it bounce in the dock once and disappear.  How rude, I thought.  So I resorted to the command line (gotta love UNIX-like desktops).  When I attempted to start it there, I was presented with this ugly error message:

$ /Applications/
JavaVM: Failed to load JVM: /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/bundle/Libraries/libserver.dylib
JavaVM: Failed to load JVM: /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/bundle/Libraries/libserver.dylib
JavaVM FATAL: Failed to load the jvm library.
[JavaAppLauncher Error] JNI_CreateJavaVM() failed, error: -1

Just lovely.  I thought I was actually running the Java JRE 1.8 that I had downloaded from Oracle.  It seems there were other versions lurking on my system.

I’ll spare you all the drudgery of diagnosing this issue and coming up with a solution.  I will tip my hat to Oliver Dowling and his blog post, which eventually led me to the solution below.  One thing I did learn is that creating a symbolic link for libserver.dylib did not work for me.  I had to create a hard link instead.  I suspect the JRE stub binary (FreeplanJavaApplicationStub) was checking for the existence of a ‘normal’ file rather than ‘a’ file.
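If you’re curious why the distinction matters, here is a quick throwaway demonstration (run in any scratch directory) of how a file-type check can reject a symbolic link but accept a hard link:

$ touch real
$ ln -s real soft     # symbolic link
$ ln real hard        # hard link
$ ls -l soft hard     # 'soft' is listed with file type 'l'; 'hard' is indistinguishable from a regular file

A launcher stub inspecting the file type without following symbolic links (an lstat-style check) would reject soft but happily accept hard.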

Anyway, here are the goods to make Docear work on OSX 10.11 (and hopefully other versions).  Be sure to substitute the Java version numbers in the file paths with the version of Java installed on your system.

$ cd /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home
$ sudo ln -s jre/lib bundle
$ cd bundle
$ sudo mkdir Libraries
$ cd Libraries
$ sudo ln /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/jre/lib/server/libjvm.dylib libserver.dylib
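To sanity-check the result before relaunching Docear (assuming the app is installed at /Applications/Docear.app – adjust to suit your installation):

$ ls -l libserver.dylib    # a link count of 2 confirms the hard link took
$ open /Applications/Docear.app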

Hopefully after executing these commands, you will be able to use Docear.  Good luck!

Meeting in the Middle: How to Manage Change in Universities

This blog post is rather small, but is quite significant, to me at least.

I’ve been sent the following paper.

Leadership is a critical element in change management in universities and can be viewed alongside management as distinct but complementary elements in the change process (Ramsden, 1998). Leadership, in Ramsden’s view, is about movement and change and has a long and rich history. It refers to individuals or small groups, is largely independent of positions, and relies on the skills of individuals, not formal power relationships.

I always considered management and leadership as synonymous.  However, this idea from Ramsden has really shifted my thinking.  I never really considered myself a leader, certainly not in the context of my role at my institution.  But in combination with my team colleagues, that is what we have become.  So how does one lead from a position of no authority?

On the other hand, management is about ‘doing things right’ and is undertaken by people in formal positions responsible for planning, organizing, staffing, and budgeting. It is a relatively recent concept generated within the contemporary bureaucracy.

‘Doing things right’ just makes me grin. I’d opt for ‘doing things well’, which recognises that there is more than one way to approach things, and that there really is no silver bullet in environments of complexity, such as higher education.

Nevertheless, this distinction between leadership and management is quite fascinating.  Rick et al. continue:

In a similar vein, Kotter (1990) distinguishes between leaders who set direction, align people and groups, and motivate and inspire to create change, and managers who plan and budget, organize and staff, control, and solve problems in order to create order. To many staff, universities have sacrificed leadership in adopting a managerial approach to teaching and learning. In the top-down approach to change management, the leaders are senior management, using their management positions to drive change through organizational policies and restructures.

This matches my experience and the accepted practice – that leadership can only come from someone in a position of authority.  But apparently, this isn’t true.

In the bottom-up approach, leadership comes from individual staff who are personally inspired to make changes and to inspire others to follow their lead.

On reflection, this is something that my colleagues and I have done.  Not intentionally, at least in the beginning, but our working and collaborating with academics at the coalface has generated somewhat of a following.

In the middle-out approach that we have observed at Murdoch University, middle managers became leaders and, through a combination of personal inspiration and policy based on emergent practice, have changed the university environment sufficiently to force both high level policy change and change in practice among teaching staff. Leadership in the middle-out approach is exhibited through problem solving and facilitation – that is, getting the job done and simplifying tasks required of those at the chalkface.

This is quite interesting, as it differs somewhat from my experience.  Certainly, there has been leadership from middle management.  In fact, much of the institutionally impactful work I have been involved in was only possible through the leadership of middle management.  That arose through their insights into what we were doing, and the value it was offering.  They supported us by championing our work at higher levels and attempting to create a facilitative environment for us to scale the work we were doing.  This is where the bottom-up approach often fails – without middle management, it is very difficult to make meaningful contributions beyond small coalface groups, due to the very nature of the entrenched SET mindsets of higher education institutions.

The key point in terms of my experience is that middle management take their lead from the coalface.  Without bottom-up initiation, I’m not sure middle management are any the wiser – traditionally they are still too far removed.  Effective middle managers are able to see meaningful contributions made bottom-up, and look for opportunities to scale them.  Of course, this butts up against the Reusability Paradox – the more you attempt to broaden reach, the less effective it will become.  In this way too, bottom-up initiatives can lead, and scale to the levels that make sense.



The reusability paradox – WTF?


The reusability paradox.  How can reusability be bad?

When first presented with this concept last year, I must admit I really did struggle with it.  As a techie, every fibre of my being compels me to focus on reuse.  Hence, the paradox.  After some weeks of struggling with the reusability paradox, it did start to make some sense, emphasis on ‘some’.

I have recently revisited this concept, both in discussion with my (to-be) PhD supervisor and in my day-to-day work as an Educational Developer/Lecturer/Educational Technologist.  This revisit has prompted this blog post as a way of recording some connections I have made to real-world examples of this phenomenon, and how they impact my thinking about technology (re)use.  This thinking is far from crystallised.

David Wiley explains the reusability paradox in the context of reusable learning objects, and more broadly, the open content movement.  When this concept was initially presented to me, it was already positioned in terms of technology.  I find it easier to start with the original context in learning design.

What is the reusability paradox?

David explains it quite succinctly as:

A content module’s stand-alone pedagogical effectiveness is inversely proportional to its reusability.

He explains that the more contextualised a learning object is made, the more meaningful it becomes to that context.  However, it also means the learning object becomes less reusable to other contexts.  We have a trade-off situation – effectiveness (in learning) vs. efficiency (in scalability). David concludes:

It turns out that reusability and pedagogical effectiveness are completely orthogonal to each other. Therefore, pedagogical effectiveness and potential for reuse are completely at odds with one another, unless the end user is permitted to edit the learning object. The application of an open license to a learning object resolves the paradox.

I don’t think an open licence alone will resolve the paradox, but that is a discussion for another post.

The reusability paradox in the wild

So enough of abstract concepts – how does the reusability paradox play out in the wild and in other ways besides learning objects?

“I see dead people the reusability paradox.”

I often see the reusability paradox when working with lecturers – conceptually the same as David Wiley explains it, but at a higher level.  My particular experience relates to the contention around reusing units of study between different awards/degrees.  This is pretty typical in the STEM areas – at my institution we refer to them as service courses (units).  I work with a science school, and a key foundation unit of study taught from the school is anatomy and physiology.  There would be a dozen or more degrees that require students to have a sound knowledge of this area.

Conventional management wisdom seeks to reuse anatomy and physiology units across health-related degrees.  This is efficient use of resources, right?  And “why re-invent the wheel?”

But before I explore those questions, let’s first take a step back for a moment.

The key criterion for reuse is applicability to other contexts.  If there is sufficient overlap or congruence with another context, then the reusability factor could be considered high, and thus worthy of reuse.  Learning, however, is very contextual, particularly when you factor in, as David does, the underpinnings of constructivist learning theory.  Learners construct new knowledge upon their own existing knowledge.  This is very individualised, based on each learner’s past experiences and ways of thinking.

Learning designers have some tricks to help deal with such diversity, such as researching your cohort, conducting a needs analysis, and ultimately categorising learners and focusing on the majority.  Clearly, this is flawed – but this is how massification of education works.  For instance, if you are preparing a unit of study for nursing students, then you can make some reasonable assumptions about those students’ motivations (i.e. they want to become a nurse); their prior formal learning (i.e. previous units studied within a structured nursing curriculum); and even about smaller groups, such as pathways to study (i.e. whether they were enrolled nurses (ENs) or school-leavers). These assumptions of course aren’t always correct.  Nevertheless, the key point is that this unit of study is reused by all nursing students studying for the Bachelor of Nursing degree.  A more or less reasonable trade-off between effectiveness and efficiency.

So let’s return to the example of an anatomy and physiology unit of study.  In this instance, we see different discipline areas, albeit all health related, attempting to reuse a unit of study.  Despite that, a paramedic student’s needs aren’t the same as a physiotherapy student’s, or a medical science student’s.  And while some disciplines hail from within the same school, other disciplines are situated elsewhere within the organisational structure.  Now, consider the diversity of the cohort on top of that.

So to cope with this type of diversity, I typically see three approaches:

  1. Make the unit of study as abstract (decontextualised) as possible, making no assumptions about learners or their backgrounds, and “teach the facts”.
  2. Design the unit to cope with the highest represented context (i.e. the discipline with the most students).
  3. Design the unit of study to address multiple contexts, in an attempt to make it meaningful to multiple disciplinary groups.

In other words: make it meaningful for no-one; make it meaningful to the biggest group, and nobody else; or try to make it meaningful for everyone.

Approach 1 is obviously ineffective, especially considering constructivist thinking.  You end up with students asking “why do I need to know this?”, or complaining “that course was so dry and boring.”

Approach 2, while not quite as flawed as Approach 1, can be less than ideal, particularly when the highest represented group is small compared to the entire cohort.  In such cases, the other groups feel marginalised: “I want to be a Paramedic, not a Physiotherapist.”

Approach 3 can also be ineffective, because you can end up with a unit of study that is incredibly complex.  This group of students does this, that group does that.  As the lecturer, you have to manage the mixture.  The students too can become confused about requirements. You can also run into “equity” type policy constraints, such as “all students must do the same assessments.”  This is an important point.  If you end up with such complexity, you really have to ask the question, “why not just have separate units of study?”

But solving this challenge isn’t the focus of my blog post.

The Reusability Paradox as it Applies to Education Technology

So does the concept translate to technology?  Yes it does!  And similar issues arise as a result.

Recall the three approaches I see people use to deal with the challenges of reuse across multiple contexts:

  1. Make abstract
  2. Contextualise for the largest group
  3. Contextualise in multiple ways for multiple groups

Let’s consider Approach 1.

David Wiley says of the reusability paradox:

The purpose of learning objects and their reality seem to be at odds with one another. On the one hand, the smaller designers create their learning objects, the more reusable those objects will be. On the other hand, the smaller learning objects are, the more likely it is that only humans will be able to assemble them into meaningful instruction.

I think this statement has some “translatability” to an education technology context as:

On the one hand, the smaller developers create their learning technology tools (e.g. programming libraries rather than complete systems), the more reusable those tools will be. On the other hand, the smaller learning technology tools are, the more likely it is that only developers (and not designers) will be able to assemble them into functional learning technologies.

David Wiley also says:

To make learning objects maximally reusable, learning objects should contain as little context as possible.

To remove context is to make something more abstract – to take away intrinsic meaning or specific function.  While this indeed makes things more reusable, it also requires re-contextualisation before the thing is useful again.  In the context of technology, that abstraction leads to dependence on the developer.

Let’s skip to Approach 3.  With this approach, we end up with technology that attempts to do everything for everyone.  These technologies become so complicated to use that people simply don’t use them.  My favourite example of Approach 3 is the Moodle Workshop activity, billed as a “powerful peer assessment activity”.  I consider myself to have a reasonable grasp of technology, and yet after 45 minutes of tinkering with the Workshop activity in Moodle, I gave up.  I have only seen one person at my institution use it.  It has so many options – too many options – because it tries to account for all the different ways one might attempt to embed peer assessment into a course.

So what about Approach 2?  We reuse a learning technology without change – meaning it is focused on the majority of requirements (however that might be determined).  This is typical of COTS (commercial off-the-shelf) solutions.  It inevitably leads to functional gaps – “the system does this, but I want to do that.”  If a gap is substantial, it can lead to workarounds.

Does technology need to be reusable?

This is where I struggled last year with the reusability paradox.  If you can’t reuse a technology, then isn’t that a serious limitation?  Management are constantly looking to replicate successes – “This worked so well, so how can we use this in other areas?”

When I am creating/adapting/augmenting technology for others, I have to demonstrate “bang for buck” in terms of my time invested.  Does what I create at least have to pay for itself in affordances?  I normally look for economies of scale, and the obvious way is through reuse – it is usable by X number of people.  Management/decision-makers get this – easy. However, technology can offer other economies.  For instance, depending on the technology, it may instead allow a specific group of people to do something much better, quicker, or cheaper, or if it’s very innovative, something they couldn’t do before.  But that something might be very specific – so specific that it isn’t very reusable, and is limited to a small audience.  Yet, if it still yields a net gain, is that bad?

What if a technology is so specific, it’s designed for just one person – yourself?


At some stage in our lives, we have all had to engage with some form of workaround to get from A to B – not just in terms of technology, but in life in general.

If you create a workaround, does it need to be reusable?  Perhaps not.  But what if you want it to be?  How can you go about it?

This is where my time (and thinking) ends for now.


Moodle Activity Viewer – in the cloud?

So what is MAV?  An introduction to MAV written in 2013 is available.  Features have been added since, but the core concept remains unchanged.

In a nutshell, it allows you to visualise student click activity within your Moodle course site using a heat map, colouring links lighter or darker according to the number of times they have been accessed.
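To illustrate the idea (purely a conceptual sketch of the colouring logic, not MAV’s actual code), imagine a hypothetical file clicks.txt with one “link count” pair per line – bucketing the counts into a colour scale is essentially what a heat map does:

awk '{
  if ($2 > 100)     { colour = "#d73027" }   # hot: heavily clicked
  else if ($2 > 10) { colour = "#fee08b" }   # warm: moderately clicked
  else              { colour = "#1a9850" }   # cool: rarely clicked
  print $1, $2, colour
}' clicks.txt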

A very early version of a home page has also been established for MAV.  This will expand in the weeks to come.

The source code for the enterprise version is also available for download from github.  This version must be installed by your IT folk onto one of their servers for you to use it with your institution’s Moodle.

So is there a ‘personal’ version available?  What would that look like?

Could anyone using Moodle (and Firefox for now) use it, without requiring your IT department to install a plug-in for your Moodle server?

Would you be interested in using MAV this way?

What are the ethical challenges to overcome?

I’d love to hear your thoughts/suggestions in the comments below.

Being BAD at task management

I co-authored a paper with a colleague, David Jones, which was published at the ASCILITE2014 conference held in Dunedin, New Zealand.  The paper was titled Breaking BAD to bridge the reality/rhetoric chasm.  The reality/rhetoric chasm is best expressed through the following metaphor, in the words of Professor Mark Brown:

E-learning’s a bit like teenage sex.  Everyone says they’re doing it but not many people really are and those that are doing it are doing it very poorly. (Laxon, 2013, n.p.)

A central tenet of the paper is the following argument about this chasm:

Our argument is that the set of implicit assumptions that underpin the practice of institutional e-learning within universities (which we’ll summarise under the acronym SET) leads to a digital and material environment that contributes significantly to the reality/rhetoric chasm. The argument is that while this mindset underpins how universities go about the task of institutional e-learning, they won’t be able to bridge the chasm.

Instead, we argue that another mindset needs to play a larger role in institutional practice. How much we don’t know. We’ll summarise this mindset under the acronym “BAD”.

A comparison of SET and BAD is provided in the following table:

What work gets done?
  SET: Strategy – following a global plan intended to achieve a pre-identified desired future state.
  BAD: Bricolage – local piecemeal action responding to emerging contingencies.

How ICT is perceived?
  SET: Established – ICT is a hard technology and cannot be changed. People and their practices must be modified to fit the fixed functionality of the technology.
  BAD: Affordances – ICT is a soft technology that can be modified to meet the needs of its users, their context, and what they would like to achieve.

How you see the world?
  SET: Tree-like – the world is relatively stable and predictable. It can be understood through logical decomposition into a hierarchy of distinct black boxes.
  BAD: Distributed – the world is complex, dynamic, and consists of interdependent assemblages of diverse actors (human and not) connected via complex networks.

The paper uses the establishment of the Moodle Activity Viewer (MAV) at my institution as an example of using BAD principles to improve e-learning.  However, that is not the focus of this blog post.  As a means of improving my own conceptions of BAD, SET and their interplay, I have begun reflecting on how I have unwittingly applied BAD principles to my other endeavours.  A recent example relates to my use of task management software, which is detailed in a recent post but which I’ll summarise here for brevity.

I recently switched to a new task management system called Omnifocus.  Omnifocus provides the ability to select what it calls perspectives, which show your tasks in different ways according to your workflows and context.  One such perspective, new to the recently released OSX version, is called the Forecast perspective.  For the coming days, it shows which tasks are due to start (those deferred to a later date when entered) and which tasks are due to be completed.  This information is then augmented with appointments found in your OSX calendar application.  It’s a lovely way to see what you need to do alongside your existing time-based commitments, to help plan to get things done.  But there was a problem.  Any deferred task that was not completed on the date it was deferred to would not shift to the next day in the Forecast perspective.  Instead, it simply disappeared from the perspective entirely until its due date.  Through an online search to see if I had misconfigured my database, or if anyone else was as baffled as I was, I came across the following entry on the Omnifocus discussion boards:

I’m going to submit this as a feature as well, but I figure I’ll post it here to see whether it can get more traction. My issue is this:

If I have Deferred something to a start date in the future, odds are I probably think it’s pretty important that it starts on that day.

However, what happens if that day comes and goes and I didn’t start the item? In Forecast, the item disappears. That doesn’t make any sense to me. Forecast shouldn’t only be showing me the “past due” things I assigned dates to, but things I didn’t touch that I was supposed to.

Seems I was not alone in my frustration.  The forum continued with discussion of various work-arounds, none of which I found particularly suitable to my context.  So I sought other possibilities to resolve my problem.

Omnifocus makes use of the OSX AppleScript framework, which provides a high-level scripting language that can be used to customise behaviour and automate tasks, making things work in ways not originally conceived by the software’s creators.  A handful of contributors have created some very useful AppleScripts for Omnifocus.  One such contributor, Curt Clifton, has created a script that identifies projects with no next action to perform, suggesting that the project may have stalled.

The significance of this script is that I have been able to adapt it to solve my problem with deferred tasks disappearing from the Forecast view when they are not completed on the defer date.

Returning to the principles of BAD, there are some choices made by the Omnigroup company that allow customers to ‘break bad’ by customising their Omnifocus product to their own ends.  In the absence of the AppleScript integration in Omnifocus, there would be little hope other than waiting for Omnigroup to implement a feature to address the limitation.

The integration of the AppleScript framework into Omnifocus allows Bricolage to occur.  It offers Affordances such that I and others are able to solve problems locally and contextually, according to our own specific needs and wants.  The creation and use of this bricolage is also Distributed – Omnigroup does not have direct control or management over the extensions that can be applied.  Things can be developed and/or shared in a distributed fashion according to the needs of individuals.

While the use of the AppleScript framework won’t solve everyone’s issues and challenges, like many products that do integrate an AppleScript dictionary, Omnifocus does shift away from the traditional SET mindset of software development.

How to Install Backintime Backup Software on CentOS/RHEL/Scientific Linux 7


Hopefully the title of this post is explanation enough.

Backintime is a neat backup solution that mimics, in some ways, the abilities of OSX’s Time Machine backup system.  At predetermined time intervals, Backintime will sweep configured directories on your computer and back up only the differences since the last sweep.  By hard linking identical files, it makes efficient use of storage resources on your backup destination drive.  Unlike the OSX Time Machine solution, it does not hard link directories, as only Apple’s HFS+ filesystem supports hard-linked directories.  This makes Backintime portable to Unix-like operating systems using a variety of filesystems.
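For the curious, the hard-linking mechanism can be illustrated with plain rsync, which is what Backintime builds upon (the snapshot paths below are made up for illustration):

# 'previous' is the last snapshot; files unchanged since then are hard linked
# into 'new' rather than copied, so they consume no extra space
rsync -a --link-dest=../previous /home/user/ /mnt/backup/new/

Note that the --link-dest path is interpreted relative to the destination directory.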

The reason for this post is to document some trickery necessary to make Backintime work on our favourite North American Linux vendor’s enterprise operating system – the derivative of which I use being CentOS 7.  CentOS 7, like its cousins in the Red Hat family, only includes version 2 of the Python interpreter, while Backintime is written in Python 3.  I have had only a little experience with Python, and it does have some great features (built-in multi-threading), but making the two versions incompatible is such a nuisance.  Whinge over – the trick to getting Backintime working on CentOS 7 is to install a third-party Python 3 interpreter.  The following instructions achieve this end.


Download Backintime from their website, and follow the installation instructions from the README file.

Next, you will need to install a 3rd-party YUM Repository called

yum localinstall
yum install python33
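At this point you can verify that the software collection works (it should report a Python 3.x version):

scl enable python33 -- python --version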

Then create a wrapper script so that the Python 3 interpreter can be invoked with the command python3, by saving the following shell script as the file /usr/local/bin/python3:

#!/bin/bash
scl enable python33 -- python "$@"
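Don’t forget to make the script executable:

sudo chmod +x /usr/local/bin/python3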

Then compile and install Backintime from the downloaded source:

./configure --python3 && make && make install

Omnifocus2 – A tale of the disappearing deferred tasks

Sadly, due to work commitments, I have neglected my blog over the past two years.  I plan to reinvigorate things again, starting with this post, which I wrote back in August 2014 but never actually pressed the ‘Publish’ button on.

I have recently switched from my existing Mac-based task management software, Things, to a new product called Omnifocus.  Without getting into the details that led to this switch, suffice it to say that Things development moves at a slower pace than I am satisfied with.  I have been trialling Omnifocus for about a month, and while it does have some extra features over Things, it of course has its own limitations.

Understandably, creating task management software that will yield to the demands of a diverse user base is nigh impossible.  Omnifocus is no exception. One demand I have of such software is to be able to better plan/schedule my projects and tasks into a calendar.  Omnifocus includes what is called a Forecast perspective on your projects and tasks.  It is a somewhat calendar-like view of when tasks are scheduled to begin (those deferred to a later date), and when they are due, alongside my daily appointments as gleaned from my Mac OSX Calendar.

This I thought was wonderful, as I could then plan out my days and weeks according to the appointments in my calendar and the tasks I had to complete.  But then an unforeseen problem arose – one that afflicted more than just myself, as explained in this excerpt from the Omnigroup discussion forums:

I’m going to submit this as a feature as well, but I figure I’ll post it here to see whether it can get more traction. My issue is this: If I have Deferred something to a start date in the future, odds are I probably think it’s pretty important that it starts on that day. However, what happens if that day comes and goes and I didn’t start the item? In Forecast, the item disappears. That doesn’t make any sense to me. Forecast shouldn’t only be showing me the “past due” things I assigned dates to, but things I didn’t touch that I was supposed to.

If a deferred task is not done on the day it was deferred to, the following day it simply disappears from the Forecast view altogether.  Not very helpful when forecasting. The discussion thread continues with contributors arguing the merits of changing the software to overcome this limitation, along with a swathe of work-arounds, none of which I found particularly satisfactory to my needs and context. This got me thinking… what if, each morning, there were a way to select all tasks that were incomplete and had a defer date earlier than today?  I could then update the defer date on all these tasks to today, and voilà, they would reappear in my Forecast perspective.

But how would I do this? Omnifocus makes use of the OSX AppleScript framework, which provides a high-level scripting language that can be used to customise behaviour and automate tasks, making things work in ways not originally conceived by the software’s creators.  A handful of contributors have created some very useful AppleScripts for Omnifocus.  One such contributor, Curt Clifton, has created a script that identifies projects with no next action to perform, suggesting that the project may have stalled. The significance of this script is that I have been able to adapt it to solve my problem of deferred tasks disappearing from the Forecast view when they are not completed by their defer date.

If you are frustrated by this limitation of Omnifocus, and you don’t mind losing the originally specified defer dates on your tasks, then my DeferOldToNow AppleScript might be of use to you.  What the script does is recursively traverse all of your active projects, looking for incomplete tasks with a defer date older than the present date.  On finding each such task, it sets the defer date to the present. If you are interested in the AppleScript code, or would like to contribute updates, the source is available on github.
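For a flavour of what the adaptation boils down to, here is a minimal sketch runnable from the shell via osascript.  I’m assuming here that OmniFocus 2’s AppleScript dictionary exposes flattened tasks with completed and defer date properties as shown – the actual script on github does considerably more housekeeping (such as restricting itself to active projects):

osascript <<'EOF'
tell application "OmniFocus"
	tell default document
		set rightNow to current date
		repeat with t in flattened tasks
			-- incomplete tasks whose defer date has already passed
			if completed of t is false and defer date of t is not missing value and defer date of t < rightNow then
				set defer date of t to rightNow
			end if
		end repeat
	end tell
end tell
EOF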

When the script completes, visiting your Forecast perspective will once again reveal all tasks that were past their defer date and incomplete.  They appear under “today”.  Once they reappear there, you can defer them again to some time in the future if you need to, but at least they are visible and you are reminded that they haven’t been done. I hope this AppleScript proves useful to other users of Omnifocus.  If it does, give me a shout out in the comments section.  Happy Omnifocussing!