
Posts Tagged ‘Instructional Design’

Does an approach to instructional design and technical communication that minimizes the amount of content in favor of its usefulness always work better than a more comprehensive or “systems” approach?

Well, it depends. It depends on the user, the medium and/or modes of delivery, and, in some cases, who is authoring the content or instruction. Generally, a minimalist approach to instructional design and end-user documentation focuses on the novice user of a technology or tool. Complex task domains and power users may not be the ideal audience for a minimalist instructional design and documentation approach.

Expert users often require richer, more scenario-driven content than a minimalist approach can (or should) provide. Understanding the level of task the audience needs represented is the key to writing documentation and instructions for expert users.

For instance, training and/or documentation on how to use a new feature of a surgical device may require more than the step-action, task-based approach commonly used in minimalist systems documentation. The surgeon needs to know the context in which the device should be used, what a “successful” use of the device looks like, and, most importantly, what the consequences of an error might be and how to recover from it.

Since their goals are well defined, expert users want to spend little time reading procedural information and more time working with the software (or other tools). A key minimalist guideline is to allow experts to avoid excessive reading. Providing directions they can pursue, rather than step-by-step instructions, takes advantage of their interest in exploratory learning.

Instructional designers and technical writers tread a very fine line in looking for the right approach, one that will enable the best performance for the right user at the right time. Guidelines for developing instruction for novice users are numerous, but they are few and far between for developing instruction for expert users. Barbara Mirel (User Experience and Usability lead at the University of Michigan National Center for Integrative Biomedical Informatics) lists five themes that help lead instruction for complex tasks away from the conventional:

  • Develop rich scenarios about activity in context rather than narrow scenarios about unit tasks.
  • Build interactivity into instruction instead of presenting, for example, view-only semantic maps and graphic browsers.
  • Provide multiple cases that are thematically linked and not just single cases (elaborated examples).
  • Bring misconceptions to the surface and examine them as part of instructing users in detecting, diagnosing, and recovering from errors.
  • Develop multiple analogies, metaphors, and examples that mutually support a single point or purpose and not merely one analogy, metaphor or example per point.

Read Full Post »

Have you ever read a user manual or training manual cover to cover? Very few users of technology manuals, or of any instructional artifact, read from start to finish or follow a linear, step-by-step path through a document.

Human-computer interaction (HCI) and technical communication research has consistently shown that users hunt and gather information as they go rather than work through supporting materials in a linear fashion. Still, most user manuals and software training continue to follow a “systems” approach in which every feature and function is documented, whether anyone will actually use it or not.

As technical writers, instructional designers, and digital designers, we can help users more if we provide them with less. How? I advocate a minimalist approach to design and instruction, based on the notion that users need useful, but not comprehensive, information to learn.

Articulated by former IBM researcher John Carroll, the principles of minimalism were first developed to help novice users reach competency faster:

“Our strategy in developing training designs was to accommodate, indeed to try to capitalize on, manifest learning styles, strategies, and goals…we became committed to minimizing the obtrusiveness to the learner of the training material –hence the term minimalist.” (Carroll, 1990, p. 7)

Three key aspects of the minimalist instructional approach are:

  1. Allow learners to start immediately on meaningfully realistic tasks
  2. Reduce the amount of reading time and other passive activity in training
  3. Help to make errors and error recovery less traumatic and more pedagogically productive

Carroll’s research (along with that of Janice Redish and Jo Ann Hackos) has determined that users are “reading to learn to do” and want immediate opportunities to act, not to read about how to manipulate the tools that will get them there. Designing usable content requires a constant effort to balance the learner’s desire for knowledge with the learner’s desire to accomplish the task at hand. The priority in designing minimalist instruction is to invite users to act and to support their action.

How do practitioners make this active learning approach work in their designs?

To design minimally we need to know the maximum about our users: 

  • Are they novice, intermediate, or expert users?
  • Do they have any preconceived notions about the tasks or outcomes of those tasks?
  • What previous experience do they bring to the tool, interface or the instruction?
  • What can we determine about the users’ motivation for using the technology and taking the training or reading the documentation? 
  • What errors are users likely to make in the use of a tool or process? 
  • How can the designer best help them quickly recover from an error and learn from that mistake to become a “better” user?

A minimalist approach requires a significant investment of designer/writer input and time in the development process, the motivation (and the freedom) to move beyond standard audience analysis techniques, and a willingness to advocate for instructional materials that are more useful than they are “complete”. Practitioners often run into resistance to a technique that calls for giving users incomplete information, documents real tasks rather than system features, and presents tough choices about how and when to integrate comprehensive documentation with other kinds of support.

Next post: Is a minimalist approach to technology instruction always the right approach?

Read Full Post »

As I continue to complete a variety of eLearning projects, I often look back and think about how successful those projects were. My tendency is to focus on issues like:

  • Did the client want to strangle me during the project or at the end, did I get a group hug?
  • Did we conduct a few more (e.g., 14) reviews than planned (e.g., 3)?
  • Once completed, did my manager use the term “red” when describing the project’s margin?
  • How many team members had to take PTO for PTSD as a result of the stress?
  • Once completed, did the client’s e-mail to me include words like, “wonderful, effective, highly-interactive, etc.” to describe the course or words like, “boring, confusing, problematic”?

All of the comments above raise the question, “How do you measure success on an eLearning project?” And second, does the learner matter? Obviously, the latter is meant to be a rhetorical question to which you should respond, “Duh! Of course the learner matters.” However, my bulleted questions above point to a common tendency: measuring the success of the project rather than of the learning that takes place as a result of the project.

I’d take a wild guess that most companies can show you a table that displays the margins they have achieved on various projects. But how many even track quantified measures of learning that occurred as a result of those projects and share that with everyone? My guess is not many.

In an article published back in 2003 by the eLearning Guild (http://www.elearningguild.com/pdf/2/110303mgt-h%5B1%5D.pdf), the author argues that it’s not how often you train, it’s how well. To measure “how well,” he suggests using Kirkpatrick’s Four Levels of Learning Evaluation:

  1. Level One – Reaction. Did they like it?
  2. Level Two – Learning. Did they learn?
  3. Level Three – Behavior. Did they use it?
  4. Level Four – Results. Did it impact the bottom line?

As I think back on all of the eLearning projects I’ve managed, I’m fairly sure I would not give myself an “A” for relying on Kirkpatrick’s levels when measuring their overall success. But in order to conclude that those projects had been successful, I believe I should have.

Now, let me return to the implications of the title of this post. How should success be measured? And does the learner matter? Better yet, how do you currently measure success on your projects? And in those measurements, does the learner matter?

Read Full Post »

I was part of a discussion not too long ago where a group was trying to decide what would be the next course to develop in a curriculum series.  It was a great conversation with equal representation from the learning world and the subject matter experts who were going to be directly impacted by this training program.   Both sides were very passionate about the issues as they saw them.   We were limited on time and certainly weren’t lacking ideas or problems to resolve; what we were lacking was a way to get at all the information we needed in a very short time (2 hours) and a common ground to work from that wasn’t too learning-centric.

Here’s what we did: in order to put some boundaries around the conversation and make the best use of everyone’s time, we flip-charted a T-account and labeled one side Current State and the other side Future State. (We had done a similar thing in an activity in a leadership class we designed.) The goal was to take all of the conversation points and move them into one category or the other. We started rewriting the notes and comments into this format and soon realized that the Current State side was overloaded. In hindsight, I think this is what typically happens: we begin the Front-end Analysis focusing on why a client needs a learning solution or needs to make an existing one better. What really made the lightbulbs go off for the group, after they saw the heavily burdened Current State side, was that we were all in that room to find a solution, which meant equal time needed to be spent focusing on Future State needs and desires. Once we had the visual T-account in place, I noticed that the rest of the Current State issues that were discussed were followed by a Future State statement. We had a process in place, we all saw structure and value, and we all saw a clear path for next steps.

So, to bring this around to instructional design-speak: we identified the current state, started the process of defining the future state, and then were able to see the “gap”. The gap is where the learning needs to be focused; it is the place where we build the skills and abilities that are going to get us from here to there. Can this kind of informal gap analysis be done in every learning solution design process? Is it always necessary? I might argue “yes” to both questions, based on the outcomes we experienced in this particular situation.
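For readers who like to see the idea in more structural terms, the gap can be pictured as the difference between the two columns of the T-account: whatever the future state calls for that the current state does not yet provide is where the learning should live. The short Python sketch below is purely illustrative; the capability statements and the choice to treat each column as a simple set are my own assumptions for the example, not artifacts from the actual session.

    # Illustrative sketch only: treat each side of the T-account as a set of
    # capability statements (the entries below are invented for the example).
    current_state = {
        "feedback given ad hoc",
        "no shared coaching vocabulary",
        "annual performance reviews only",
    }

    future_state = {
        "feedback given ad hoc",              # unchanged, so not part of the gap
        "shared coaching vocabulary in use",
        "quarterly development conversations",
    }

    # The "gap" is everything the future state needs that the current state
    # does not already cover; this is where the learning solution focuses.
    gap = future_state - current_state

    for need in sorted(gap):
        print("Design learning toward:", need)

The only point of the sketch is that the gap falls out naturally once both columns are made explicit, which is exactly what the flipchart did for us in the room.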

Read Full Post »