Research on Micro Learning vs Long Form


In micro learning, the content is not only short and compelling; the quizzes also come quickly, demanding frequent cognitive engagement.

Distributing vs. Blocking Learning Questions in a Web-Based Learning Environment by Felix Kapp, Antje Proske, Susanne Narciss, and Hermann Körndle, Journal of Educational Computing Research, January 1, 2015

Why Microlearning Drives Over 20% More Information Retention Than Long-Form Training by Lenny DeFranco, 2015

Researchers at Germany’s Dresden University of Technology studied the efficacy of web-based micro learning by examining learning outcomes across three different quiz approaches for a course on intellectual property law. As described by reviewer Lenny DeFranco:

To do this, the researchers divided a 16-chapter online text into three sizes. The “fine-grained” group of learners answered a learning question after reading every single chapter; the “medium-grained” group answered four questions after reading four chapters; and “blocked” learners got eight questions after each half of the digital text. After all of this, the participants all took the same multiple choice test covering the whole lesson.
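The three quizzing schedules can be sketched in a few lines. This is my own illustration of the design described above, not the researchers' code; the function name and structure are assumptions for clarity. Each condition asks one learning question per chapter, so all three groups see the same 16 questions in total; only the spacing differs.

```python
def quiz_schedule(total_chapters: int, chapters_per_quiz: int):
    """Return (quiz_after_chapter, questions_in_quiz) pairs for one condition.

    Illustrative sketch of the study design: a quiz appears after every
    `chapters_per_quiz` chapters, carrying one question per chapter read.
    """
    return [
        (chapter, chapters_per_quiz)
        for chapter in range(chapters_per_quiz, total_chapters + 1, chapters_per_quiz)
    ]

fine = quiz_schedule(16, 1)     # 16 quizzes of 1 question each
medium = quiz_schedule(16, 4)   # 4 quizzes of 4 questions each
blocked = quiz_schedule(16, 8)  # 2 quizzes of 8 questions each

# Every condition covers the same 16 learning questions in total;
# only the distribution over the text differs.
assert sum(q for _, q in fine) == 16
assert sum(q for _, q in medium) == 16
assert sum(q for _, q in blocked) == 16
```

The point of the sketch is that the total practice is held constant across conditions; the manipulation is purely where the practice falls relative to the reading.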

Among the 61 participating students, the simple outcome was that the fine-grained students, who took a quiz directly after each chapter, did better than the medium-grained and blocked students, who had four or eight chapters to consume before each quiz. The study controlled for many factors, including demographics, interest level, and both page and quiz reading times. An achievement test at the end using new questions also found that the fine-grained test takers did better.

But in addition to varying the amount of content before each quiz, the researchers also constructed multiple-choice “learning questions” for each quiz that were carefully designed for maximum content processing. The researchers give an example:

A learning question within the web-LE on “Basics of Intellectual Property” could sound as follows: “Which of the following activities does NOT prevent an invention from getting a patent? a) a publication of the invention in a scientific journal, b) a presentation of the invention on a scientific conference, c) a presentation of the invention to a cooperation partner, d) selling of a prototype of the invention.”

More than just a true-or-false test of knowledge acquisition, such learning questions “support the knowledge construction by helping learners with the integration and organization of content and by providing necessary information for an assessment of their own learning progress.”

The researchers compare their results to pre-digital learning studies that found a much weaker difference between fine-grained and blocked quizzing, and they point to areas where additional research is needed to evaluate the differences between e-learning and conventional study.

Jack Powers



2 thoughts on “Research on Micro Learning vs Long Form”

  1. Hi Jack,
    I am deeply interested in assessment tools, and have particularly worked with MCQs and ways to improve them. One way we mentioned in the other class is to include confidence degrees, or degrees of certainty (Leclercq, D.), in each item of the test. Basically, the student has to indicate, immediately after each response, how sure she is that her answer is correct. This article by Belgian researchers Leclercq and Poumay (2006) delves into this idea.

    Here in the US, Case and Swanson (e.g., 2001) have written some of the most consulted works about construction of MCQ items.

  2. Hi, Jack. I’m sure you didn’t intentionally overstate the results of the knowledge test part B. Summarizing the results of the test that came immediately after the learning session, you stated: “An achievement test at the end using new questions also found that the fine-grained test takers did better.”

    However, I take issue with this. The paper explicitly notes that “No differences were found in the knowledge test part B (F < 1).” And really, which would learners or educators prefer: better immediate or better delayed recall? Which is a truer measure of learning?

    You're not alone in interpreting a minor difference in statistical averages—across the fine-, medium-grain, and blocked treatments—as evidential, causal proof. Grovo, a company that specializes in microlearning (apparently) similarly misinterprets the study's results, claiming "short content drives 20% more information retention."

    Really, the emphasis on content size is misdirected. What matters is how the user is forced to interact with the content, as the authors Kapp et al. allude.

    But my real quibble is the omission or obfuscation of the important fact: post-learning achievement was constant across treatment groups. The fine-grained learners could not recall more than the blocked learners. And they certainly did not perform 20% better on a post-learning assessment. Spaced (or distributed) practice, while powerful, requires more time and repetition to show its true potential.


