Week 13 — E-Learning Systems & Educational Technology¶
Course Objective: CO8 — Design and evaluate technology-mediated learning environments, applying instructional design principles and analyzing learning outcomes through data-driven frameworks.
Learning Objectives¶
By the end of this week, you should be able to:
- [x] Trace the historical evolution of e-learning from CBT through AI-adaptive systems
- [x] Compare major LMS platforms (Canvas, Blackboard, Moodle, D2L) on architecture, features, and market position
- [x] Explain SCORM and xAPI standards and articulate why content interoperability matters
- [x] Apply the ADDIE instructional design model to plan a digital learning experience
- [x] Evaluate MOOC business models and the factors affecting completion rates
- [x] Design gamification elements appropriate for corporate or academic e-learning contexts
- [x] Audit an e-learning course against Section 508 / WCAG accessibility requirements
- [x] Interpret learning analytics dashboards and identify at-risk students using early alert indicators
1. E-Learning Evolution: CBT to AI-Adaptive Systems¶
1.1 The Historical Arc¶
E-learning did not emerge fully formed with the internet — it evolved over five decades, each era building on the technological and pedagogical innovations of the previous one.
**The CBT Era (1960s–early 1990s)**

Technology: Mainframes, then personal computers (Apple II, IBM PC)

Characteristics:
- Self-paced tutorials running locally on a computer (no network required)
- Branching logic: quiz answers determined the next content screen shown
- Early AI experiments: the PLATO system (1960) at the University of Illinois pioneered touchscreen interfaces, instant messaging, multiplayer games, and online forums before the internet existed
- Floppy disk or CD-ROM delivery; updating content required physical media replacement

Limitation: Each CBT course was a closed silo; no interoperability, no central tracking, no social learning
**The Web-Based Training Era (mid-1990s–early 2000s)**

Technology: World Wide Web, Netscape, Flash, Java applets

Characteristics:
- Courses delivered through a web browser; internet distribution replaced physical media
- HTML-based content with embedded multimedia (Flash animations were ubiquitous)
- Email and early discussion boards enabled limited asynchronous interaction
- First LMS products emerged: WebCT (1995), Blackboard (1997)
- SCORM standard introduced (1999) to solve the content interoperability problem

Limitation: Still largely page-turning content; limited learner analytics; bandwidth constrained rich media
**The LMS Era (2000s)**

Technology: Broadband internet, Web 2.0, AJAX, video streaming

Characteristics:
- LMS platforms became the institutional standard for course management
- Rich multimedia: video lectures, interactive simulations, virtual labs
- Discussion forums became core to pedagogical design
- Gradebooks, assessment engines, and basic learning analytics
- Mobile-friendly design began emerging late in the decade

Major platforms: Blackboard, Moodle (open source, 2002), ANGEL (later acquired by Blackboard), Sakai
**The MOOC and Mobile Era (2010s)**

Technology: HD video streaming, mobile-first design, social networks

Characteristics:
- Massive Open Online Courses (MOOCs) emerged in 2011-2012 (Coursera, edX, Udacity)
- Peer-graded assessments enabled scale beyond what instructor grading could support
- Social learning features: peer discussion at course scale (thousands of learners)
- Microlearning: bite-sized modules suited to mobile consumption
- xAPI (Tin Can API) introduced in 2013 as a flexible SCORM successor

Disruption: Sebastian Thrun's Stanford AI class (2011) attracted 160,000 students and launched Udacity; a "MOOC revolution" that would replace traditional education was declared, prematurely.
**The AI-Adaptive Era (2020s)**

Technology: Machine learning, NLP, pandemic-driven adoption acceleration

Characteristics:
- AI-adaptive learning: algorithms adjust content, pacing, and difficulty based on individual learner performance
- COVID-19 forced a mass migration to online learning in weeks, compressing a decade of adoption
- Hybrid/HyFlex models: simultaneous in-person and online participation
- Generative AI tutors (GPT-based): personalized explanations, Socratic questioning, writing feedback
- Learning analytics matured: predictive models identify at-risk students early
- Skills-based learning: competency mapping, digital credentials, micro-credentials
1.2 The Pendulum of Educational Philosophy¶
E-learning technology choices are never value-neutral — they embed particular educational philosophies:
| Philosophy | Core Belief | E-Learning Manifestation |
|---|---|---|
| Behaviorism (Skinner) | Learning = behavior change through stimulus-response-reinforcement | CBT branching logic, drill-and-practice exercises, immediate feedback |
| Cognitivism (Bloom) | Learning = information processing and schema building | Organized content hierarchy, worked examples, Bloom's taxonomy-aligned assessments |
| Constructivism (Piaget/Vygotsky) | Learning = actively constructing knowledge through experience | Project-based learning, simulations, collaborative discussion boards |
| Connectivism (Siemens) | Learning = maintaining connections in a networked knowledge ecosystem | MOOCs, social learning networks, curation tools |
2. Learning Management Systems¶
2.1 LMS Architecture¶
A Learning Management System is a software platform that manages, delivers, and tracks educational content and learner progress. The core components of any LMS include:
```text
┌─────────────────────────────────────────────────────────┐
│                   PRESENTATION LAYER                    │
│  Student Browser  │  Instructor Browser  │  Mobile App  │
└────────────────────────────┬────────────────────────────┘
                             │
┌────────────────────────────▼────────────────────────────┐
│                    APPLICATION LAYER                    │
│ ┌────────────────┐ ┌─────────────┐ ┌──────────────────┐ │
│ │Course & Content│ │Assessment   │ │Communication     │ │
│ │Management      │ │Engine       │ │(Discussion,      │ │
│ │                │ │             │ │ Announcements)   │ │
│ └────────────────┘ └─────────────┘ └──────────────────┘ │
│ ┌────────────────┐ ┌─────────────┐ ┌──────────────────┐ │
│ │Gradebook &     │ │Analytics &  │ │Integration APIs  │ │
│ │Reporting       │ │Dashboards   │ │(LTI, REST)       │ │
│ └────────────────┘ └─────────────┘ └──────────────────┘ │
└────────────────────────────┬────────────────────────────┘
                             │
┌────────────────────────────▼────────────────────────────┐
│                       DATA LAYER                        │
│  Relational DB (PostgreSQL/MySQL)  │  File Store (S3)   │
│  Video Platform (Kaltura/Panopto)  │  Search Index      │
└─────────────────────────────────────────────────────────┘
```
2.2 Canvas LMS¶
Canvas (Instructure), launched in 2011, has become the most widely adopted LMS in higher education in the United States. As of 2024, Canvas is used by over 4,000 educational institutions and approximately 30 million learners.
Canvas differentiating features:
- SpeedGrader: side-by-side submission review and rubric-based grading; supports video feedback recording
- Outcomes and Mastery: built-in competency tracking aligned to learning objectives
- Canvas Studio (formerly Arc): integrated video creation, hosting, and in-video quiz insertion
- Commons: shared repository of open educational resources that instructors can import and remix
- Blueprint Courses: master course template syncing to multiple child course sections
- API-first architecture: rich REST API enabling deep integration with other institutional systems
Canvas API example — fetching student submissions:

```python
import requests

canvas_url = "https://your-institution.instructure.com"
api_token = "your_api_token"  # personal access token generated in Canvas account settings
headers = {"Authorization": f"Bearer {api_token}"}

# Get all submissions for a specific assignment
response = requests.get(
    f"{canvas_url}/api/v1/courses/12345/assignments/67890/submissions",
    headers=headers,
    params={"include[]": ["user", "rubric_assessment"], "per_page": 50},
)
submissions = response.json()

# Note: Canvas paginates results; follow the Link response header for further pages
for submission in submissions:
    print(f"Student: {submission['user']['name']}, Score: {submission['score']}")
```
2.3 Blackboard Learn¶
Blackboard, which merged with Anthology in 2021 and now operates under the Anthology name, was the dominant LMS from the early 2000s through the 2010s. Despite losing market share to Canvas and Moodle, it remains widely deployed, particularly in larger universities that made substantial investments in the platform.
Blackboard's evolution challenges:
- Legacy architecture (originally built on an early-2000s J2EE stack) created performance and usability problems
- Blackboard Ultra: a complete UX rewrite launched in 2016; still being adopted by institutions migrating from Classic
- High licensing costs relative to open-source alternatives
- Acquisition history (WebCT, ANGEL, Moodlerooms) created a fragmented product portfolio
2.4 Moodle¶
Moodle (Modular Object-Oriented Dynamic Learning Environment), created by Martin Dougiamas and first released in 2002, is the world's most widely deployed LMS by installation count, with over 400 million registered users across more than 200 countries and territories, primarily because it is free and open source.
Moodle's strengths:
- Zero licensing cost (hosting and customization costs remain)
- Highly extensible plugin architecture (1,900+ plugins in the directory)
- Strong pedagogical philosophy (constructivism-influenced design)
- Active global community; localized in 150+ languages
- MoodleCloud: hosted SaaS option for institutions without IT capacity
Moodle's challenges:
- Default UI has historically been criticized as visually dated
- Customization requires PHP development expertise
- Total cost of ownership (self-hosting, IT staff, customization) can exceed proprietary alternatives
- Update cycle management is complex for highly customized installations
2.5 D2L Brightspace¶
Desire2Learn (D2L) Brightspace occupies a premium position in the LMS market, particularly strong in K-12 and corporate learning. D2L's differentiator is its Intelligent Agents feature and its early investment in learning analytics.
D2L distinctive capabilities:
- Intelligent Agents: rule-based automation that triggers actions when learner behavior patterns match conditions (e.g., "if student has not logged in for 5 days AND has <60% grade, send motivational email and notify advisor")
- Learning Outcomes: comprehensive competency framework aligned to curriculum standards
- Performance+: predictive analytics dashboard showing at-risk students with recommended interventions
- Brightspace Creator+: authoring tool for interactive content creation within the LMS
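The Intelligent Agent rule quoted above can be sketched as a small function. This is purely illustrative: D2L agents are configured through the Brightspace UI, not code, and the field names, thresholds, and action labels here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    days_since_login: int
    current_grade: float  # percent, 0-100

def agent_actions(state, max_inactive_days=5, grade_threshold=60.0):
    """Fire the example rule: inactive for 5+ days AND grade below 60%."""
    if (state.days_since_login >= max_inactive_days
            and state.current_grade < grade_threshold):
        return ["send_motivational_email", "notify_advisor"]
    return []
```

Note that the rule is a conjunction: a struggling student who still logs in daily, or an inactive student with a passing grade, triggers no action.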
2.6 LMS Market Comparison¶
| Feature | Canvas | Blackboard Ultra | Moodle | D2L Brightspace |
|---|---|---|---|---|
| Pricing | ~$3-5/student/yr | High (custom) | Free (hosting costs) | ~$4-7/student/yr |
| Hosting | Cloud (SaaS) | Cloud or on-prem | Self-hosted or MoodleCloud | Cloud (SaaS) |
| API/LTI | Excellent REST API | Good | Excellent (plugin) | Good |
| Mobile App | Excellent (Student/Teacher) | Good | Good (Moodle app) | Good |
| Analytics | Good (Impact) | Good | Limited (native) | Excellent (Performance+) |
| Best for | Higher ed, K-12 | Large universities | Budget-conscious, global | Analytics-focused, K-12 |
| Market position | #1 US higher ed | Declining | #1 worldwide by installs | Growing |
3. Content Interoperability Standards¶
3.1 SCORM: The Foundation¶
SCORM (Sharable Content Object Reference Model) was developed by the Advanced Distributed Learning (ADL) Initiative of the U.S. Department of Defense and first released in 1999. SCORM solved a critical problem: e-learning content created for one LMS couldn't run on another.
SCORM defines:
1. Packaging: how to bundle course files into a ZIP archive with an imsmanifest.xml describing the content structure
2. Runtime communication: how a course (SCO — Sharable Content Object) communicates with the LMS via a JavaScript API
3. Sequencing and navigation: rules for how learners progress through course components
SCORM runtime API — key data elements:

```javascript
// SCORM 2004 API communication (simplified)
var API_1484_11 = window.parent.API_1484_11; // Get the LMS-provided API object

// Initialize communication
API_1484_11.Initialize("");

// Record learner progress
API_1484_11.SetValue("cmi.completion_status", "completed");
API_1484_11.SetValue("cmi.success_status", "passed");
API_1484_11.SetValue("cmi.score.raw", "85");
API_1484_11.SetValue("cmi.score.min", "0");
API_1484_11.SetValue("cmi.score.max", "100");
API_1484_11.SetValue("cmi.score.scaled", "0.85");

// Set session time (ISO 8601 duration)
API_1484_11.SetValue("cmi.session_time", "PT1H23M45S");

// Save and terminate
API_1484_11.Commit("");
API_1484_11.Terminate("");
```
SCORM versions:
- SCORM 1.1 (2000): rarely used; superseded quickly
- SCORM 1.2 (2001): most widely deployed version; used by the majority of existing SCORM content
- SCORM 2004 (2004–2009): four editions; advanced sequencing/navigation capabilities; less widely adopted due to complexity
SCORM Limitations
SCORM was designed for browser-based, connected, desktop learning experiences. It cannot track:
- Learning on mobile devices offline (no network connection means no API communication)
- Informal learning (reading an article, watching YouTube, on-the-job experiences)
- Social/collaborative learning (discussions, peer feedback)
- Learning over extended time periods across multiple devices and sessions

These limitations drove the development of xAPI.
3.2 xAPI (Tin Can API / Experience API)¶
xAPI (also called Tin Can API or Experience API) was released in 2013 as a fundamentally different approach to learning data capture. Instead of course-to-LMS communication via a JavaScript API, xAPI uses simple HTTP POST statements in JSON format to record any learning experience to a Learning Record Store (LRS).
xAPI statement structure:

```json
{
  "actor": {
    "mbox": "mailto:jsmith@example.com",
    "name": "John Smith",
    "objectType": "Agent"
  },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/completed",
    "display": {"en-US": "completed"}
  },
  "object": {
    "id": "https://example.com/courses/introduction-to-sql",
    "definition": {
      "name": {"en-US": "Introduction to SQL"},
      "type": "http://adlnet.gov/expapi/activities/course"
    },
    "objectType": "Activity"
  },
  "result": {
    "score": {"scaled": 0.92, "raw": 92, "min": 0, "max": 100},
    "completion": true,
    "success": true,
    "duration": "PT2H15M"
  },
  "timestamp": "2024-03-15T14:30:00.000Z"
}
```
What xAPI enables that SCORM cannot:
- Track mobile offline learning (statements submitted when the connection restores)
- Track simulations, games, physical training, on-the-job experiences
- Track reading (time spent on article pages)
- Track video watching (play, pause, seek, complete events)
- Aggregate learning data from multiple systems into one LRS
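Submitting a statement is just an HTTP POST to the LRS `statements` resource. A minimal standard-library sketch follows; the LRS URL is a placeholder, and real deployments also authenticate (typically HTTP Basic auth with LRS credentials). The `X-Experience-API-Version` header is required by the xAPI specification.

```python
import json
import urllib.request

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # placeholder LRS URL

statement = {
    "actor": {"mbox": "mailto:jsmith@example.com", "objectType": "Agent"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.com/courses/introduction-to-sql",
               "objectType": "Activity"},
}

def build_statement_request(stmt, endpoint=LRS_ENDPOINT):
    """Prepare the POST; urllib.request.urlopen(req) would actually send it."""
    req = urllib.request.Request(
        endpoint, data=json.dumps(stmt).encode("utf-8"), method="POST"
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("X-Experience-API-Version", "1.0.3")  # mandatory per the spec
    return req
```

Because statements are plain JSON over HTTP, any system — a simulator, a mobile app, a video player — can report learning events to the same LRS.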
4. Instructional Design Models¶
4.1 ADDIE¶
ADDIE (Analysis, Design, Development, Implementation, Evaluation) is the foundational instructional design process model. Developed in the 1970s at Florida State University for the U.S. Army, it remains the most widely taught and practiced framework.
```text
┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│   ANALYSIS   │───▶│    DESIGN    │───▶│ DEVELOPMENT  │
│              │    │              │    │              │
│ • Who are    │    │ • Learning   │    │ • Build      │
│   learners?  │    │   objectives │    │   content    │
│ • What are   │    │ • Sequence   │    │ • Create     │
│   the gaps?  │    │ • Assessment │    │   media      │
│ • Context?   │    │   strategy   │    │ • Alpha test │
└──────────────┘    └──────────────┘    └───────┬──────┘
                                                │
┌──────────────┐    ┌──────────────┐            │
│  EVALUATION  │◀───│IMPLEMENTATION│◀───────────┘
│              │    │              │
│ • Kirkpatrick│    │ • Pilot      │
│   levels     │    │ • Deploy     │
│ • Analytics  │    │ • Support    │
│ • Iteration  │    │ • Maintain   │
└──────────────┘    └──────────────┘
```
ADDIE phases in detail:
| Phase | Key Activities | Outputs |
|---|---|---|
| Analysis | Learner needs analysis, task analysis, environment analysis | Needs assessment report, learner personas |
| Design | Learning objectives (Bloom's taxonomy), instructional strategies, assessment blueprint, storyboards | Design document, content outline |
| Development | Content authoring (Articulate Storyline, Rise, Adobe Captivate), media production, LMS course setup | SCORM package, course shell |
| Implementation | Pilot testing with sample learners, LMS deployment, instructor training | Deployed course, training materials |
| Evaluation | Formative (during) and summative (after) evaluation; Kirkpatrick model | Evaluation report, revision recommendations |
4.2 Bloom's Taxonomy for Learning Objectives¶
Well-designed e-learning must specify measurable learning objectives at the appropriate cognitive level. Bloom's Revised Taxonomy (Anderson & Krathwohl, 2001) provides a six-level hierarchy:
| Level | Verbs for Objectives | Assessment Methods |
|---|---|---|
| Remember | Define, list, recall, identify | Multiple choice, matching, fill-in-the-blank |
| Understand | Explain, describe, summarize, classify | Short answer, concept mapping, paraphrase |
| Apply | Use, demonstrate, solve, execute | Problem sets, case scenarios, simulations |
| Analyze | Differentiate, examine, compare, break down | Case analysis, compare-contrast essays, debugging exercises |
| Evaluate | Judge, justify, assess, critique | Peer review, argument construction, project evaluation |
| Create | Design, build, develop, produce | Projects, portfolios, original artifacts |
Measurable Objectives
Every learning objective should follow the format: "Given [condition], the learner will [verb at appropriate Bloom level] [specific knowledge/skill] [to a defined standard]."
Poor: "Students will understand database normalization."
Better: "Given an unnormalized database schema with sample data, the student will identify all functional dependencies and normalize the schema to 3NF with fewer than two errors."
4.3 Kirkpatrick Model for Evaluation¶
Donald Kirkpatrick's four-level training evaluation model is the standard framework for measuring e-learning effectiveness:
| Level | Measures | Methods | E-Learning Implementation |
|---|---|---|---|
| 1. Reaction | Learner satisfaction and perceived relevance | Post-course surveys | LMS-integrated survey (5-star rating, open comments) |
| 2. Learning | Knowledge/skill acquisition | Pre/post assessments, skills demonstrations | LMS quiz comparison; simulation performance |
| 3. Behavior | Transfer of learning to job performance | Manager observations, 360 feedback, 60/90-day follow-up | LMS follow-up assessments; performance management system integration |
| 4. Results | Business outcomes | KPI tracking, ROI calculation | Integration with HR, sales, quality systems |
5. MOOC Platforms and Business Models¶
5.1 The MOOC Landscape¶
Massive Open Online Courses emerged from a 2008 experiment by George Siemens and Stephen Downes (Connectivism and Connective Knowledge, 1,900+ external students enrolled alongside 25 for-credit students at University of Manitoba). The modern MOOC era launched in fall 2011 when Stanford opened three CS courses to the public simultaneously.
Major MOOC platforms:
**Coursera**

- Founded: 2012 (Andrew Ng and Daphne Koller, Stanford)
- Scale: 148+ million registered learners, 7,000+ courses (2024)
- Partners: 325+ universities and companies
- Business model:
- Free audit: watch videos, participate in discussions (no certificate)
- Paid certificate: $39-$99 per course to access graded assignments and earn certificate
- Specializations: Bundles of 4-6 courses (~$39-89/month subscription)
- Professional Certificates: Industry credentials (Google, Meta, IBM) directly on platform
- Degrees: ~25 full online degree programs (~$10,000-25,000 total)
- Coursera for Business: Enterprise learning subscriptions
**edX**

- Founded: 2012 (MIT and Harvard)
- Scale: 46+ million learners, 4,000+ courses (2024); acquired by 2U in 2021
- Partners: MIT, Harvard, Berkeley, Microsoft, Google and 200+ more
- Business model:
- Free audit track; verified certificates ($50-$300)
- MicroMasters Programs: Graduate-level credential usable as credit in partner master's degrees
- Professional Certificates: 4-7 course professional credentials
- Boot Camps: Intensive coding, data science, cybersecurity programs (~$7,000-15,000)
**Udemy**

- Founded: 2010
- Scale: 65+ million students, 210,000+ courses, 74,000+ instructors (2024)
- Business model:
- Marketplace model: any instructor can create and sell courses
- Student price: $12-200 per course (frequent discounts to $10-15)
- Instructor revenue share: 37% for organic marketplace sales, up to 97% for sales the instructor drives through their own coupons and referral links
- Udemy Business: Enterprise subscription for curated course library
- Key differentiator: Bottom-up content creation by practitioners; fastest to market for emerging skills
5.2 MOOC Completion Rate Problem¶
MOOC completion rates are notoriously low — typically 3-15% of enrolled students complete a course. This number requires careful interpretation:
The Completion Rate Debate
Comparing MOOC completion rates (3-15%) to traditional course completion rates (85-95%) is misleading because:
- MOOC enrollment is zero-cost and zero-commitment — many students "enroll to browse" with no intention of completing
- Many learners achieve their goal (learn one specific concept) without completing the full course
- Intent-to-complete rates (learners who pay for certificates) are 60-80%
- Selective consumption is a feature, not a failure — the ability to learn exactly what you need without forced breadth is an advantage
The more meaningful metric is learning outcome achievement among learners with genuine completion intent — which is competitive with traditional education.
Factors improving MOOC completion rates:
1. Paid enrollment (financial commitment increases follow-through)
2. Cohort-based delivery with fixed deadlines and peer interaction
3. Short overall course length (4 weeks vs. 12 weeks)
4. Strong community features (discussion forums, study groups)
5. Practical, job-relevant content with clear career outcomes
6. Peer accountability mechanisms
6. Assessment Design in LMS¶
6.1 Assessment Types in Online Learning¶
| Assessment Type | Best For | LMS Implementation |
|---|---|---|
| Multiple choice quiz | Recall, comprehension | Auto-graded; quiz banks with randomization |
| Essay/written response | Analysis, synthesis, evaluation | Manual grading; rubric-based SpeedGrader |
| File submission | Applied projects, research papers | Turnitin integration for plagiarism detection |
| Discussion board | Collaborative reflection, peer learning | Forum grading with rubric |
| Peer review | Evaluation skill, metacognition | Structured peer review workflows (Canvas: Peer Review) |
| Interactive simulation | Application, procedural skills | xAPI tracking of simulation performance data |
| Video response | Demonstration, oral presentation | Canvas Studio, Flip (Microsoft) |
6.2 Academic Integrity: Turnitin Integration¶
Turnitin is the leading plagiarism detection service, analyzing submitted text against:
- 70+ billion web pages
- 69+ million published academic works
- Turnitin's own database of previously submitted student work
How Turnitin works:
```text
Student submits paper
        │
        ▼
Turnitin fingerprints text (rolling hash comparison)
        │
        ▼
Similarity report generated (0-100% similarity score)
        │
        ▼
Instructor reviews matches highlighted in original document
        │
        ├── High similarity to web source → potential web plagiarism
        ├── High similarity to database paper → potential academic plagiarism
        └── High similarity to own prior work → potential self-plagiarism
```
Turnitin Limitations
Similarity score ≠ plagiarism. A paper heavily quoting primary sources with proper citation may show 40% similarity and be perfectly legitimate. A paper with 5% similarity but paraphrased, uncited stolen ideas is plagiarism. Instructors must interpret similarity reports in context and never treat a score threshold as an automatic verdict.
6.3 Quiz Banks and Randomization¶
Effective online assessment uses question banks with randomization to reduce the effectiveness of sharing answers among students:
```text
Question Pool: 50 questions on Topic A
Question Pool: 40 questions on Topic B
Question Pool: 30 questions on Topic C

Student quiz drawn from:
  - 10 random questions from Topic A pool
  -  8 random questions from Topic B pool
  -  7 random questions from Topic C pool
  = 25-question quiz; no two students get identical questions
```
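The per-student draw described above can be sketched with the standard library's `random.sample`; the pool contents here are placeholder question IDs, and real LMS quiz engines also randomize answer-option order.

```python
import random

def build_quiz(pools, draw_counts, seed=None):
    """Sample draw_counts[i] questions from pools[i], then shuffle the quiz order."""
    rng = random.Random(seed)  # pass a seed only for reproducible demos
    quiz = []
    for pool, n in zip(pools, draw_counts):
        quiz.extend(rng.sample(pool, n))  # sample without replacement
    rng.shuffle(quiz)
    return quiz

# Pools sized as in the example: 50 / 40 / 30 questions
pools = [[f"A{i}" for i in range(50)],
         [f"B{i}" for i in range(40)],
         [f"C{i}" for i in range(30)]]
quiz = build_quiz(pools, [10, 8, 7])
```

Because `random.sample` draws without replacement, no student sees a duplicate question, while two students' quizzes overlap only by chance.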
7. Gamification in E-Learning¶
7.1 Gamification Theory¶
Gamification is the application of game design elements to non-game contexts to increase engagement and motivation. In e-learning, gamification taps into intrinsic and extrinsic motivation mechanisms identified by Self-Determination Theory (Deci & Ryan): autonomy, competence, and relatedness.
Gamification vs. Game-Based Learning
Gamification adds game elements (points, badges, leaderboards) to existing instructional content.
Game-Based Learning (GBL) is learning through playing a purpose-built game — the game is the instruction.
These are distinct approaches with different design requirements and evidence bases.
7.2 Gamification Elements in E-Learning¶
**Points, levels, and streaks**

- Experience points (XP): Accumulated for completing activities, quizzes, contributions
- Progress bars: Visual representation of completion percentage toward a goal
- Streaks: Consecutive days/weeks of engagement rewarded; habit formation
- Level systems: Learners "level up" as they accumulate points, unlocking new content
Duolingo implementation: XP earned per lesson, streak counter, daily goal system — among the most studied gamification implementations in e-learning research
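Two of the mechanics above, an XP level curve and a daily streak counter, can be sketched in a few lines. The cost curve and the reset rule are illustrative assumptions, not Duolingo's actual parameters.

```python
from datetime import date, timedelta

def level_for_xp(xp, base_cost=100, growth=1.5):
    """Each level costs `growth` times more XP than the previous one."""
    level, cost = 1, base_cost
    while xp >= cost:
        xp -= cost
        level += 1
        cost = int(cost * growth)
    return level

def update_streak(streak, last_active, today):
    """Extend the streak on consecutive days; reset after a missed day."""
    if today == last_active:
        return streak                      # already counted today
    if today - last_active == timedelta(days=1):
        return streak + 1                  # consecutive day
    return 1                               # streak broken; start over
```

The geometric cost curve makes early levels fast (quick wins for new learners) while keeping later levels meaningful, which is the standard design rationale for level systems.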
**Badges and credentials**

- Completion badges: Awarded for finishing a module, course, or learning path
- Achievement badges: For exceptional performance (top 10% quiz score, first to complete)
- Skill badges: Certifying demonstration of a specific competency
- Open Badges (IMS Global standard): Portable digital credentials with metadata embedded in image; earnable on one platform, shareable to LinkedIn, email, digital portfolios
Mozilla Open Badges ecosystem: Enables badge earners to collect credentials from multiple sources in a single "backpack"
**Leaderboards**

- Rank learners or teams by points, completion, assessment scores
- Cohort leaderboards: Rank within class section (reduces discouragement vs. global ranking)
- Team leaderboards: Collaborative competition between groups (promotes peer learning)
- Anonymized options: Some learners prefer not to see their rank publicly
Research caution: Leaderboards increase engagement for high performers but can demotivate lower performers who feel the gap is insurmountable. Design leaderboards around effort metrics (activities completed, forum posts) rather than pure performance metrics.
**Narrative and scenario elements**

- Story-based scenarios: Present real-world situations requiring learners to make decisions
- Branching scenarios: Choices lead to different outcomes, teaching consequences
- Role-play simulations: Virtual characters respond realistically to learner choices
- Escape room formats: Learners solve problems to "unlock" the next section
8. Accessibility in E-Learning¶
8.1 Section 508 for Educational Technology¶
Educational institutions receiving federal funding (virtually all U.S. colleges and universities) must ensure their technology is accessible under Section 508 of the Rehabilitation Act. For e-learning, this means:
- LMS accessibility: The platform itself must be accessible (VPAT — Voluntary Product Accessibility Template — documents compliance)
- Course content accessibility: Instructor-created content must also be accessible
- Third-party tool accessibility: Every integrated tool (Turnitin, VoiceThread, Kahoot, etc.) must have documented accessibility compliance
8.2 E-Learning Specific Accessibility Requirements¶
| Content Type | Accessibility Requirement | Implementation |
|---|---|---|
| Video lectures | Accurate captions (not auto-captions alone); audio description if visual content is information-bearing | YouTube auto-captions + manual correction; 3Play Media professional captioning |
| PDF documents | Tagged PDF structure enabling screen reader reading order | Export from Word with accessibility check; Adobe Acrobat Pro tagging |
| Interactive modules (SCORM) | Keyboard navigable; screen reader compatible; sufficient color contrast | Articulate Storyline accessibility settings; NVDA/JAWS testing |
| Images | Alt text describing informational content | Canvas rich content editor alt text field; decorative images marked null alt |
| Discussion boards | Accessible editor for posting; thread structure navigable | LMS platform responsibility + instructor modeling |
| Mathematical content | MathML or LaTeX rendering; accessible to math screen readers (MathJax + screen reader) | MathJax + NVDA+MathPlayer; avoid math as images |
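Parts of an accessibility audit can be automated. A toy check for the image row above, using only the standard library, flags `<img>` tags with no `alt` attribute at all; real audits use tools such as axe or the LMS's built-in accessibility checker, and this sketch ignores many other WCAG criteria.

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Flag <img> tags missing the alt attribute entirely.

    An explicit alt="" is allowed: it marks the image as decorative,
    telling screen readers to skip it."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.flagged.append(attr_map.get("src", "<no src>"))

def audit_alt_text(html):
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.flagged
```

Note the asymmetry the code encodes: a missing `alt` is a violation, but an empty `alt=""` is the correct markup for decorative images.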
9. Learning Analytics and AI-Adaptive Learning¶
9.1 Learning Analytics Framework¶
Learning analytics is the measurement, collection, analysis, and reporting of data about learners and their contexts to optimize learning and the environments in which it occurs (Society for Learning Analytics Research, 2011).
Levels of learning analytics:
```text
┌─────────────────────────────────────────────────────────────┐
│ INSTITUTIONAL ANALYTICS: Enrollment patterns, completion    │
│ rates, program-level outcomes, accreditation reporting      │
├─────────────────────────────────────────────────────────────┤
│ COURSE-LEVEL ANALYTICS: Engagement by module, quiz item     │
│ analysis, discussion participation patterns                 │
├─────────────────────────────────────────────────────────────┤
│ LEARNER-LEVEL ANALYTICS: Individual progress, time-on-task, │
│ performance trajectories, at-risk identification            │
└─────────────────────────────────────────────────────────────┘
```
9.2 Early Alert Systems¶
Early alert systems (also called early warning systems) use predictive analytics to identify students showing behavioral indicators of academic risk before they fail — when intervention is still effective.
Common early alert indicators:
- Login frequency below course average
- Days since last login (e.g., no login in 7+ days)
- Assignment submission rate (missing or late submissions)
- Quiz/assessment scores declining over time
- Discussion forum non-participation
- Grade currently below course passing threshold
```python
# Simplified early alert algorithm
def calculate_risk_score(student_data):
    risk_score = 0

    # Days since last login (max 30 points)
    days_inactive = student_data['days_since_login']
    risk_score += min(days_inactive * 2, 30)

    # Missing assignments (max 40 points)
    missing_pct = student_data['missing_assignments'] / student_data['total_assignments']
    risk_score += missing_pct * 40

    # Current grade below passing threshold (max 30 points)
    if student_data['current_grade'] < 70:
        grade_deficit = 70 - student_data['current_grade']
        risk_score += min(grade_deficit, 30)

    return risk_score  # 0-100 scale; mapped to alert levels below

# Alert levels
def get_alert_level(risk_score):
    if risk_score >= 75:
        return "RED - Immediate intervention required"
    elif risk_score >= 50:
        return "YELLOW - Proactive outreach recommended"
    else:
        return "GREEN - No action required"
```
9.3 AI-Adaptive Learning Systems¶
AI-adaptive learning uses machine learning algorithms to continuously adjust the learning experience based on individual learner performance, pace, and behavior patterns.
Khan Academy's adaptive engine: Khan Academy's mastery-based system uses a knowledge graph of learning dependencies and learner performance data to determine:
- Which skill to present next based on prerequisite mastery
- When a student has demonstrated sufficient mastery to progress
- What type of hint to provide when a student is struggling
Duolingo's algorithms: Duolingo uses Bayesian knowledge tracing to model the probability that a learner has truly learned a concept (vs. lucky guessing). The spaced repetition algorithm determines when to review previously learned content based on forgetting curve models.
```text
P(learned | correct answer) = P(learned) × P(correct | learned) /
                              [P(learned) × P(correct | learned) +
                               P(not learned) × P(correct | not learned)]
```
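The Bayes update above translates directly into code. The conditional probabilities here are illustrative: 0.25 corresponds to pure guessing on a four-option item, and 0.9 allows for occasional slips even when the skill is learned.

```python
def posterior_learned(p_learned,
                      p_correct_given_learned=0.9,
                      p_correct_given_not_learned=0.25):
    """Bayes update: P(learned | correct answer), per the formula above."""
    numerator = p_learned * p_correct_given_learned
    denominator = numerator + (1 - p_learned) * p_correct_given_not_learned
    return numerator / denominator
```

Starting from P(learned) = 0.5, a single correct answer raises the estimate to roughly 0.78. Full Bayesian knowledge tracing additionally applies a learning-transition probability between attempts, so the estimate can grow even without evidence from answers.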
Adaptive Learning Ethical Considerations
AI-adaptive systems raise important questions: Do algorithmic "tracks" limit learner agency and reinforce pre-existing patterns? Who owns the granular behavioral learning data? Can adaptive systems detect and accommodate learners with disabilities, rather than optimizing past them? These are active areas of research and policy debate.
10. Corporate Training and eLearning ROI¶
10.1 Corporate E-Learning Market¶
Corporate training represents a massive application of e-learning technology. U.S. companies spent $101.8 billion on corporate training in 2023 (Statista), with approximately 42% delivered through e-learning platforms.
Corporate LMS leaders (a different market from academic LMS):
- Cornerstone OnDemand: full talent management suite with integrated LMS
- SAP SuccessFactors Learning: deep HCM integration for global enterprises
- Workday Learning: native integration with Workday HCM/Payroll
- LinkedIn Learning: content library of 20,000+ courses plus LMS capabilities
- Docebo: AI-driven enterprise LMS with skills inference
10.2 Measuring E-Learning ROI¶
The Kirkpatrick model (discussed in Section 4.3) provides the framework, but corporate learning ROI requires converting Level 3 and Level 4 data to financial metrics:
E-Learning ROI Formula:
ROI (%) = [(Benefits - Costs) / Costs] × 100
Where:
Benefits = (improvement in performance metric × business value per unit)
Costs = development + delivery + learner time + technology + administration
Example:
Sales training increased average deal size from $45,000 to $52,000
100 salespeople trained × $7,000 additional revenue = $700,000 benefit
Training cost: $150,000 development + $50,000 delivery = $200,000
ROI = ($700,000 - $200,000) / $200,000 × 100 = 250%
Key Vocabulary¶
| Term | Definition |
|---|---|
| LMS | Learning Management System — software platform for managing, delivering, and tracking learning content |
| SCORM | Sharable Content Object Reference Model — standard for e-learning content packaging and LMS communication |
| xAPI / Tin Can | Experience API — modern learning data standard using JSON statements to record any learning experience |
| LRS | Learning Record Store — database that stores xAPI statements from multiple sources |
| ADDIE | Instructional design process model: Analysis, Design, Development, Implementation, Evaluation |
| Bloom's Taxonomy | Six-level cognitive hierarchy (Remember, Understand, Apply, Analyze, Evaluate, Create) for writing learning objectives |
| Kirkpatrick Model | Four-level evaluation model measuring Reaction, Learning, Behavior, and Results |
| MOOC | Massive Open Online Course — online course offering unlimited participation and open access |
| Gamification | Application of game design elements (points, badges, leaderboards) to non-game learning contexts |
| Open Badges | IMS Global standard for portable digital credentials with embedded metadata |
| Learning Analytics | Data-driven analysis of learner behavior and outcomes to optimize learning |
| Early Alert System | Predictive analytics tool identifying at-risk students based on behavioral indicators |
| Adaptive Learning | AI-driven personalization of learning content, pacing, and difficulty based on individual performance |
| Synchronous Learning | Learning happening in real-time with instructor and learners present simultaneously (live class, webinar) |
| Asynchronous Learning | Self-paced learning without real-time interaction; learners engage on their own schedule |
| Bayesian Knowledge Tracing | Statistical model estimating the probability that a learner has mastered a concept |
Review Questions¶
Week 13 Review Questions
1. A corporate training manager at a pharmaceutical company needs to train 5,000 global sales representatives on a new drug's features, dosage guidelines, and compliance requirements. The training must be completed within 30 days and the completion/score data must be reported to the FDA. Recommend a delivery approach and justify your SCORM vs. xAPI choice. What specific data points need to be captured?
2. Compare MOOC business models across Coursera, edX, and Udemy. If you were advising a university considering developing courses on all three platforms, what strategy would you recommend? Consider: revenue potential, institutional brand value, audience reach, and resource requirements.
3. Apply the ADDIE model to design a 2-hour online module teaching undergraduate students to identify phishing emails. For each phase, describe the key activities and identify the primary output. What Bloom's taxonomy level should your learning objectives target, and what type of assessment aligns with that level?
4. You are reviewing D2L Brightspace learning analytics for a 200-student online Introduction to Business course. The data shows: 31 students have not logged in for 10+ days; 18 of those students have missed at least one assignment; quiz averages have been declining for 3 weeks. Design an early alert intervention protocol with specific actions, responsible parties, and timeline.
5. A student with low vision reports that the SCORM course you purchased for your corporate training program is not accessible with their screen magnification software. The vendor's VPAT claims WCAG 2.1 AA compliance. Describe the investigation process you would follow, the specific tests you would conduct, and the remediation options available to you.
Further Reading¶
- Clark, R. C., & Mayer, R. E. (2016). E-Learning and the Science of Instruction (4th ed.). Wiley.
- Means, B., Bakia, M., & Murphy, R. (2014). Learning Online: What Research Tells Us About Whether, When and How. Routledge.
- Siemens, G. (2013). "Learning Analytics: The Emergence of a Discipline." American Behavioral Scientist, 57(10), 1380-1400.
- Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company. Oxford University Press. (Chapter 3 on knowledge spiral)
- ADL Initiative. (2023). xAPI Specification. https://github.com/adlnet/xAPI-Spec
- IMS Global. (2023). LTI Advantage Specification. https://www.imsglobal.org/lti-advantage-overview
- Anderson, L. W., & Krathwohl, D. R. (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy. Longman.
- Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick's Four Levels of Training Evaluation. ATD Press.
- Kapp, K. M. (2012). The Gamification of Learning and Instruction. Pfeiffer/Wiley.