CQ(λ) Bag-Shaking Dataset: Human-in-the-Loop Reinforcement Learning
Dataset Description
This dataset contains synthetic training data comparing standard Q-learning with eligibility traces [Q(λ)] against Cooperative Q-learning [CQ(λ)], a human-in-the-loop reinforcement learning algorithm. The data simulates a robotic "bag-shaking" task where an agent must extract knotted objects from a bag through strategic shaking motions.
Dataset Summary
- Task: Bag-shaking manipulation with hidden state variables (knot tightness, entanglement)
- Algorithms Compared:
- Q(λ): Pure autonomous learning with eligibility traces
- CQ(λ): Performance-triggered human intervention with linguistic policy shaping
- Episodes: 500 learning episodes × 10 runs × 2 conditions = 10,000 total episodes
- Format: CSV files + PNG visualizations + Markdown comparison table
- Generation: Synthetic data from simulation (not collected on a real robot)
Research Context
Based on:
- Kartoun, U., Stern, H., & Edan, Y. (2006). "Human-robot collaborative learning system for inspection." IEEE International Conference on Systems, Man and Cybernetics.
- Kartoun, U., Stern, H., & Edan, Y. (2010). "A human-robot collaborative reinforcement learning algorithm" Journal of Intelligent & Robotic Systems, 60, 217-239.
Key Innovation: Human expert provides linguistic guidance ("significantly increase", "slightly decrease", etc.) to reshape Q-values during low-performance episodes, accelerating learning.
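As a rough illustration of this trigger (a sketch with illustrative names, not the released simulator code): the agent tracks a moving-average success rate over the last N episodes and switches to semi-autonomous (SA) mode, where the human is asked for linguistic guidance, whenever that average drops below the threshold Λ. The constants mirror the CONFIG listed later in this card.

```python
import numpy as np

# Sketch of the CQ(λ) performance trigger (illustrative, not the reference
# implementation). Values follow the dataset's CONFIG:
# window N = 15, threshold Λ = 0.65, success means total_reward > 25.0.
N, LAMBDA, R_SUCCESS = 15, 0.65, 25.0

def l_ave(episode_rewards, n=N):
    """Moving-average success rate over the last n episodes."""
    recent = episode_rewards[-n:]
    return float(np.mean([r > R_SUCCESS for r in recent])) if recent else 1.0

def episode_mode(episode_rewards):
    """'SA' (ask the human for linguistic guidance) or 'A' (autonomous)."""
    return "SA" if l_ave(episode_rewards) < LAMBDA else "A"

print(episode_mode([30.0, 10.0, 8.0, 12.0]))  # low recent success -> 'SA'
```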
Source code
The full implementation is available on GitHub at https://github.com/DBbun/motoman-up6-cq-lambda-reinforcement-learning_v1.0
Dataset Files
📊 Core Data Files
| File | Description | Rows | Key Columns |
|---|---|---|---|
| `episodes.csv` | Episode-level metrics for both Q(λ) and CQ(λ) | 10,000 | condition, run_id, episode_id, total_reward, accumulated_reward, success_flag, mode (A/SA) |
| `steps.csv` | Step-by-step state transitions and rewards | ~500,000 | s_id (state), a_id (action), reward, objects_dropped, knot_tightness, bag_entanglement |
| `interventions.csv` | Log of all human interventions (CQ only) | ~1,200 | episode_id, L_ave (performance trigger), center_control_X/Y/Z, swing_control_X/Y/Z |
📈 Visualization Files (PNG, 240 DPI)
| File | Shows |
|---|---|
| `accumulated_reward_q_vs_cq.png` | Cumulative reward over 500 episodes - primary metric showing the CQ(λ) advantage |
| `reward_per_episode_q_vs_cq.png` | Episode-wise reward comparison |
| `time_per_episode_q_vs_cq.png` | Episode duration (lower = more efficient) |
| `success_rate_q_vs_cq.png` | Success rate over time (smoothed) |
| `l_ave_q_vs_cq.png` | Performance trigger signal - shows when L_ave < Λ triggers human intervention |
| `sa_fraction_cq.png` | Fraction of CQ(λ) episodes run in semi-autonomous (SA) mode |
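The SA-fraction plot in the last row can be approximated directly from `episodes.csv`. The short sketch below (column names as documented in the Data Schema section) averages a `mode == "SA"` indicator per episode across the ten CQ(λ) runs.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Recreate an approximation of sa_fraction_cq.png from episodes.csv.
episodes = pd.read_csv("episodes.csv")
cq = episodes[episodes["condition"] == "CQ"]

# Fraction of the 10 CQ(λ) runs that ran a given episode in SA mode.
sa_fraction = (cq["mode"] == "SA").groupby(cq["episode_id"]).mean()

plt.figure(figsize=(10, 4))
plt.plot(sa_fraction.index, sa_fraction.values, "g-")
plt.xlabel("Episode")
plt.ylabel("Fraction of runs in SA mode")
plt.title("CQ(λ): Semi-autonomous episodes over training")
plt.grid(True, alpha=0.3)
plt.show()
```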
📋 Summary Tables
| File | Description |
|---|---|
| `comparison_table.csv` | Statistical comparison of Q(λ) vs CQ(λ) performance |
| `comparison_table.md` | Markdown-formatted version (GitHub-friendly) |
Quick Start
Using the Dataset
```python
import pandas as pd
import matplotlib.pyplot as plt

# Load episode data
episodes = pd.read_csv("episodes.csv")

# Compare accumulated rewards
q_data = episodes[episodes['condition'] == 'Q']
cq_data = episodes[episodes['condition'] == 'CQ']

# Plot comparison: one faint line per run and condition
plt.figure(figsize=(10, 6))
for run in range(1, 11):
    q_run = q_data[q_data['run_id'] == run].sort_values('episode_id')
    cq_run = cq_data[cq_data['run_id'] == run].sort_values('episode_id')
    plt.plot(q_run['episode_id'], q_run['accumulated_reward'],
             'b--', alpha=0.3, linewidth=0.8)
    plt.plot(cq_run['episode_id'], cq_run['accumulated_reward'],
             'g-', alpha=0.3, linewidth=0.8)
plt.xlabel("Episode")
plt.ylabel("Accumulated Reward")
plt.title("Q(λ) vs CQ(λ): Human Guidance Advantage")
plt.legend(["Q(λ) (autonomous)", "CQ(λ) (human-assisted)"])
plt.grid(True, alpha=0.3)
plt.show()
```
Analyzing Interventions
```python
# Load intervention data
interventions = pd.read_csv("interventions.csv")

# See what guidance the human provided
print(interventions[['episode_id', 'L_ave', 'center_control_Y', 'swing_control_Y']].head())

# Count interventions over time
intervention_counts = interventions.groupby('episode_id').size()
print(f"Total interventions: {len(interventions)}")
print(f"Peak intervention episode: {intervention_counts.idxmax()}")
```
Exploring Step-Level Dynamics
```python
# Load step data
steps = pd.read_csv("steps.csv")

# Analyze a specific episode
episode_steps = steps[(steps['condition'] == 'CQ') &
                      (steps['run_id'] == 1) &
                      (steps['episode_id'] == 100)]

# Plot hidden state evolution
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6))
ax1.plot(episode_steps['step_idx'], episode_steps['knot_tightness'])
ax1.set_ylabel("Knot Tightness")
ax1.set_title("Hidden State Dynamics")
ax1.grid(True, alpha=0.3)
ax2.plot(episode_steps['step_idx'], episode_steps['objects_remaining'], marker='o')
ax2.set_ylabel("Objects Remaining")
ax2.set_xlabel("Step")
ax2.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()
```
Data Schema
episodes.csv
| Column | Type | Description |
|---|---|---|
| `condition` | str | "Q" (autonomous) or "CQ" (human-assisted) |
| `run_id` | int | Independent run identifier (1-10) |
| `episode_id` | int | Episode within run (1-500) |
| `mode` | str | "A" (autonomous) or "SA" (semi-autonomous with human) |
| `L_ave_before` | float | Moving-average success rate before this episode |
| `Lambda` | float | Performance threshold (Λ = 0.65) for triggering intervention |
| `window_N` | int | Moving-average window size (N = 15) |
| `epsilon` | float | Exploration rate (ε-greedy) for this episode |
| `steps` | int | Number of steps taken |
| `time_seconds` | float | Total episode duration |
| `total_reward` | float | Sum of time-weighted rewards in the episode |
| `accumulated_reward` | float | Cumulative reward across all episodes so far (checked in the sketch after this table) |
| `objects_dropped_total` | int | Number of objects extracted this episode |
| `success_flag` | int | 1 if total_reward > 25.0, else 0 |
| `human_intervened` | int | 1 if the human provided guidance this episode (CQ only) |
| `human_intervention_count_so_far` | int | Total interventions up to this point |
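Two internal-consistency checks follow directly from this schema (a sketch; it assumes the files sit in the working directory and use exactly the column names above): `success_flag` should equal the indicator `total_reward > 25.0`, and `accumulated_reward` should be the running sum of `total_reward` within each (condition, run_id) pair.

```python
import numpy as np
import pandas as pd

episodes = pd.read_csv("episodes.csv")

# success_flag should be the indicator total_reward > 25.0
recomputed_flag = (episodes["total_reward"] > 25.0).astype(int)
print("success_flag matches:", (recomputed_flag == episodes["success_flag"]).all())

# accumulated_reward should be the running sum of total_reward per run
episodes = episodes.sort_values(["condition", "run_id", "episode_id"])
running = episodes.groupby(["condition", "run_id"])["total_reward"].cumsum()
print("accumulated_reward matches:",
      np.allclose(running, episodes["accumulated_reward"]))
```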
steps.csv
| Column | Type | Description |
|---|---|---|
| `condition` | str | "Q" or "CQ" |
| `run_id` | int | Run identifier |
| `episode_id` | int | Episode identifier |
| `step_idx` | int | Step within episode (1-based) |
| `mode` | str | "A" or "SA" |
| `epsilon` | float | Exploration rate |
| `s_id` | str | State identifier (e.g., "S(CENTER)", "S(Y+2)") |
| `a_id` | str | Action identifier (e.g., "A(X+3,v=1500)") |
| `s_next_id` | str | Next state identifier |
| `reward` | float | Time-weighted reward: (objects_dropped / time) × 20 |
| `objects_dropped` | int | Objects extracted this step (cross-checked in the sketch after this table) |
| `objects_remaining` | int | Objects still in the bag |
| `t_seconds` | float | Cumulative time in the episode |
| `knot_tightness` | float | Hidden state: knot tightness [0, 1] |
| `bag_entanglement` | float | Hidden state: entanglement [0, 1] |
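Because `steps.csv` and `episodes.csv` describe the same runs at different granularity, a simple cross-check (a sketch, assuming the column names above) is to sum `objects_dropped` per episode and compare it against `objects_dropped_total` in `episodes.csv`.

```python
import pandas as pd

steps = pd.read_csv("steps.csv")
episodes = pd.read_csv("episodes.csv")

keys = ["condition", "run_id", "episode_id"]
dropped_per_episode = (steps.groupby(keys)["objects_dropped"]
                            .sum()
                            .rename("objects_dropped_from_steps")
                            .reset_index())

merged = episodes.merge(dropped_per_episode, on=keys, how="left")
mismatches = merged[merged["objects_dropped_total"]
                    != merged["objects_dropped_from_steps"]]
print(f"Episodes with mismatched object counts: {len(mismatches)}")
```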
interventions.csv
| Column | Type | Description |
|---|---|---|
| `condition` | str | Always "CQ" |
| `run_id` | int | Run identifier |
| `episode_id` | int | Episode where the intervention occurred |
| `reason` | str | "performance_below_threshold" |
| `L_ave` | float | Moving-average success rate that triggered the intervention |
| `Lambda` | float | Threshold value (0.65) |
| `window_N` | int | Window size (15) |
| `center_control_X/Y/Z` | str | Linguistic guidance for center-state actions |
| `swing_control_X/Y/Z` | str | Linguistic guidance for swing-state actions |
Linguistic Options: "significantly_increase", "slightly_increase", "keep_current", "slightly_decrease", "significantly_decrease"
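To see how often each linguistic option was actually issued, the six control columns can be tallied directly; a short sketch (column names as documented above) is given below. Given the fixed `human_bias` in the CONFIG further down, the counts should be heavily concentrated on a few commands.

```python
import pandas as pd

interventions = pd.read_csv("interventions.csv")

# Tally the linguistic commands issued for each control column.
control_cols = [f"{kind}_control_{axis}"
                for kind in ("center", "swing")
                for axis in ("X", "Y", "Z")]
for col in control_cols:
    print(f"\n{col}:")
    print(interventions[col].value_counts())
```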
Key Findings
From comparison_table.md:
| Metric | Q(λ) | CQ(λ) | Winner |
|---|---|---|---|
| Final Accumulated Reward | ~7,100 ± 380 | ~9,200 ± 350 | CQ(λ) (+30%) |
| Mean Episode Reward | ~14.2 | ~18.4 | CQ(λ) (+30%) |
| Success Rate | ~58% | ~79% | CQ(λ) (+21pp) |
| Mean Episode Time | ~31.5 sec | ~27.1 sec | CQ(λ) (-14%) |
| Human Interventions | 0 | ~1,200 total | — |
Interpretation: Human linguistic guidance during low-performance episodes provides:
- ✓ 30% higher cumulative learning (accumulated reward)
- ✓ 21% absolute improvement in success rate
- ✓ 14% faster task completion (more efficient policies)
- ✓ Earlier convergence to near-optimal behavior
Experimental Design
Configuration Used
This dataset was generated with the following exact parameters:
```python
CONFIG = {
    # Dataset size
    "num_runs": 10,
    "episodes_per_run": 500,

    # State & action space
    "axes": ["X", "Y", "Z"],
    "axis_levels": 3,
    "speed_bins": [1000, 1500],
    "allow_mirror_move": True,
    "allow_return_to_center": True,

    # Episode constraints
    "max_steps_per_episode": 160,
    "dt_seconds": 0.25,

    # Exploration (ε-greedy)
    "epsilon_start": 0.50,
    "epsilon_end": 0.00,
    "epsilon_end_after_episode": 120,

    # Q(λ) learning
    "gamma": 0.95,     # Discount factor
    "lambda_": 0.75,   # Eligibility trace decay
    "alpha": 0.02,     # Base learning rate

    # CQ(λ) enhancements
    "alpha_sa_multiplier": 3.5,   # Learning boost during SA mode

    # Performance-triggered intervention
    "performance_window_N": 15,
    "performance_threshold_Lambda": 0.65,
    "success_reward_threshold_R": 25.0,
    "force_no_human_first_k_episodes": 5,
    "max_human_interventions_per_run": 150,

    # Linguistic Q-value multipliers
    "ui_multipliers": {
        "significantly_increase": 2.00,
        "slightly_increase": 1.25,
        "keep_current": 1.00,
        "slightly_decrease": 0.75,
        "significantly_decrease": 0.40,
    },

    # Environment
    "num_objects": 5,
    "knot_difficulty": 0.93,
    "stochasticity": 0.35,
    "axis_effectiveness": {"X": 0.50, "Y": 1.00, "Z": 0.35},
    "magnitude_effectiveness": {1: 0.40, 2: 0.70, 3: 1.00},
    "speed_effectiveness": {1000: 0.80, 1500: 1.00},

    # Expert strategy
    "human_bias": {
        "center_control": {"Y": "significantly_increase",
                           "Z": "significantly_decrease",
                           "X": "slightly_decrease"},
        "swing_control": {"Y": "significantly_increase",
                          "Z": "significantly_decrease",
                          "X": "keep_current"},
    },
}
```
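The CONFIG above pins only the endpoints of the exploration schedule (ε starts at 0.50 and reaches 0.00 after episode 120). A linear decay between those points is one plausible reading, sketched below as an assumption; the `epsilon` column in `episodes.csv` can be compared against it.

```python
# Assumed ε-greedy schedule: linear decay from epsilon_start to epsilon_end
# over the first `epsilon_end_after_episode` episodes, then constant.
# (The CONFIG fixes only the endpoints; linearity is an assumption.)
def epsilon_for_episode(episode_id,
                        start=0.50, end=0.00, end_after=120):
    if episode_id >= end_after:
        return end
    frac = (episode_id - 1) / (end_after - 1)
    return start + frac * (end - start)

print([round(epsilon_for_episode(e), 3) for e in (1, 60, 120, 300)])
```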
State Space (19 states)
- 1 center state: `S(CENTER)`
- 18 axis states: 3 axes × 3 displacement levels × 2 directions
  - Example: `S(Y+2)` = Y-axis, level 2, positive direction
Action Space (72 actions from center)
- 3 axes (X, Y, Z)
- 3 displacement levels (1, 2, 3)
- 2 directions (+, -)
- 2 speeds (1000, 1500 units)
- Mirror moves and return-to-center options
Linguistic Policy Shaping
Human expert provides guidance via Q-value scaling:
| Linguistic Command | Q-Value Multiplier | Effect |
|---|---|---|
| "significantly_increase" | ×2.00 | Strongly favor this action type |
| "slightly_increase" | ×1.25 | Moderately favor |
| "keep_current" | ×1.00 | No change |
| "slightly_decrease" | ×0.75 | Moderately discourage |
| "significantly_decrease" | ×0.40 | Strongly discourage |
Expert strategy (aligned with environment effectiveness):
- Center Control: Prioritize Y-axis (most effective), avoid Z-axis (least effective)
- Swing Control: Continue Y-motion, minimize Z-motion
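Below is a minimal sketch of how these multipliers could be applied to the center-state Q-values, using the expert's `center_control` bias from the CONFIG. Parsing the axis out of the action id is an assumption based on the documented naming (e.g., "A(Y+3,v=1500)"), not the released implementation.

```python
# Sketch: scale Q-values for center-state actions by the expert's
# linguistic guidance. Action-id parsing is an assumption based on the
# documented naming convention, e.g. "A(Y+3,v=1500)".
MULTIPLIERS = {
    "significantly_increase": 2.00,
    "slightly_increase": 1.25,
    "keep_current": 1.00,
    "slightly_decrease": 0.75,
    "significantly_decrease": 0.40,
}
CENTER_CONTROL = {"Y": "significantly_increase",
                  "Z": "significantly_decrease",
                  "X": "slightly_decrease"}

def shape_center_q(q_center):
    """q_center: dict mapping action ids like 'A(Y+3,v=1500)' to Q-values."""
    shaped = {}
    for action_id, q in q_center.items():
        axis = action_id[2]   # 'X', 'Y' or 'Z' (assumed id format)
        command = CENTER_CONTROL.get(axis, "keep_current")
        shaped[action_id] = q * MULTIPLIERS[command]
    return shaped

print(shape_center_q({"A(Y+3,v=1500)": 1.0, "A(Z-1,v=1000)": 1.0}))
```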
Use Cases
This dataset is valuable for:
- Human-in-the-loop RL research: Benchmark for policy shaping methods
- Sample efficiency studies: Comparing autonomous vs. human-assisted learning curves (500 episodes)
- Intervention strategy analysis: When/how to trigger human guidance (~1,200 interventions logged)
- Transfer learning: Pre-training on synthetic data before real robot deployment
- Curriculum learning: Progressive difficulty via adjustable environment parameters
- Explainable AI: Linguistic commands as interpretable policy modifications
- Long-horizon learning: Extended 500-episode training for convergence analysis
Verification & Quality Checks
To verify dataset quality, examine:
- SA episodes clustered early (episodes 5-250) in the `sa_fraction_cq.png` plot
- CQ(λ) L_ave crosses the Λ threshold earlier than Q(λ) in `l_ave_q_vs_cq.png`
- The accumulated reward gap widens over time in `accumulated_reward_q_vs_cq.png`
- All "Better" entries favor CQ(λ) in `comparison_table.md` (except time, where lower is better)
- Interventions taper off as CQ(λ) performance improves (see `interventions.csv`)
- Success rate convergence: both algorithms reach stable performance by episodes 400-500
Reproduction
The dataset was generated using the included Python script:
```bash
python cq_simulator_v1.0.py
```
- Requirements: Python 3.7+, matplotlib
- Configuration: all parameters in the CONFIG dictionary (lines 33-90)
- Deterministic: random seed (42) ensures reproducibility
Citation
If you use this dataset in your research, please cite:
```bibtex
@inproceedings{kartoun2006cq,
  title={Human-robot collaborative learning system for inspection},
  author={Kartoun, Uri and Stern, Helman and Edan, Yael},
  booktitle={IEEE International Conference on Systems, Man and Cybernetics},
  year={2006}
}

@article{kartoun2010cq,
  title={A human-robot collaborative reinforcement learning algorithm},
  author={Kartoun, Uri and Stern, Helman and Edan, Yael},
  journal={Journal of Intelligent \& Robotic Systems},
  volume={60},
  pages={217--239},
  year={2010},
  publisher={Springer}
}
```
Dataset Metadata
```yaml
dataset_info:
  name: cq-lambda-bag-shaking
  version: 1.0
  task_type: reinforcement_learning
  domain: robotics_manipulation
  learning_paradigm: human_in_the_loop
  format: csv
  size: ~35MB
  episodes: 10000
  steps: ~500000
  interventions: ~1200
  languages: en
  license: research_use
```
- Dataset generated: January 2026
- Algorithm: CQ(λ) with performance-triggered linguistic policy shaping
- Environment: Synthetic bag-shaking task with hidden state variables