🤖 Technical Reference v2.0

Robotics & Embodied AI

A comprehensive technical deep-dive into sensors, actuators, kinematics, planning algorithms, neural architectures, and the frontier of physically intelligent machines.

12 Core Topics · 50+ Algorithms · 8 Quiz Questions · 6-DOF Reference
🦾
01 — Mechanics: Kinematics & Dynamics
FK

Forward Kinematics

Computes end-effector pose from joint angles. Uses Denavit-Hartenberg (DH) parameters: a series of 4×4 homogeneous transforms T₀ₙ = T₀₁ · T₁₂ · … · Tₙ₋₁ₙ.

T = Rot(z,θ) · Trans(0,0,d) · Trans(a,0,0) · Rot(x,α)
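The per-joint transform above can be composed exactly as the FK equation states. A minimal numpy sketch (the two-link parameter values are illustrative, not from a real robot):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """T = Rot(z,theta) · Trans(0,0,d) · Trans(a,0,0) · Rot(x,alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms: T0n = T01 · T12 · ... · Tn-1,n."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Planar 2R arm (alpha = d = 0), link lengths 1.0 and 0.5
T = forward_kinematics([(np.pi / 2, 0, 1.0, 0), (-np.pi / 2, 0, 0.5, 0)])
# Tip position is T[:3, 3]
```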
IK

Inverse Kinematics

Finds joint configurations for a desired end-effector pose. Solved analytically in closed form or numerically via iterative Jacobian methods. Multiple solutions exist even for 6-DOF arms (e.g. elbow-up/elbow-down); redundant robots (DOF > 6) have infinitely many.

Δq = J⁺(q) · Δx [Jacobian pseudoinverse]
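A numerical sketch of iterative pseudoinverse IK, Δq = J⁺Δx, on a 2-link planar arm (link lengths, start guess, and target are illustrative):

```python
import numpy as np

L1, L2 = 1.0, 1.0

def fk(q):
    """Tip position of a planar 2R arm."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik(target, q=np.array([0.3, 0.3]), iters=200):
    for _ in range(iters):
        dx = target - fk(q)
        if np.linalg.norm(dx) < 1e-8:
            break
        q = q + np.linalg.pinv(jacobian(q)) @ dx   # Δq = J⁺ Δx
    return q

q_sol = ik(np.array([1.0, 1.0]))
```

Each iteration is a Gauss-Newton step; the pseudoinverse returns the minimum-norm joint update that achieves the task-space correction.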
DH

Denavit-Hartenberg Params

4 parameters per joint: a (link length), α (link twist), d (link offset), θ (joint angle). Reduces kinematic chain to systematic 4×4 matrix products.

[a, α, d, θ] → Tᵢ₋₁,ᵢ
WS

Workspace Analysis

Reachable workspace: all poses reachable with at least one joint config. Dexterous workspace: poses reachable with all orientations. Determined by link lengths, joint limits, and DOF.

SG

Screw Theory (Lie Groups)

Modern alternative to DH. Uses the SE(3) Lie group / Lie algebra. Product of Exponentials (PoE) formula: T = e^[S₁]θ₁ · e^[S₂]θ₂ · … · M gives a cleaner representation with no DH frame-assignment conventions or parameter ambiguities.

T(θ) = e^[S₁]θ₁ · … · e^[Sₙ]θₙ · M
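The PoE formula can be evaluated with the closed-form se(3) exponential (numpy only; the planar 2R arm, its screw axes, and home pose M below are an illustrative example):

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_twist(S, theta):
    """Matrix exponential e^[S]θ for a unit screw S = (ω, v)."""
    w, v = np.asarray(S, float)[:3], np.asarray(S, float)[3:]
    W = skew(w)
    R = np.eye(3) + np.sin(theta)*W + (1 - np.cos(theta)) * W @ W
    G = np.eye(3)*theta + (1 - np.cos(theta))*W + (theta - np.sin(theta)) * W @ W
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, G @ v
    return T

def poe_fk(screws, thetas, M):
    """T(θ) = e^[S1]θ1 · ... · e^[Sn]θn · M."""
    T = np.eye(4)
    for S, th in zip(screws, thetas):
        T = T @ exp_twist(S, th)
    return T @ M

# Planar 2R arm, both links length 1, home pose M at (2, 0, 0)
S1 = [0, 0, 1, 0, 0, 0]       # revolute about z through the origin
S2 = [0, 0, 1, 0, -1, 0]      # revolute about z through (1,0,0): v = -ω × q
M = np.eye(4); M[0, 3] = 2.0
T = poe_fk([S1, S2], [np.pi/2, -np.pi/2], M)
```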
DOF

Degrees of Freedom

Grübler's formula for spatial mechanisms: M = 6(n − 1 − J) + Σfᵢ, where n counts links (including ground), J counts joints, and fᵢ is the DOF of joint i. A 3R planar arm has 3 DOF. Most industrial arms: 6 DOF. Redundant manipulators: ≥ 7 DOF (e.g. KUKA iiwa).

M = 6(n-1-J) + Σfᵢ
IK SOLVER

Cyclic Coordinate Descent (CCD)

Iteratively rotates each joint to minimize end-effector error. Fast for real-time applications. May get stuck in local minima. Used in game character animation and lightweight robotics.
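A minimal CCD sketch for a planar chain with cumulative joint angles (link lengths and target are illustrative; real implementations add joint limits):

```python
import numpy as np

def fk_points(q, lengths):
    """Positions p0..pn of every joint of a planar serial chain."""
    pts, ang, p = [np.zeros(2)], 0.0, np.zeros(2)
    for qi, li in zip(q, lengths):
        ang += qi
        p = p + li * np.array([np.cos(ang), np.sin(ang)])
        pts.append(p.copy())
    return pts

def ccd(target, q, lengths, iters=100, tol=1e-6):
    q = np.array(q, float)
    for _ in range(iters):
        for i in reversed(range(len(q))):       # distal joint first
            pts = fk_points(q, lengths)
            pivot, tip = pts[i], pts[-1]
            # Rotate joint i so the pivot→tip ray points at the target
            a_tip = np.arctan2(*(tip - pivot)[::-1])
            a_tgt = np.arctan2(*(target - pivot)[::-1])
            q[i] += a_tgt - a_tip
        if np.linalg.norm(fk_points(q, lengths)[-1] - target) < tol:
            break
    return q

q = ccd(np.array([1.0, 1.0]), [0.2, 0.2, 0.2], [1.0, 1.0, 0.5])
tip = fk_points(q, [1.0, 1.0, 0.5])[-1]
```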

IK SOLVER

FABRIK Algorithm

Forward And Backward Reaching Inverse Kinematics. Operates directly on joint positions, not angles. Very fast convergence, handles constraints well. Popular in real-time character rigs.
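A positions-only FABRIK sketch with a fixed base and no joint constraints (chain geometry and target are illustrative):

```python
import numpy as np

def fabrik(joints, target, iters=50, tol=1e-6):
    joints = [np.array(j, float) for j in joints]
    lengths = [np.linalg.norm(joints[i+1] - joints[i])
               for i in range(len(joints) - 1)]
    base, target = joints[0].copy(), np.array(target, float)
    for _ in range(iters):
        # Backward pass: pin the end-effector to the target, walk to base
        joints[-1] = target.copy()
        for i in reversed(range(len(joints) - 1)):
            d = joints[i] - joints[i+1]
            joints[i] = joints[i+1] + lengths[i] * d / np.linalg.norm(d)
        # Forward pass: pin the base, walk back to the end-effector
        joints[0] = base.copy()
        for i in range(len(joints) - 1):
            d = joints[i+1] - joints[i]
            joints[i+1] = joints[i] + lengths[i] * d / np.linalg.norm(d)
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints

chain = fabrik([(0, 0), (1, 0), (2, 0), (2.5, 0)], (1.0, 1.5))
```

Note that link lengths are preserved exactly by construction, which is why the method needs no angle arithmetic at all.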

IK SOLVER

Jacobian Transpose

Δq = αJᵀΔx. No matrix inversion needed. Slower convergence than pseudoinverse but computationally cheaper. Avoids singularity issues inherent to pseudoinverse methods.

Δq = α · Jᵀ(q) · Δx
IK SOLVER

Damped Least Squares

q̇ = Jᵀ(JJᵀ + λ²I)⁻¹ẋ. Adds damping factor λ to avoid singularities (a Levenberg-Marquardt-style regularization). Standard in industrial robots near workspace boundaries.

q̇ = Jᵀ(JJᵀ + λ²I)⁻¹ẋ
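One damped-least-squares velocity step, compared against the raw pseudoinverse near a singularity (the near-extended 2-link Jacobian and λ value are illustrative):

```python
import numpy as np

def dls_step(J, xdot, lam=0.1):
    """q̇ = Jᵀ(JJᵀ + λ²I)⁻¹ ẋ."""
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + lam**2 * np.eye(JJt.shape[0]), xdot)

# Near-singular Jacobian: planar 2R arm almost fully extended (q2 ≈ 0)
q1, q2, L1, L2 = 0.5, 1e-4, 1.0, 1.0
s1, c1 = np.sin(q1), np.cos(q1)
s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
J = np.array([[-L1*s1 - L2*s12, -L2*s12],
              [ L1*c1 + L2*c12,  L2*c12]])
xdot = np.array([0.0, 0.1])

qdot_dls = dls_step(J, xdot)          # stays bounded
qdot_pinv = np.linalg.pinv(J) @ xdot  # blows up near the singularity
```

The damping caps the joint speed at roughly ‖ẋ‖/(2λ), at the cost of a small task-space tracking error.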
JACOBIAN

Geometric Jacobian

Maps joint velocities q̇ to end-effector velocities ẋ = J(q)q̇. 6×n matrix (3 linear + 3 angular velocity rows). Rank deficiency indicates a kinematic singularity.

ẋ = J(q) · q̇ [6×n matrix]
JACOBIAN

Singularities

Configurations where rank(J) < 6 — robot loses ability to move in certain directions. Types: wrist singularity (collinear last 3 axes), elbow singularity (extended/retracted), shoulder singularity.

JACOBIAN

Manipulability

w = √det(JJᵀ) measures how far the configuration is from singularity. Yoshikawa manipulability ellipsoid: semi-axes are the singular values σᵢ of J. Used for optimizing redundant motion to maintain dexterity.

w = √det(J·Jᵀ)
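For a planar 2R arm the measure reduces to w = |L1·L2·sin(q₂)|, which makes it easy to check numerically (joint values are illustrative):

```python
import numpy as np

def manipulability(J):
    """Yoshikawa measure w = sqrt(det(J Jᵀ))."""
    return np.sqrt(np.linalg.det(J @ J.T))

def planar_jacobian(q1, q2, L1=1.0, L2=1.0):
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

w_bent = manipulability(planar_jacobian(0.3, np.pi / 2))  # far from singular
w_flat = manipulability(planar_jacobian(0.3, 0.01))       # near singular
```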
JACOBIAN

Null Space Motion

For redundant robots: q̇ = J⁺ẋ + (I - J⁺J)q̇₀. The null space projector (I - J⁺J) allows joint motion without end-effector motion — used for obstacle avoidance, joint limit avoidance.

q̇ = J⁺ẋ + (I-J⁺J)q̇₀
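A small numeric check of the null-space projector: a secondary velocity q̇₀ pushed through (I − J⁺J) produces zero end-effector motion (the 1×3 Jacobian stands in for any redundant task):

```python
import numpy as np

J = np.array([[1.0, 0.5, 0.2]])          # 1 task dim, 3 joints → redundant
J_pinv = np.linalg.pinv(J)
N = np.eye(3) - J_pinv @ J               # null-space projector

xdot = np.array([0.3])                   # commanded task velocity
qdot0 = np.array([0.0, 1.0, -1.0])       # secondary objective (e.g. joint limits)
qdot = J_pinv @ xdot + N @ qdot0         # q̇ = J⁺ẋ + (I − J⁺J)q̇₀

task_vel = J @ qdot                      # still exactly the commanded ẋ
null_effect = J @ (N @ qdot0)            # ≈ 0: no end-effector motion
```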
DYNAMICS

Newton-Euler Equations

Recursive algorithm: O(n) complexity. Forward pass: propagate velocities/accelerations outward. Backward pass: propagate forces/torques inward. Most efficient for real-time control.

DYNAMICS

Lagrangian Dynamics

τ = M(q)q̈ + C(q,q̇)q̇ + g(q). Mass matrix M, Coriolis/centrifugal C, gravity g. Elegant for analysis and model-based control. Computationally O(n³) but symbolic simplification possible.

τ = M(q)q̈ + C(q,q̇)q̇ + g(q)
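A 1-DOF pendulum is the smallest instance of this equation: M = m·l², C = 0, g(q) = m·g·l·sin(q). Inverse dynamics then gives the torque for a desired acceleration (values illustrative):

```python
import numpy as np

m, l, g0 = 1.0, 0.5, 9.81   # mass, link length, gravity

def inverse_dynamics(q, qd, qdd):
    """τ = M(q)q̈ + C(q,q̇)q̇ + g(q) for a single revolute joint."""
    M = m * l**2
    G = m * g0 * l * np.sin(q)
    return M * qdd + G        # C(q, q̇) = 0 for one joint

# Torque to hold the arm horizontal (q = π/2, no motion): pure gravity term
tau_hold = inverse_dynamics(np.pi / 2, 0.0, 0.0)   # = m·g·l = 4.905
```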
DYNAMICS

Inertia Tensor

I ∈ ℝ³ˣ³ describes mass distribution. Diagonal elements: moments of inertia. Off-diagonal: products of inertia. Diagonalized via principal axes transformation. Key for accurate torque prediction.

DYNAMICS

Trajectory Dynamics

Operational space dynamics: Λ(x)ẍ + μ(x,ẋ) + p(x) = F. Decouples task-space behavior. Khatib's operational space formulation enables force-controlled tasks in Cartesian space.

📡
02 — Perception: Sensor Technologies
Range
LiDAR
Depth
RGB-D Camera
Inertial
IMU
Force
F/T Sensor
Tactile
Skin Array
Sound
Sonar / Ultrasonic
Proprioception
Encoders
Radio
mmWave Radar

LiDAR — Light Detection and Ranging

Emits laser pulses and measures Time-of-Flight (ToF). Spinning mechanical LiDARs (Velodyne HDL-64E): 360° coverage, ~10Hz, up to 120m range. Solid-state and hybrid-scanning LiDARs (Livox): few or no moving parts, cheaper, limited FoV. Returns dense 3D point clouds (x,y,z,intensity). Key for SLAM and obstacle detection in autonomous vehicles.

Time-of-FlightSLAM3D Point Cloud905nm / 1550nmVelodyne · Ouster · Livox

RGB-D Camera — Color + Depth

Combines an RGB image with per-pixel depth. Methods: structured light (projects a known IR pattern; the original Kinect), active IR stereo (Intel RealSense D435), passive/active stereo (Luxonis OAK-D), indirect ToF (Microsoft Azure Kinect). Depth range: 0.1–10m. Used for 3D object detection, manipulation, surface reconstruction, and volumetric mapping (TSDF).

Structured LightActive StereoToFTSDF MappingRealSense · Kinect · OAK-D

IMU — Inertial Measurement Unit

Combines 3-axis accelerometer + 3-axis gyroscope (+ optional magnetometer = 9-DOF). Accelerometer: measures proper acceleration (gravity + linear). Gyroscope: angular velocity. Integration gives pose but drifts, so it is fused with vision/LiDAR via EKF/UKF. MEMS IMUs (Bosch BMI088): small, cheap, ~1kHz. Industrial-grade units (Xsens MTi) and fiber-optic gyro systems drift orders of magnitude less.

6-DOF · 9-DOFEKF FusionMEMS~1kHzDrift Compensation

Force/Torque Sensor

6-axis F/T sensors (ATI Mini45, OnRobot HEX) measure Fx,Fy,Fz,Tx,Ty,Tz. Based on strain gauges or piezoelectric elements. Resolution: ~0.01N / 0.01Nm. Used in impedance/admittance control, contact detection, assembly tasks. Wrist-mounted or embedded in joints.

6-axisImpedance ControlStrain GaugeATI · OnRobot

Tactile Skin Arrays

Distributed pressure sensing over robot surfaces. Technologies: capacitive, piezoresistive, fluid-impedance (BioTac), vision-based optical (GelSight, DIGIT). BioTac: 19 electrodes + pressure + temperature. DIGIT: high-res camera + gel for tactile image rendering. Enables slip detection, texture classification, fine manipulation.

CapacitiveGelSightBioTacDIGITSlip Detection

Sonar / Ultrasonic

Emits 40kHz sound pulses, measures echo ToF. Range: 2cm–4m. Wide beam angle (15–30°). Low cost, works in fog/dust. Used in robot bumpers, underwater ROVs (acoustic sonar), parking sensors. Not suitable for fast-moving objects or highly reflective materials.

40kHz2cm–4mBump DetectionUnderwater ROV

Joint Encoders (Proprioception)

Optical encoders: disc with slits, counts pulses → angular position. Incremental (relative) vs absolute. Resolution: 4096 to 1M CPR. Magnetic encoders (AS5048): Hall effect, robust to vibration. Used in every servo actuator for position/velocity feedback. Foundation of all robot control loops.

Incremental · AbsoluteHall Effect4096–1M CPRAS5048 · AMT102

mmWave Radar

77GHz FMCW radar: measures range, velocity (Doppler), angle. Works in rain, dust, fog, darkness. Point cloud sparser than LiDAR. Texas Instruments AWR1843: 3Tx × 4Rx MIMO array. Used for velocity estimation, through-wall detection, people tracking in service robots.

77GHz FMCWDoppler VelocityAll-weatherTI AWR1843MIMO Array
🗺️
03 — Navigation: Motion Planning Algorithms
SAMPLING

RRT — Rapidly-exploring Random Tree

Incrementally builds a tree by randomly sampling configuration space. O(n log n) typical. RRT-Connect: bi-directional. RRT*: asymptotically optimal. Highly effective in high-dimensional spaces (6+ DOF). Needs only a collision checker, not an explicit obstacle model.

q_new = Steer(q_rand, q_near, Δ)
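A bare-bones 2D RRT sketch of the sample → nearest → steer loop (obstacle-free world; a real planner inserts a collision check before accepting q_new; workspace bounds, step size, and tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rrt(start, goal, step=0.3, goal_tol=0.5, max_iters=10000):
    nodes = [np.array(start, float)]
    parents = [-1]
    goal = np.array(goal, float)
    for _ in range(max_iters):
        q_rand = rng.uniform(0, 10, size=2)
        i_near = int(np.argmin([np.linalg.norm(n - q_rand) for n in nodes]))
        q_near = nodes[i_near]
        d = q_rand - q_near
        q_new = q_near + step * d / max(np.linalg.norm(d), 1e-12)  # Steer by Δ
        nodes.append(q_new)        # a collision check would gate this append
        parents.append(i_near)
        if np.linalg.norm(q_new - goal) < goal_tol:
            path, i = [], len(nodes) - 1
            while i != -1:         # walk parent pointers back to the root
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

path = rrt([1.0, 1.0], [9.0, 9.0])
```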
SAMPLING

PRM — Probabilistic Roadmap

Two-phase: learning (build roadmap by random sampling + local planner), query (search roadmap with A*). Multi-query: roadmap reusable. Lazy PRM: defer collision checks. Best for static environments needing many queries.

SEARCH

A* Algorithm

f(n) = g(n) + h(n). Optimal if heuristic h is admissible (never overestimates). Variants: D* (dynamic replanning), Theta* (any-angle), Hybrid A* (non-holonomic vehicles). Operates on discretized grid or graph.

f(n) = g(n) + h(n)
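A textbook A* on a 4-connected grid with the Manhattan heuristic, which is admissible here because every move costs 1 (the grid below is an illustrative map, 1 = obstacle):

```python
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_heap = [(0, start)]
    g = {start: 0}
    parent = {start: None}
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt], parent[nxt] = ng, cur
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))  # f = g + h
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (4, 3))   # optimal: 13 moves, 14 cells
```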
POTENTIAL

Artificial Potential Fields

Goal creates attractive field, obstacles create repulsive fields. Robot follows gradient descent. Simple, real-time. Major issue: local minima (robot gets stuck). Mitigated by random walks or combined with global planner.

F = F_att + F_rep = -∇U(q)
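A gradient-descent sketch with one circular obstacle, using the standard quadratic attractive and 1/ρ-style repulsive potentials (gains, influence radius ρ₀, and positions are illustrative; with an obstacle placed symmetrically on the line to the goal this same loop stalls in a local minimum):

```python
import numpy as np

def grad_U(q, goal, obs, k_att=1.0, k_rep=0.5, rho0=1.0):
    g = k_att * (q - goal)                     # ∇U_att: pulls toward goal
    d = np.linalg.norm(q - obs)
    if d < rho0:                               # ∇U_rep: active inside rho0
        g += -k_rep * (1.0/d - 1.0/rho0) / d**2 * (q - obs) / d
    return g

goal = np.array([5.0, 0.0])
obs = np.array([2.5, 0.6])                     # offset obstacle: path bends around
q = np.array([0.0, 0.2])
for _ in range(5000):
    q = q - 0.01 * grad_U(q, goal, obs)        # follow -∇U downhill
    if np.linalg.norm(q - goal) < 1e-4:
        break
dist_to_goal = np.linalg.norm(q - goal)
```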
OPTIMAL

Trajectory Optimization

CHOMP, STOMP, TrajOpt. Minimize cost functional: C[ξ] = ∫ obstacle_cost + smoothness dt. Gradient-based or stochastic optimization. Supports constraint satisfaction (collision, dynamics, joint limits) as penalty terms.

C[ξ] = ∫(f_obs + λf_smooth)dt
COVERAGE

SLAM — Simultaneous Localization and Mapping

Builds map while tracking pose simultaneously. Graph-SLAM (offline): g2o, GTSAM pose graph optimization. Online: EKF-SLAM, FastSLAM (particle filter). Visual SLAM: ORB-SLAM3, LIO-SAM (LiDAR-Inertial). SLAM is the core navigation capability of autonomous mobile robots.

🎮
04 — Control Theory: Robot Control Systems
TYPICAL ROBOT CONTROL LOOP
Forward path: Task Planner (high level) → Trajectory Generator (position/velocity path) → Motion Controller (PID / MPC / impedance) → Actuator Driver (PWM / torque) → Robot Plant (physical system)

Feedback path: Sensors (encoder / IMU / F-T) → Observation y = h(x) → Sensor Fusion (EKF / UKF) → State Estimator (q, q̇, contact) → Feedback error signal → Motion Controller
CLASSIC

PID Controller

u = Kₚe + Kᵢ∫e dt + K_d ė. Proportional-Integral-Derivative. Kₚ: reduces error; Kᵢ: eliminates steady-state error; K_d: damps oscillations. Ziegler-Nichols tuning. The most widely deployed industrial controller.

u = Kₚe + Kᵢ∫e dt + K_d(de/dt)
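A discrete PID sketch driving a stable first-order plant ẋ = −x + u to a setpoint (plant model, gains, and dt are illustrative; real controllers add integral anti-windup and derivative filtering):

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """u = Kp·e + Ki·Σe·dt + Kd·Δe/dt."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Simulate 20 s of closed loop with explicit Euler, setpoint = 1.0
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=dt)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0 - x)
    x += (-x + u) * dt        # plant: ẋ = -x + u
```

The integral term is what removes the steady-state offset: with P-only control this plant would settle at u/(1+kp) short of the setpoint.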
ADVANCED

Model Predictive Control (MPC)

Solves constrained optimization over prediction horizon at each timestep. Handles state/input constraints explicitly. Real-time MPC (OSQP solver): ~1kHz for legged robots. Used in Boston Dynamics Atlas, quadruped locomotion.

min Σ||xₜ-x*||²Q + ||uₜ||²R s.t. dynamics
COMPLIANT

Impedance Control

Regulates the relationship between force and motion: M_d ẍ + B_d ẋ + K_d x = F_ext. Target mass-spring-damper behavior. Enables safe human-robot interaction (Hogan, 1985). Used in surgical robots, collaborative arms (KUKA iiwa, Franka).

M_d ẍ + B_d ẋ + K_d x = F_ext
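Simulating the target dynamics shows the expected compliant behavior: under a constant contact force the virtual spring settles at x = F_ext / K_d (1-DOF, explicit Euler; all parameter values illustrative):

```python
M_d, B_d, K_d = 1.0, 20.0, 100.0   # target mass, damping, stiffness
F_ext = 5.0                        # constant external contact force
x, xd, dt = 0.0, 0.0, 0.001

for _ in range(10000):             # 10 s of simulated contact
    xdd = (F_ext - B_d * xd - K_d * x) / M_d   # M_d ẍ + B_d ẋ + K_d x = F_ext
    xd += xdd * dt
    x += xd * dt

# Steady state: K_d·x = F_ext → x = 0.05; B/(2√(MK)) = 1 → critically damped
```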
NONLINEAR

Feedback Linearization

Cancels nonlinear robot dynamics through exact model inversion. τ = M(q)u + C(q,q̇)q̇ + g(q), where u is the new input. Transforms the system into a double integrator. Requires an accurate dynamic model and is sensitive to model error.

τ = M(q)·(q̈_d + Kv·ė + Kp·e) + n(q,q̇)
LEGGED

Whole-Body Control (WBC)

Hierarchical task-space QP controller. Solves: min ||J_task·q̈ - ẍ_task||² s.t. contact constraints, dynamics. Priority-based: higher priority tasks take precedence. Foundation of modern humanoid/quadruped control (Atlas, Spot, ANYmal).

RL

Reinforcement Learning Control

Policy π(a|s) trained via PPO/SAC in simulation (Isaac Gym, MuJoCo). Sim-to-real transfer via domain randomization. ETH Zurich ANYmal: RL policy for rough terrain locomotion. Boston Dynamics: RL for parkour behaviors. Replaces hand-tuned controllers.

🧠
05 — Intelligence: Embodied AI Foundations
CORE IDEA

Embodied Cognition

Intelligence emerges from the interaction between brain, body, and environment — not computation alone. Rodney Brooks' Behavior-based AI (1986): intelligence without explicit world model. Physical grounding enables symbol meaning (Harnad's Symbol Grounding Problem).

PARADIGM

Sense–Plan–Act

Traditional AI robot paradigm: perceive world → build symbolic model → plan actions → execute. Clean, analyzable but slow. Model inaccuracies compound. Replaced by reactive and learning-based systems for many tasks.

LEARNING

Imitation Learning (IL)

Robot learns from human demonstrations. Behavior Cloning (BC): supervised learning on (state, action) pairs. DAgger (Dataset Aggregation): corrects distribution shift by aggregating expert corrections. Works well for structured tasks. Limited generalization outside the training distribution.

L_BC = E[(π(s) - a_demo)²]
LEARNING

Reinforcement Learning (RL)

Agent learns policy π maximizing cumulative reward. PPO (Proximal Policy Optimization) and SAC (Soft Actor-Critic) dominate. Sparse reward: Hindsight Experience Replay (HER). Locomotion: curriculum training. Multi-task: MTRL, PEARL.

max_π E[Σ γᵗ r(sₜ,aₜ)]
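The discounted return inside this objective is computed with the standard backward recursion Gₜ = rₜ + γ·Gₜ₊₁ (the reward sequence and γ below are illustrative):

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Gₜ = Σₖ γᵏ r₍ₜ₊ₖ₎ via the backward recursion Gₜ = rₜ + γ Gₜ₊₁."""
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

G = discounted_returns([0.0, 0.0, 1.0], gamma=0.5)
# G = [0.25, 0.5, 1.0]
```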
GROUNDING

Language-Conditioned Policies

RT-2 (Google DeepMind): VLM fine-tuned directly as robot policy — takes image + text instruction, outputs robot action tokens. SayCan: LLM scores feasibility × affordance of actions. OpenVLA: open-source VLA with 7B parameters.

WORLD MODEL

World Models in Robotics

Robot internally simulates future states. RSSM (Dreamer): latent dynamics model, plans in imagination. GR-1: generalist robot policy with video prediction. UniSim: neural simulator learns physical world from video data. Enables model-based RL with limited real-world data.

🏗️
06 — Neural Architectures: Foundational Models & Policies
VISION-LANGUAGE-ACTION MODEL (VLA)
Camera Input (RGB / depth) → Vision Encoder (ViT / DINOv2) → Transformer (cross-attention) → Action Head (diffusion / token) → Robot Commands (Δx, Δy, Δz, grip)

Language Instruction ("Pick the cup") → LLM Encoder (CLIP / T5) → Transformer (cross-attention)
POLICY

Diffusion Policy

Models robot action distribution as denoising diffusion process. Learns to reverse Gaussian noise over action sequences. Handles multi-modal action distributions. Chi et al. 2023. State-of-the-art on complex manipulation. DDPM training, DDIM fast inference.

p(aₜ|oₜ) via denoising score matching
POLICY

ACT — Action Chunking with Transformers

Zhao et al. 2023 (Stanford). CVAE encoder-decoder with transformer. Predicts L-step action chunks to reduce compounding error. Temporal ensembling at inference. Trained on 50 bimanual demos per task. Achieves ~80% success on fine manipulation.

FOUNDATION

RT-2 (Robotic Transformer 2)

Google DeepMind 2023. Co-fine-tunes PaLI-X (55B) on robot data. Robot actions as language tokens. Emergent: generalized OOD reasoning, counting, semantic understanding transferred from web scale. Roughly doubles success on unseen tasks vs RT-1.

FOUNDATION

π₀ (Pi Zero)

Physical Intelligence 2024. Flow Matching policy on top of PaliGemma VLM (3B). Trained on 10,000+ hours of robot data across diverse tasks/embodiments. Zero-shot cross-task transfer. Dexterous manipulation of deformable objects (folding laundry).

FOUNDATION

GR-1 / UniPi

Generalist robot policy with a video-prediction backbone. GR-1 (ByteDance, 2023): GPT-style backbone predicts future video frames + actions jointly. UniPi: treats robot learning as text-conditioned video generation. Enables planning through imagination.

ARCHITECTURE

Action Tokenization

Discretizes continuous actions into vocabulary tokens (e.g., 256 bins per dimension). Enables LLM to directly predict actions. RT-2, OpenVLA use this. Challenges: precision loss, exponential action space. Alternatives: flow matching, regression head, diffusion head.
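A minimal uniform tokenizer sketch: actions clipped to [−1, 1] and discretized into 256 bins per dimension, then decoded back to bin centers (the bin count and range are illustrative; RT-2-style models additionally map bin indices to reserved vocabulary tokens):

```python
import numpy as np

N_BINS = 256

def tokenize(actions, low=-1.0, high=1.0):
    """Continuous action → integer bin index in [0, N_BINS-1]."""
    a = np.clip(actions, low, high)
    bins = ((a - low) / (high - low) * N_BINS).astype(int)
    return np.minimum(bins, N_BINS - 1)

def detokenize(tokens, low=-1.0, high=1.0):
    """Integer bin index → bin-center value."""
    return low + (tokens + 0.5) * (high - low) / N_BINS

a = np.array([-1.0, -0.3, 0.0, 0.42, 1.0])
tokens = tokenize(a)
a_rec = detokenize(tokens)
max_err = np.max(np.abs(a - a_rec))   # ≤ half a bin width (1/256)
```

The reconstruction error bound is exactly the precision-loss trade-off the card describes: finer bins shrink it but lengthen the token vocabulary.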

# Minimal Diffusion Policy inference (simplified sketch — policy_net is
# any trained noise-prediction network; the scheduler follows the
# Hugging Face diffusers DDIMScheduler API)
import torch
from diffusers import DDIMScheduler

horizon, action_dim = 16, 7   # action-chunk length, action dimensionality

def diffusion_policy_infer(obs, policy_net, scheduler, n_steps=10):
    # Start from pure noise in action space
    actions = torch.randn((1, horizon, action_dim))
    scheduler.set_timesteps(n_steps)
    for t in scheduler.timesteps:
        # Predict noise residual conditioned on observation
        noise_pred = policy_net(actions, t, obs_cond=obs)
        # DDIM denoising step
        actions = scheduler.step(noise_pred, t, actions).prev_sample
    return actions  # denoised action sequence

# Action chunking: execute first k steps, re-plan
action_chunk = diffusion_policy_infer(obs, policy, sched)  # obs/policy/sched from context
robot.execute(action_chunk[0, :8])  # execute 8 of the 16 steps
07 — Dexterity: Grasping & Manipulation
GRASP

Grasp Quality Metrics

Epsilon metric: radius of the largest origin-centered wrench ball contained in the grasp wrench space (GWS); the minimum is taken over the GWS boundary. Q₁ = min singular value of G. Force-closure: object can resist arbitrary external wrench. Form-closure: pure geometry, no friction needed. Ferrari-Canny metric: gold standard.

ε = min_{w∈∂GWS} ||w||
GRASP

Grasp Pose Estimation

GraspNet-1Billion: CNN predicts 6-DOF grasp poses from point clouds. AnyGrasp: foundation model for grasping, 1M+ diverse objects. Contact-GraspNet: estimates parallel-jaw grasps from depth images. GPD (Grasp Pose Detection): evaluates sampled antipodal grasp candidates.

DEXTEROUS

In-Hand Manipulation

OpenAI Dactyl (2019): PPO + LSTM + domain randomization solves a Rubik's cube with the 24-DOF Shadow Hand. Key: ~6000 CPU-years in simulation. Sensing: fingertip positions + visual pose tracking. Enables rotation, regrasping, fine assembly.

PLANNING

Task and Motion Planning (TAMP)

Integrates symbolic task planning (PDDL) with continuous motion planning. PDDLStream: streams geometric samplers into task planner. TAMP solves long-horizon manipulation: grasp→place→push sequences. Challenges: combinatorial search, constraint satisfaction.

DEFORMABLE

Deformable Object Manipulation

Cloth, cables, food: complex physical simulation. DiffCloth, PyBullet-Cloth, Flex. Challenges: partial observability, high-dimensional state. Approaches: point cloud tracking (PlasticineLab), graph neural networks, visual imitation. Active research area.

PERCEPTION

6-DOF Object Pose Estimation

FoundPose, FoundationPose (NVIDIA 2024): generalizes to novel objects with single RGB-D reference image. DenseFusion: color+depth fusion. PVN3D: 3D keypoint voting. OnePose: structure-from-motion based, textureless objects. Critical for precise pick-and-place.

📅
08 — History: Key Milestones
1948
Norbert Wiener — Cybernetics
Founds feedback control theory as applied to biological and mechanical systems. Birth of modern control engineering.
1961
Unimate — First Industrial Robot
George Devol & Joseph Engelberger deploy first industrial arm at GM plant. 2-ton hydraulic robot for die casting and welding.
1968
Shakey the Robot (SRI)
First mobile robot using AI for navigation. Combined TV camera, bump sensors, A* planning on a PDP-10 computer.
1986
Brooks: Subsumption Architecture
Behavior-based robotics paradigm. Reactive layers replace centralized planning. Enabled fast, robust mobile robots. Inspired Roomba.
2000
Honda ASIMO
Bipedal humanoid walking stably, with running added in later versions. Zero Moment Point (ZMP) balance control. 26 DOF, 1.2m tall, 52kg.
2005
Stanley Wins the DARPA Grand Challenge
Stanley (Stanford) wins 2005 DARPA Grand Challenge. First autonomous vehicle to complete desert course. Spawned modern self-driving.
2012
Deep Learning Revolution
AlexNet wins ImageNet. Deep CNNs transform robot perception. Object detection, segmentation, depth estimation all improve dramatically.
2019
OpenAI Dactyl — Rubik's Cube
PPO + domain randomization solves Rubik's Cube one-handed. Largest demonstration of sim-to-real transfer in dexterous manipulation.
2021
Boston Dynamics Atlas Parkour
Atlas performs gymnastic backflips, parkour. Combines MPC, WBC, and RL-trained behaviors. Real-time perception-action loops.
2023
RT-2: VLA Foundation Models
Google co-trains VLM as robot policy. Emergent cross-task generalization. Beginning of foundation model era for robotics.
2024
Figure 01 / 1X Neo / Unitree G1
Race to commercial humanoid robots. OpenAI + Figure collaboration. Tesla Optimus Gen 2. PI foundation model π₀. Era of embodied AGI begins.
📊
09 — Comparison: Planning Algorithm Matrix
| Algorithm | Type | Completeness | Optimality | High-DOF | Real-Time | Best For | Complexity |
|---|---|---|---|---|---|---|---|
| RRT | Sampling | Prob. | No | Yes | Medium | Single-query, cluttered | O(n log n) |
| RRT* | Sampling | Prob. | Asympt. | Yes | Slow | Optimal paths, offline | O(n log n) |
| PRM | Sampling | Prob. | No | Yes | Fast (query) | Multi-query, static env | O(n² log n) |
| A* | Search | Yes | Yes | No | Medium | 2D/3D grid navigation | O(b^d) |
| D* Lite | Search | Yes | Yes | No | Fast replan | Dynamic environments | O(k log k) |
| CHOMP | Optim. | Local | Local | Yes | Medium | Smooth, collision-free | O(n·iter) |
| MPPI | Sampling | Prob. | Local | Yes | GPU fast | Nonlinear dynamics, GPU | O(N·H) parallel |
| RL Policy | Learning | Stoch. | Approx. | Yes | Fast (infer) | Complex tasks, locomotion | O(1) inference |
📖
10 — Reference: Technical Glossary
Actuator
Device that converts energy into mechanical motion. Types: DC motor, servo, hydraulic, pneumatic, piezoelectric, shape-memory alloy.
BVH (Bounding Volume Hierarchy)
Tree of bounding volumes for fast collision detection. O(log n) query. Used in FCL, Bullet physics engines.
C-Space (Configuration Space)
Abstract space where each point represents one robot configuration. C-free: obstacle-free subset. Planning happens in C-space.
CVAE (Conditional VAE)
Variational autoencoder conditioned on context. Used in ACT policy to model multi-modal action distributions from demonstrations.
Domain Randomization
Sim-to-real transfer technique: randomize simulator parameters (friction, mass, lighting) so real world appears as another domain variant.
EKF (Extended Kalman Filter)
Nonlinear state estimator linearizing around current estimate. Used in SLAM, VIO (Visual-Inertial Odometry), pose estimation.
End-Effector
Terminal device on robot arm: gripper, tool, sensor. Position/orientation described in 6D pose. Target of IK solver.
Force Closure
Grasp can resist arbitrary external wrenches using friction. Requires ≥ 2 frictional contacts. Test: grasp wrench space contains origin.
GNN (Graph Neural Network)
Neural network operating on graph-structured data. Used to model robot multi-body dynamics, point clouds, scene graphs.
Homogeneous Transform
4×4 matrix encoding rotation + translation in SE(3). Top-left 3×3: rotation. Top-right 3×1: translation. Bottom row: [0,0,0,1].
Isaac Gym / Isaac Lab
NVIDIA GPU-accelerated physics simulator. Runs 4096+ parallel environments for massively parallel RL training. 100-1000× faster than CPU sim.
Jacobian Pseudoinverse
J⁺ = Jᵀ(JJᵀ)⁻¹ (full row rank) or J⁺ = (JᵀJ)⁻¹Jᵀ (full col rank). Moore-Penrose generalized inverse for IK and force mapping.
KDL (Kinematics & Dynamics Library)
ROS/orocos library for real-time FK, IK, dynamics. C++ implementation. Standard in ROS robot descriptions.
LQR (Linear Quadratic Regulator)
Optimal linear controller minimizing J = ∫(xᵀQx + uᵀRu)dt. Offline Riccati equation solve. Used in linearized bipedal balance.
MuJoCo
Multi-Joint dynamics with Contact. Physics simulator optimized for contact-rich robot manipulation. Free since DeepMind acquired it (2022).
OSQP Solver
Operator Splitting QP solver. Used in real-time MPC for legged robots. Solves QPs at 1kHz with warm-starting. Open-source C library.
PDDL (Planning Domain Definition Language)
Standardized language for symbolic task planning. Defines actions, preconditions, effects. Used in TAMP for long-horizon manipulation.
QP (Quadratic Program)
Optimization: min ½xᵀPx + qᵀx s.t. Ax ≤ b. Core of MPC, WBC controllers. Solved at kHz rates with OSQP, qpOASES, HPIPM.
ROS 2 (Robot Operating System)
Middleware framework: DDS communication, real-time support, security. Nav2, MoveIt2 built on ROS 2. Industry moving from ROS 1.
SE(3) / SO(3)
SE(3): Special Euclidean group — rigid body poses (rotation + translation). SO(3): Special Orthogonal group — pure rotations. Lie groups underlying all robot poses.
SDF (Signed Distance Field)
3D volume where value = distance to nearest surface (negative inside). Used for collision detection, TSDF mapping, NeRF. Fast gradient computation for path optimization.
TSDF (Truncated SDF)
Volumetric 3D reconstruction method. Fuses depth frames into voxel grid. Used in KinectFusion, Open3D. Enables watertight mesh reconstruction in real-time.
URDF (Unified Robot Description Format)
XML format defining robot kinematic/dynamic/visual model. Links, joints, collision geometry, inertial params. Standard in ROS. Parsed by KDL, MoveIt, PyBullet.
ViT (Vision Transformer)
Transformer applied to image patches. Foundation of modern robot vision encoders (DINOv2, SigLIP). Captures global context vs CNNs' local receptive fields.
ZMP (Zero Moment Point)
Point on ground where net moment of gravity + inertial forces = zero about horizontal axes. Inside support polygon → stable. Core of ASIMO, early humanoid balance.
🧩
11 — Test Knowledge: Technical Quiz