Show HN: I gave my robot physical memory – it stopped repeating mistakes

Original link: https://github.com/robotmem/robotmem

## robotmem: a structured memory system for robots

robotmem is a system designed to improve robot learning by storing and retrieving past experiences. Across 1000 experiments, a robot using robotmem improved its success rate on the FetchPush task by **25 percentage points** (42% → 67%), reproducible in 5 minutes on CPU only.

The system records each episode's parameters, trajectories, and outcomes through APIs such as `learn`, `recall`, and `save_perception`. Unlike plain vector search, robotmem understands the *structure* of robot experiences, allowing retrieval by success, spatial proximity, and specific parameters via `context_filter` and `spatial_sort`.

Key features include automatic memory consolidation (merging similar experiences) and proactive recall in subsequent episodes. Data lives in a single SQLite database, accessible via Python import or a web UI. robotmem differs from existing memory systems (MemoryVLA, Mem0) by offering structured filtering, spatial retrieval, and physical-parameter storage, which are essential for robotics applications.

Hacker News: 11 points, posted by robotmem 2 hours ago.

Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.

FetchPush experiment: +25% success rate improvement (42% → 67%), CPU-only, reproducible in 5 minutes.

## robotmem 30-second demo: save → restart → recall

```python
from robotmem import learn, recall, save_perception, start_session, end_session

# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')

# Record experience
learn(
    insight="grip_force=12.5N yields highest grasp success rate",
    context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)

# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
    query="grip force parameters",
    context_filter='{"task.success": true}',
    spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)

# Store perception data
save_perception(
    description="Grasp trajectory: 30 steps, success",
    perception_type="procedural",
    data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)

# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
```
| API | Purpose |
| --- | --- |
| `learn` | Record physical experiences (parameters / strategies / lessons) |
| `recall` | Retrieve experiences: BM25 + vector hybrid search with `context_filter` and `spatial_sort` |
| `save_perception` | Store perception / trajectory / force data (visual / tactile / proprioceptive / auditory / procedural) |
| `forget` | Delete incorrect memories |
| `update` | Correct memory content |
| `start_session` | Begin an episode |
| `end_session` | End an episode (auto-consolidation + proactive recall) |

## Structured Experience Retrieval

Not just vector search — robotmem understands the structure of robot experiences:

```python
# Retrieve only successful experiences
recall(query="push to target", context_filter='{"task.success": true}')

# Find spatially nearest scenarios
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')

# Combine: success + distance < 0.05 m
recall(
    query="push",
    context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'
)
```
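To make the filter semantics concrete: as an illustration only (robotmem's actual matching is internal to the library), a dotted-path `context_filter` with a Mongo-style `$lt` operator can be evaluated against a nested episode record like this:

```python
import json

def get_path(doc, dotted):
    """Walk a dotted path like 'params.final_distance.value' into a nested dict."""
    cur = doc
    for key in dotted.split("."):
        if not isinstance(cur, dict) or key not in cur:
            return None
        cur = cur[key]
    return cur

def matches(doc, context_filter):
    """Return True if doc satisfies every condition in the filter.
    A condition is either a literal (exact match) or {"$lt": x}."""
    for path, cond in json.loads(context_filter).items():
        value = get_path(doc, path)
        if isinstance(cond, dict) and "$lt" in cond:
            if value is None or not value < cond["$lt"]:
                return False
        elif value != cond:
            return False
    return True

episode = {"task": {"success": True}, "params": {"final_distance": {"value": 0.03}}}
print(matches(episode, '{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}'))  # True
```

The dotted-path form lets one filter string address any depth of the context JSON without a schema.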

## Context JSON — 4 Sections

```json
{
    "params":  {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
    "spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
    "robot":   {"id": "fetch-001", "type": "Fetch", "dof": 7},
    "task":    {"name": "push_to_target", "success": true, "steps": 38}
}
```

Each recalled memory automatically extracts params / spatial / robot / task as top-level fields.
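Assuming a recalled memory is a dict whose top level mirrors the four context sections (a sketch of the shape implied above, not robotmem's exact return type), downstream code can read physical parameters without any string parsing:

```python
# Hypothetical recalled memory, shaped like the context JSON above.
memory = {
    "insight": "grip_force=12.5N yields highest grasp success rate",
    "params":  {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
    "spatial": {"object_position": [1.3, 0.7, 0.42]},
    "robot":   {"id": "fetch-001", "type": "Fetch", "dof": 7},
    "task":    {"name": "push_to_target", "success": True, "steps": 38},
}

# Because params is a top-level field, the value and unit come out directly.
force = memory["params"]["grip_force"]
print(f'{force["value"]} {force["unit"]}')  # 12.5 N
```

This is the payoff of structured storage: a controller can feed `force["value"]` straight back into its next grasp attempt.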

## Memory Consolidation + Proactive Recall

`end_session` automatically triggers:

  • Consolidation: Merges similar memories with Jaccard similarity > 0.50 (protects constraint / postmortem / high-confidence entries)
  • Proactive Recall: Returns historically relevant memories for the next episode
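The consolidation rule above can be sketched as plain set arithmetic. This is a simplification under stated assumptions: token-set Jaccard over the insight text, a greedy keep-first merge, and a `type` field marking protected entries (robotmem's real merge logic is not shown in this page):

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the two insights' lowercase token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def consolidate(memories, threshold=0.50, protected=("constraint", "postmortem")):
    """Greedy merge: drop a memory that is a near-duplicate of one already kept."""
    kept = []
    for mem in memories:
        if mem.get("type") in protected:
            kept.append(mem)  # protected entries are never merged away
            continue
        if any(jaccard(mem["insight"], k["insight"]) > threshold for k in kept):
            continue          # near-duplicate of something we already keep
        kept.append(mem)
    return kept

mems = [
    {"insight": "grip_force=12.5N yields highest grasp success rate"},
    {"insight": "grip_force=12.5N yields highest grasp success"},   # merged away
    {"insight": "approach from above to avoid collisions", "type": "constraint"},
]
print(len(consolidate(mems)))  # 2
```

Keeping the first occurrence and dropping later near-duplicates bounds database growth while preserving the oldest evidence for each lesson.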
```shell
cd examples/fetch_push
pip install gymnasium-robotics
PYTHONPATH=../../src python demo.py  # 90 episodes, ~2 min
```

Three-phase experiment: baseline → memory writing → memory utilization. Expected Phase C success rate 10-20% higher than Phase A.
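In outline, the three phases differ only in whether memory is written and whether recall guides the policy. The following is a structural toy, not the actual `demo.py`: the success probabilities and episode counts are invented stand-ins to show the phase wiring.

```python
import random

random.seed(0)

def run_phase(episodes, use_memory, memory):
    """Toy stand-in: having usable memory lifts the per-episode success probability."""
    successes = 0
    for _ in range(episodes):
        p_success = 0.6 if (use_memory and memory) else 0.4   # invented numbers
        ok = random.random() < p_success
        successes += ok
        if not use_memory:
            memory.append({"success": ok})  # write-only phases record outcomes
    return successes / episodes

memory = []
phase_a = run_phase(30, use_memory=False, memory=[])      # Phase A: baseline, writes discarded
run_phase(30, use_memory=False, memory=memory)            # Phase B: memory writing
phase_c = run_phase(30, use_memory=True, memory=memory)   # Phase C: memory utilization
print(f"A={phase_a:.2f}  C={phase_c:.2f}")
```

The point of the A/C comparison is that the only variable changed between the phases is access to accumulated memory.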

```
SQLite + FTS5 + vec0
├── BM25 full-text search (jieba CJK tokenizer)
├── Vector search (FastEmbed ONNX, CPU-only)
├── RRF fusion ranking
├── Structured filtering (context_filter)
└── Spatial nearest-neighbor sorting (spatial_sort)
```
  • CPU-only, no GPU required
  • Single-file database ~/.robotmem/memory.db
  • MCP Server (7 tools) or direct Python import
  • Web management UI: robotmem web
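The RRF fusion step in the stack above combines the BM25 and vector rankings without having to normalize their score scales. A minimal version of the standard reciprocal-rank formula (k = 60 is the common default; robotmem's exact constant is not stated here):

```python
def rrf(rankings, k=60):
    """Fuse ranked lists of doc ids: score(d) = sum over lists of 1 / (k + rank)."""
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits   = ["ep7", "ep3", "ep9"]  # full-text ranking
vector_hits = ["ep3", "ep1", "ep7"]  # embedding ranking
print(rrf([bm25_hits, vector_hits]))  # ['ep3', 'ep7', 'ep1', 'ep9']
```

Documents that appear high in either list rise to the top, and a document found by both retrievers (here `ep3` and `ep7`) outranks one found by only one.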
| Feature | MemoryVLA (Academic) | Mem0 (Product) | robotmem |
| --- | --- | --- | --- |
| Target users | Specific VLA models | Text AI | Robotic AI |
| Memory format | Vectors (opaque) | Text | Natural language + perception + parameters |
| Structured filtering | No | No | Yes (`context_filter`) |
| Spatial retrieval | No | No | Yes (`spatial_sort`) |
| Physical parameters | No | No | Yes (`params` section) |
| Installation | Compile from paper code | pip install | pip install |
| Database | Embedded | Cloud | Local SQLite |

Apache-2.0
