DevLog 250623: SadTalker + MCP Persona Update

> Log Date: 2025-06-23

The focus this week was tight and productive: getting SadTalker integrated with the GPU watch pipeline and updating the persona system within the MCP architecture. These updates push the system toward more expressive multi-agent responses.

I finalized the folder structure and moved all animated render output into a dedicated flow. SadTalker now listens for both image and voice input, generates animated heads, and drops video output into the appropriate directory. I also formalized the personalities within the MCP via a clear persona_map.json config.


SadTalker Setup Highlights


Folder Structure Overview

```
models/sad-talker/
scripts/
├── run_sadtalker.py
└── sadtalker_watch.py
triggers/gpu_watch/
├── stable_input/
├── audio_input/
└── stable_output/
    └── sadtalker/
```
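
scripts/sadtalker_watch.py is the glue here. For the log, a minimal sketch of what the watch loop looks like, assuming a simple polling design and hypothetical --image/--audio/--out flags on run_sadtalker.py (the real wrapper's interface may differ):

```python
import subprocess
import time
from pathlib import Path

# Directories from the folder structure above.
STABLE_INPUT = Path("triggers/gpu_watch/stable_input")
AUDIO_INPUT = Path("triggers/gpu_watch/audio_input")
OUTPUT_DIR = Path("triggers/gpu_watch/stable_output/sadtalker")


def watch(poll_seconds: float = 2.0) -> None:
    """Poll stable_input/ and render each new image against voice.wav."""
    processed: set[Path] = set()
    while True:
        audio = AUDIO_INPUT / "voice.wav"
        for image in sorted(STABLE_INPUT.glob("*.png")):
            if image in processed or not audio.exists():
                continue
            # Hypothetical flags; the real run_sadtalker.py may take
            # different arguments.
            subprocess.run(
                [
                    "python", "scripts/run_sadtalker.py",
                    "--image", str(image),
                    "--audio", str(audio),
                    "--out", str(OUTPUT_DIR),
                ],
                check=True,
            )
            processed.add(image)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    watch()
```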

persona_map.json

Updated to give each bot a distinct role and personality, and to register the SadTalker script path as a callable entry:

```json
{
  "central": {"name": "Aryn", "role": "MCP Interface and Coordinator", "personality": "Friendly, energetic, helpful"},
  "doc": {"name": "Doc", "role": "Technical Architect", "personality": "Doctoral, structured, calm"},
  "kona": {"name": "Kona", "role": "Creative AI Assistant", "personality": "Warm, intuitive, artistic"},
  "glyph": {"name": "Glyph", "role": "Automation & Data", "personality": "Efficient, task-focused"},
  "estra": {"name": "Estra", "role": "Writer & Editor", "personality": "Dark humor, concise"},
  "sad-talker": {
    "enabled": true,
    "script": "scripts/run_sadtalker.py",
    "description": "Generate motion video from image/audio"
  }
}
```
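
Consuming the map on the MCP side is simple: persona entries carry a personality, tool entries carry an enabled flag and a script path. A small illustrative loader (not the actual aryncore-mcp code):

```python
import json
from pathlib import Path


def load_persona_map(path: str = "persona_map.json") -> tuple[dict, dict]:
    """Split the config into bot personas and enabled tool entries."""
    config = json.loads(Path(path).read_text())
    personas = {key: entry for key, entry in config.items() if "personality" in entry}
    tools = {key: entry for key, entry in config.items() if entry.get("enabled")}
    return personas, tools


if __name__ == "__main__":
    personas, tools = load_persona_map()
    for key, bot in personas.items():
        print(f"{bot['name']}: {bot['role']} ({bot['personality']})")
    print("SadTalker script:", tools["sad-talker"]["script"])
```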

🛠️ Run the SadTalker Workflow

  1. Drop a .png into triggers/gpu_watch/stable_input/
  2. Make sure voice.wav is present in triggers/gpu_watch/audio_input/
  3. Run the watcher script:

     python scripts/sadtalker_watch.py

The rendered video will appear in triggers/gpu_watch/stable_output/sadtalker/
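
For unattended runs, the same three steps can be scripted from Python (avatar.png is just a placeholder name):

```python
import shutil
import subprocess

# Stage the inputs the watcher expects (file names here are placeholders).
shutil.copy("avatar.png", "triggers/gpu_watch/stable_input/")
shutil.copy("voice.wav", "triggers/gpu_watch/audio_input/")

# Launch the watcher; output lands in triggers/gpu_watch/stable_output/sadtalker/.
subprocess.run(["python", "scripts/sadtalker_watch.py"], check=True)
```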


Next Tasks


GitHub repo: skyevault/aryncore-mcp


Signed,
Lorelei Noble
