The focus this week was tight and productive: getting SadTalker integrated with the GPU watch pipeline and updating the persona system within the MCP architecture. These updates push the system toward more expressive multi-agent responses.
I finalized the folder structure and moved all animated render output into a dedicated flow. SadTalker now listens for both image and voice input, generates animated heads, and drops the video output into the appropriate directory (a rough sketch of the watch loop follows the folder layout below). I also formalized the bot personalities within the MCP via a clear `persona_map.json` config.
- Added `models/sad-talker` and `run_sadtalker.py` to trigger video generation from a source image in `stable_input/` and pair it with `audio_input/voice.wav`
- Rendered videos land in `stable_output/sadtalker/`
Updated layout:

```
models/sad-talker/
scripts/run_sadtalker.py
scripts/sadtalker_watch.py
triggers/gpu_watch/
├── stable_input/
├── audio_input/
└── stable_output/
    └── sadtalker/
```
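For reference, here is a minimal Python sketch of what `sadtalker_watch.py` could look like under this layout. The polling interval, the `--image`/`--audio`/`--out` flags passed to `run_sadtalker.py`, and the de-duplication logic are assumptions for illustration, not the shipped implementation.

```python
# Minimal sketch of a GPU watch loop (assumed behavior, not the shipped script).
# Polls stable_input/ for new .png files, pairs each with audio_input/voice.wav,
# and hands the pair to run_sadtalker.py, which writes into stable_output/sadtalker/.
import subprocess
import time
from pathlib import Path

BASE = Path("triggers/gpu_watch")
STABLE_INPUT = BASE / "stable_input"
AUDIO_INPUT = BASE / "audio_input"
OUTPUT_DIR = BASE / "stable_output" / "sadtalker"


def watch(poll_seconds: float = 5.0) -> None:
    seen: set[Path] = set()
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    while True:
        voice = AUDIO_INPUT / "voice.wav"
        for image in sorted(STABLE_INPUT.glob("*.png")):
            if image in seen or not voice.exists():
                continue
            # Hypothetical CLI; the real run_sadtalker.py may take different arguments.
            subprocess.run(
                ["python", "scripts/run_sadtalker.py",
                 "--image", str(image),
                 "--audio", str(voice),
                 "--out", str(OUTPUT_DIR)],
                check=True,
            )
            seen.add(image)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watch()
```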
`persona_map.json` was updated to reflect distinct roles and script paths for each bot:
```json
{
  "central": {"name": "Aryn", "role": "MCP Interface and Coordinator", "personality": "Friendly, energetic, helpful"},
  "doc": {"name": "Doc", "role": "Technical Architect", "personality": "Doctoral, structured, calm"},
  "kona": {"name": "Kona", "role": "Creative AI Assistant", "personality": "Warm, intuitive, artistic"},
  "glyph": {"name": "Glyph", "role": "Automation & Data", "personality": "Efficient, task-focused"},
  "estra": {"name": "Estra", "role": "Writer & Editor", "personality": "Dark humor, concise"},
  "sad-talker": {
    "enabled": true,
    "script": "scripts/run_sadtalker.py",
    "description": "Generate motion video from image/audio"
  }
}
```
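To show how the MCP side could consume this config, here is a small loader sketch. The `load_personas` and `describe` helpers are hypothetical names used only to illustrate routing by persona key; the real coordinator code may differ.

```python
# Hypothetical persona_map.json loader (illustrative only).
import json
from pathlib import Path


def load_personas(path: str = "persona_map.json") -> dict:
    """Read the persona map so the coordinator can route requests by bot key."""
    return json.loads(Path(path).read_text())


def describe(personas: dict, key: str) -> str:
    entry = personas[key]
    # Tool-style entries such as "sad-talker" carry a script; bots carry name/role/personality.
    if "script" in entry:
        return f"{key}: runs {entry['script']} ({entry['description']})"
    return f"{entry['name']} ({entry['role']}): {entry['personality']}"


if __name__ == "__main__":
    personas = load_personas()
    print(describe(personas, "doc"))         # Doc (Technical Architect): Doctoral, structured, calm
    print(describe(personas, "sad-talker"))  # sad-talker: runs scripts/run_sadtalker.py (...)
```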
To generate a video:

1. Drop a `.png` into `stable_input/`
2. Make sure `voice.wav` is present in `audio_input/`
3. Run `python scripts/sadtalker_watch.py`
4. The rendered video will appear in `stable_output/sadtalker/`
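For completeness, a sketch of what `run_sadtalker.py` might do when the watcher fires: shell out to the SadTalker checkout with the image/audio pair. The wrapper's argument names are assumptions, and the `inference.py` flags shown (`--source_image`, `--driven_audio`, `--result_dir`) should be verified against the SadTalker version stored in `models/sad-talker`.

```python
# run_sadtalker.py sketch (assumed interface, not the committed script).
import argparse
import subprocess


def main() -> None:
    parser = argparse.ArgumentParser(description="Generate motion video from image/audio")
    parser.add_argument("--image", required=True, help="Source .png from stable_input/")
    parser.add_argument("--audio", required=True, help="Driving voice.wav from audio_input/")
    parser.add_argument("--out", default="triggers/gpu_watch/stable_output/sadtalker",
                        help="Directory where the rendered video should land")
    args = parser.parse_args()

    # Invoke the SadTalker inference entry point; confirm flag names against models/sad-talker.
    subprocess.run(
        ["python", "models/sad-talker/inference.py",
         "--source_image", args.image,
         "--driven_audio", args.audio,
         "--result_dir", args.out],
        check=True,
    )


if __name__ == "__main__":
    main()
```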
GitHub repo: skyevault/aryncore-mcp
Signed,
Lorelei Noble