Spent today advancing the MCP infrastructure by installing Automatic1111, confirming Ollama model access, and refining persona response behavior with Estra.
I made solid progress on the MCP stack today. A new aryncore-mcp repo has been established to hold modular tools, handler scripts, and persona logic; it is now the backbone of my orchestrated LLM system. Ollama is functioning, models are available, and Stable Diffusion via Automatic1111 is in progress.
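For reference, the rough shape the repo is heading toward (directory names below are illustrative placeholders, not the committed layout; only backend/mcp_orchestrator.py is implied by the entrypoint used below):

aryncore-mcp/
  backend/
    mcp_orchestrator.py    # entrypoint: routes requests between models, tools, and personas
  tools/                   # modular tool handlers
  personas/                # persona logic and prompt templates (Estra lives here)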
Ran the main entrypoint:
python3 -m backend.mcp_orchestrator
Confirmed Ollama is running and listening at localhost:11434. Queried the available models using:
curl http://localhost:11434/api/tags
Models available include mistral, llama3:8b-instruct, and codellama. Received an LLM error during a persona interaction due to a transient connection issue; verified it was not firewall-related (UFW is inactive).
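Both checks can also be scripted so the orchestrator verifies model availability and retries transient failures itself. A minimal sketch using only the standard library; the endpoints (/api/tags, /api/generate) are Ollama's standard HTTP API, while the helper names and retry count are my own choices:

import json
import time
import urllib.error
import urllib.request

OLLAMA = "http://localhost:11434"

def list_models():
    # GET /api/tags returns {"models": [{"name": "mistral:latest", ...}, ...]}
    with urllib.request.urlopen(f"{OLLAMA}/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]

def generate(prompt, model="mistral", retries=3):
    # POST /api/generate with streaming disabled returns a single JSON object
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(f"{OLLAMA}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["response"]
        except urllib.error.URLError:
            # Transient connection error: back off briefly, then retry
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)

if __name__ == "__main__":
    print(list_models())
    print(generate("Introduce yourself as Estra."))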
Attempted to forward Ollama’s port over SSH:
ssh -L 11434:localhost:11434 user@remote
The port was already in use locally. Confirmed with:
ss -tuln | grep 11434
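That conflict makes sense: the local Ollama daemon already owns 11434, so the tunnel needs a different local port (e.g. 11435 on the left-hand side of -L) or a quick check before binding. A small sketch of that check; only the port number is taken from above:

import socket

def port_in_use(port, host="127.0.0.1"):
    # connect_ex returns 0 when something is already listening on host:port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex((host, port)) == 0

print(port_in_use(11434))  # True here, since the local Ollama already holds the port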
Estra now prompts the user with a mission-oriented questionnaire. Planning to introduce a persistent memory prefix for each response: prototype with hardcoded headers first, then migrate to dynamic prefix injection per LLM response.
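As a first cut, the prefix can be a hardcoded header string prepended to every reply; the dynamic version swaps that constant for a function of the persona's state. A sketch in which the names (PersonaMemory, respond_static, respond_dynamic) and the header text are invented for illustration, not taken from aryncore-mcp:

# Prototype: every Estra reply carries the same hardcoded mission header.
MISSION_PREFIX = "[Estra | mission active]\n"

def respond_static(llm_reply):
    return MISSION_PREFIX + llm_reply

# Target: the prefix is rebuilt from persona memory on every turn.
class PersonaMemory:
    def __init__(self, persona):
        self.persona = persona
        self.facts = []

    def remember(self, fact):
        self.facts.append(fact)

    def prefix(self):
        recent = "; ".join(self.facts[-3:]) or "none"
        return f"[{self.persona} | recent: {recent}]\n"

def respond_dynamic(memory, llm_reply):
    return memory.prefix() + llm_reply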
No previous Automatic1111 installation found. Followed the standard setup instructions:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
mkdir -p models/Stable-diffusion/
bash webui.sh
For LAN access, use:
COMMANDLINE_ARGS="--listen" bash webui.sh
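Wiring the webui into the MCP stack will mean calling it over HTTP rather than through the browser. With --api added to COMMANDLINE_ARGS, Automatic1111 exposes a txt2img endpoint on port 7860; a minimal call might look like the sketch below (prompt, step count, and output path are placeholders):

import base64
import json
import urllib.request

def txt2img(prompt, url="http://localhost:7860"):
    # Assumes the webui was launched with COMMANDLINE_ARGS="--listen --api"
    payload = json.dumps({"prompt": prompt, "steps": 20}).encode()
    req = urllib.request.Request(f"{url}/sdapi/v1/txt2img", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        images = json.load(resp)["images"]  # list of base64-encoded PNGs
    with open("out.png", "wb") as f:
        f.write(base64.b64decode(images[0]))

txt2img("a lighthouse at dusk, oil painting")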
Repository: skyevault/aryncore-mcp
> Written and deployed by Lorelei Noble