DevLog 250608 – Sovereign Memory Shell with Ollama

> Log Date: 250608

Today I finalized a local, shell-script memory interface for Ollama running Mistral. The system provides contextual memory with no cloud APIs and lays the foundation for a privacy-first development partner with long-term recall.

This log documents the initial build of a persistent memory shell around the Ollama local LLM runtime. The goal is to establish a loop where each question builds on the last and the assistant truly remembers. Everything runs locally using `bash`, `jq`, and `curl`, with Mistral as the foundation model.
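
Before the shell loop can do anything, Ollama itself needs to be running with the model pulled. A quick sanity check, assuming a default install with the daemon on its standard port:

# One-time model download
ollama pull mistral

# Confirm the daemon is listening and the model is available
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'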


System Specs


Project Structure

sovereign-chat/
├── instructions/
│   └── system.txt
├── memory/
│   ├── chatlog.txt
│   └── session-YYYYMMDD.txt
├── prompts/
│   └── input.txt
├── run_chat.sh
└── .env (optional)
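
One way to scaffold that tree before wiring anything up:

# Create the directories and empty files the script expects
mkdir -p sovereign-chat/{instructions,memory,prompts}
cd sovereign-chat
touch instructions/system.txt memory/chatlog.txt prompts/input.txt run_chat.sh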

Instruction Layer

The assistant’s tone and behavioral bounds are set in `instructions/system.txt`.

You are a sovereign AI assistant built to help Lorelei Noble with her projects in storytelling, stained glass, and creative automation. Lorelei’s style is intuitive, poetic, nature-driven, and emotionally intelligent.

Do not use emojis. Always speak in a grounded, clear tone. Prioritize code and architecture for systems that run locally and protect privacy.

If she asks about design, blend function with beauty. If she is stuck emotionally, respond with compassion.
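
A quoted heredoc is a clean way to drop that text into place without the shell expanding anything inside it (instruction text abridged here):

cat > instructions/system.txt <<'EOF'
You are a sovereign AI assistant built to help Lorelei Noble with her projects in storytelling, stained glass, and creative automation.
...
EOF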

Prompt Handling

`prompts/input.txt` is the live user prompt, overwritten between runs. It simulates a local input field without relying on stdin or a chat UI.
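
Queuing the next turn is a single redirect (any question works here):

echo "What should the next layer of this memory system be?" > prompts/input.txt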

`memory/chatlog.txt` holds the full conversation history. It grows over time, and a `tail -n 20` pull brings recent memory back in to inform the next reply.


Main Script – run_chat.sh

This file ties the loop together. It reads the system instructions, pulls in recent memory, inserts the current input, posts the assembled prompt to the local Ollama API, and parses the response.

#!/bin/bash

MODEL="mistral"
SYSTEM_FILE="instructions/system.txt"
CHATLOG_FILE="memory/chatlog.txt"
INPUT_FILE="prompts/input.txt"

# Ensure the log exists so tail doesn't fail on the very first run
touch "$CHATLOG_FILE"

system_prompt=$(<"$SYSTEM_FILE")
memory_context=$(tail -n 20 "$CHATLOG_FILE")
user_input=$(<"$INPUT_FILE")

# Build the prompt with printf: inside plain double quotes bash would
# leave \n as a literal backslash-n, which jq would then escape and the
# model would see verbatim
full_prompt=$(printf '%s\n\nPrevious conversation:\n%s\n\nLorelei: %s\nAssistant:' \
  "$system_prompt" "$memory_context" "$user_input")

# jq -n assembles the request body, JSON-escaping the prompt safely
response=$(curl -s http://localhost:11434/api/generate \
  -d "$(jq -n \
    --arg model "$MODEL" \
    --arg prompt "$full_prompt" \
    --argjson stream false \
    '{model: $model, prompt: $prompt, stream: $stream}')" \
)

reply=$(echo "$response" | jq -r .response)

printf '\nAssistant: %s\n\n' "$reply"

# Append the exchange so the next run remembers this one
{
  printf 'Lorelei: %s\n' "$user_input"
  printf 'Assistant: %s\n\n' "$reply"
} >> "$CHATLOG_FILE"
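
For reference, the non-streaming reply from /api/generate is a single JSON object, roughly shaped as in the comment below, which is why a bare `jq -r .response` is all the parsing needed:

# Poke the endpoint directly to see the raw shape
curl -s http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Hello", "stream": false}' | jq .

# => { "model": "mistral", "response": "...", "done": true, ... }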

Dependencies

Everything runs on stock tooling: `bash`, `curl`, and `jq`, plus the Ollama runtime with the `mistral` model pulled.

Execution

chmod +x run_chat.sh
./run_chat.sh

Each time you run the script, the prompt is rebuilt from the last 20 lines of memory plus the `system.txt` instructions. The response is echoed to the terminal and appended back into memory, creating an ever-deepening contextual thread.
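
A thin wrapper can turn the one-shot script into a live loop. This is a sketch of a convenience layer, not part of the build above; run it from the project root:

#!/bin/bash
# chat.sh - hypothetical wrapper: read a line, stage it, run the loop
while true; do
  read -r -p "Lorelei: " line || break   # Ctrl-D ends the session
  printf '%s\n' "$line" > prompts/input.txt
  ./run_chat.sh
done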


Next Steps


— Lorelei Noble