# Foundation
The Foundation module sets up the base configuration for all other modules. You'll configure your Obsidian vault structure and connect your preferred AI provider.
## Prerequisites
- Obsidian installed and a vault created
- Python 3.11+ installed
- ~15 minutes to complete
## What You'll Get
- Organized folder structure for meetings, transcripts, and templates
- Central configuration file for all scripts
- AI provider connected and tested
- Ready to add additional modules
## Vault Structure
Create these folders in your Obsidian vault:
```
YourVault/
├── Meetings/       # Processed meeting notes
├── Transcripts/    # Archived raw transcripts
├── People/         # Person notes (@ First Last.md)
├── Templates/      # Tag reference and templates
└── scripts/        # Processing scripts
```
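If you'd rather script the setup than create folders by hand, here's a minimal sketch using Python's `pathlib` (it assumes your vault lives at `~/Documents/MyVault`; adjust the path to match your setup):

```python
# create_vault_folders.py - one-time setup helper (illustrative, not part of the module)
from pathlib import Path

VAULT = Path("~/Documents/MyVault").expanduser()  # adjust to your vault location

for folder in ("Meetings", "Transcripts", "People", "Templates", "scripts"):
    (VAULT / folder).mkdir(parents=True, exist_ok=True)  # safe to re-run
    print(f"ok: {VAULT / folder}")
```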
## AI Provider Setup

Choose one of the following AI providers. Each has different trade-offs:
### Ollama (Local - Maximum Privacy)
Best for: Users who want all data to stay on their machine.
- Download Ollama from ollama.ai
- Install and start Ollama
- Pull a model: `ollama pull llama3.2`
- Verify it's running: `curl http://localhost:11434/api/tags`
```yaml
# config.yaml
ai_provider: ollama
ai_model: llama3.2
ai_endpoint: http://localhost:11434
```
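Beyond the `curl` check above, you can send a test prompt from Python. A minimal sketch against Ollama's `/api/generate` endpoint (assumes the `requests` package is installed and the server is running):

```python
# Quick Ollama smoke test.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Reply with the word: ready", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])  # the model's completion text
```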
### OpenAI API (Easiest Setup)

Best for: Users who want quick setup and reliable performance.
- Get an API key from the OpenAI Platform
- Set your API key as an environment variable:

```bash
# Set in your shell profile (.zshrc, .bashrc)
export OPENAI_API_KEY="sk-..."
```

```yaml
# config.yaml
ai_provider: openai
ai_model: gpt-4o-mini
ai_endpoint: https://api.openai.com/v1
```
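To confirm the key works end to end, here's a minimal sketch against the Chat Completions endpoint (assumes `requests` is installed):

```python
# OpenAI smoke test (reads OPENAI_API_KEY from the environment).
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Reply with the word: ready"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```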
### Anthropic Claude API

Best for: Users who prefer Claude's writing style.
- Get an API key from the Anthropic Console
- Set your API key as an environment variable:

```bash
# Set in your shell profile
export ANTHROPIC_API_KEY="sk-ant-..."
```

```yaml
# config.yaml
ai_provider: anthropic
ai_model: claude-3-5-sonnet-20241022
ai_endpoint: https://api.anthropic.com
```
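A matching sketch for Anthropic's Messages API (assumes `requests`; note that Claude requires an explicit `max_tokens` and an `anthropic-version` header):

```python
# Anthropic smoke test (reads ANTHROPIC_API_KEY from the environment).
import os
import requests

resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
    },
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 32,
        "messages": [{"role": "user", "content": "Reply with the word: ready"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["content"][0]["text"])
```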
### Google Vertex AI (Enterprise)

Best for: Enterprise users with Google Cloud access.
- Set up a Google Cloud project with Vertex AI enabled
- Configure authentication with the gcloud CLI: `gcloud auth application-default login`

```yaml
# config.yaml
ai_provider: vertex
ai_model: gemini-1.5-flash
ai_endpoint: us-central1-aiplatform.googleapis.com
gcp_project: your-project-id
```
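Vertex authenticates with Google credentials rather than a static API key. A sketch using Application Default Credentials and the `generateContent` REST endpoint (assumes `requests` and `google-auth` are installed and you've run `gcloud auth application-default login`):

```python
# Vertex AI smoke test via REST with Application Default Credentials.
import requests
import google.auth
import google.auth.transport.requests

creds, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
creds.refresh(google.auth.transport.requests.Request())  # obtain an access token

location = "us-central1"
url = (
    f"https://{location}-aiplatform.googleapis.com/v1/projects/{project}"
    f"/locations/{location}/publishers/google/models/gemini-1.5-flash:generateContent"
)
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {creds.token}"},
    json={"contents": [{"role": "user", "parts": [{"text": "Reply with the word: ready"}]}]},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```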
## Configuration File

Create `config.yaml` in your scripts folder:
```yaml
# config.yaml - AI Executive Assistant Configuration

# Vault root and subfolders (folders are relative to the vault root)
vault_path: ~/Documents/MyVault
meetings_folder: Meetings
transcripts_folder: Transcripts
people_folder: People
templates_folder: Templates

# AI provider (ollama, openai, anthropic, vertex)
ai_provider: ollama
ai_model: llama3.2
ai_endpoint: http://localhost:11434

# Processing preferences
summary_style: concise          # concise or detailed
action_item_format: "- [ ] @Owner — Task"
date_format: "%Y-%m-%d"
```
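Scripts in later modules read this file at startup. A minimal loader sketch (assumes PyYAML is installed via `pip install pyyaml`; `load_config` is an illustrative helper, not the module's actual code):

```python
# config_loader.py - illustrative sketch of reading config.yaml.
from pathlib import Path
import yaml

def load_config(path: str = "scripts/config.yaml") -> dict:
    """Load config.yaml and expand ~ in the vault path."""
    config = yaml.safe_load(Path(path).read_text())
    config["vault_path"] = str(Path(config["vault_path"]).expanduser())
    return config

if __name__ == "__main__":
    cfg = load_config()
    print(cfg["ai_provider"], cfg["ai_model"])
```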
## Verify It Works

Test your AI provider connection:

```bash
python scripts/test_connection.py
```

Expected output:
```
✓ Config loaded successfully
✓ AI provider connected (ollama)
✓ Test prompt completed in 1.2s
Ready to process meetings!
```
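If you're building the scripts yourself, here is a hypothetical stand-in for `test_connection.py` that produces output in the shape above. It checks the Ollama provider only; the other providers need the request shapes shown in their sections:

```python
# test_connection.py - hypothetical stand-in, not the module's actual script.
import time
from pathlib import Path

import requests
import yaml

config = yaml.safe_load(Path("scripts/config.yaml").read_text())
print("✓ Config loaded successfully")

start = time.time()
resp = requests.post(
    f"{config['ai_endpoint']}/api/generate",  # Ollama-only; other providers differ
    json={"model": config["ai_model"], "prompt": "ping", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(f"✓ AI provider connected ({config['ai_provider']})")
print(f"✓ Test prompt completed in {time.time() - start:.1f}s")
print("Ready to process meetings!")
```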
## Troubleshooting

- Ollama not responding: make sure the server is running with `ollama serve`
- API key not found: verify your environment variable is set with `echo $OPENAI_API_KEY`
- Python module not found: install dependencies with `pip install -r requirements.txt` (a minimal example follows below)
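For reference, a `requirements.txt` covering the Python sketches in this guide might look like this (assumed for illustration, not the module's canonical file):

```
# requirements.txt - minimal set for the sketches above
requests
pyyaml
google-auth   # only needed for the Vertex AI provider
```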
## Next Steps
Now that your foundation is set up, continue to Meeting Processing to enable AI-powered summaries.