ContextPulse Sight — now available

Your AI can write code.
It can't see your screen.

ContextPulse captures your desktop continuously and serves it to any MCP-compatible AI agent. Install in 30 seconds. No cloud, no API keys, no cost.

$ pip install contextpulse-sight

Every AI conversation starts blind

Your AI assistant can refactor your codebase but has no idea what's on your screen. You're the bottleneck.

📸

Manual screenshots

Snipping Tool → paste → wait for the AI to process. Every time. For every question. It breaks your flow.

🔄

Context amnesia

New session, blank slate. Developers spend ~5 hours/week re-explaining context their AI should already have.

🔌

Agent silos

Claude Code can't see what Cursor shows. Gemini doesn't know what Copilot just did. Your tools don't share context.

<3ms
Per capture
<1%
CPU usage
0
Cloud dependencies
118
Tests passing
7
MCP tools
4
Storage modes

Composable context, one package at a time

Install what you need. Each package works standalone. Together, they compound.

Available now

ContextPulse Sight

Runs in your system tray. Captures your screen every 5 seconds with change detection. Serves every frame to any MCP-compatible AI agent via 7 built-in tools.

  • 4 capture modes: active monitor, all monitors, cursor region, auto-timer
  • Built-in OCR — extract text from any screen without sending an image
  • Activity database with full-text search across window titles and OCR text
  • Smart storage: switches to text-only capture on text-heavy screens, cutting disk usage by 59%
  • Window blocklist and auto-pause on screen lock
# In your AI agent (Claude Code, Cursor, etc.)
# Just ask naturally:

"What's on my screen right now?"

# Or via MCP tool call:
get_screenshot(mode="active")
get_screen_text()
get_recent(count=5)
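Under the hood, change detection can be as simple as comparing a hash of each frame's pixels against the last stored frame. A minimal sketch of the idea (illustrative only, not the actual ContextPulse implementation):

```python
import hashlib

def dedup_frames(frames):
    """Store a frame only when its content hash differs from the last stored one.
    Each frame is raw pixel bytes (e.g. the .rgb buffer a capture library returns)."""
    stored = []
    last_hash = None
    for frame in frames:
        h = hashlib.sha256(frame).hexdigest()
        if h != last_hash:          # screen changed -> keep this frame
            stored.append(frame)
            last_hash = h           # identical consecutive frames are skipped
    return stored

# Three captures, but the screen only changed once:
frames = [b"desktop-v1", b"desktop-v1", b"desktop-v2"]
print(len(dedup_frames(frames)))  # -> 2
```

This is why an always-on daemon can stay cheap: a hash comparison costs almost nothing, and unchanged frames never touch disk.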
Coming soon

ContextPulse Memory

Cross-session persistent memory so your AI agents don't start from zero. Decisions, preferences, and project context that accumulates over time and is shared across all your AI tools.

  • Shared memory across Claude Code, Cursor, Gemini CLI, and more
  • Outcome-based learning — what worked, what didn't
  • SQLite-backed, local-first, portable
# Agent A learns something at 2pm
memory.store({
  "fact": "User prefers pytest over unittest",
  "source": "claude-code",
  "confidence": 0.95
})

# Agent B knows it at 3pm
memory.recall("testing preferences")
# → "User prefers pytest over unittest"
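Since Memory is still unreleased, the calls above are illustrative. One way such a store could work locally, sketched with Python's stdlib sqlite3 (hypothetical design, not the shipped ContextPulse Memory API):

```python
import sqlite3

class MemoryStore:
    """Minimal sketch of a local, SQLite-backed shared memory.
    Hypothetical design -- not the shipped ContextPulse Memory API."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (fact TEXT, source TEXT, confidence REAL)"
        )

    def store(self, entry):
        self.db.execute(
            "INSERT INTO memory VALUES (?, ?, ?)",
            (entry["fact"], entry["source"], entry["confidence"]),
        )
        self.db.commit()

    def recall(self, query):
        # Naive substring match; a real store would rank results with FTS5.
        rows = self.db.execute(
            "SELECT fact FROM memory WHERE fact LIKE ?", (f"%{query}%",)
        ).fetchall()
        return [fact for (fact,) in rows]

m = MemoryStore()
m.store({"fact": "User prefers pytest over unittest",
         "source": "claude-code", "confidence": 0.95})
print(m.recall("pytest"))  # -> ['User prefers pytest over unittest']
```

Because the file is a single SQLite database, any agent on the machine can open it, which is what makes the "shared across agents" claim cheap to deliver.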
Coming soon

ContextPulse Agent

Multi-agent coordination so your AI tools work as a team, not isolated silos. Session protocols, handoff patterns, and shared state between any MCP-compatible agent.

  • Agent-to-agent session handoff
  • Shared project state across tools
  • Conflict detection when agents edit the same files
# Morning: Claude Code refactors auth
agent.log("Refactored auth middleware",
  files=["auth.py", "middleware.py"])

# Afternoon: Cursor picks up where it left off
agent.context()
# → "auth.py was refactored today by
#    claude-code. Review before editing."
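The Agent package is also unreleased, so the snippet above is illustrative. The conflict-detection idea reduces to a shared activity log keyed by file path; a minimal hypothetical sketch:

```python
from datetime import date

class SharedLog:
    """Hypothetical sketch of cross-agent handoff state -- not the shipped API."""

    def __init__(self):
        self.entries = []  # (day, agent, message, files)

    def log(self, agent, message, files):
        self.entries.append((date.today(), agent, message, files))

    def context(self, path):
        # Conflict check: warn before touching a file another agent changed today.
        for day, agent, message, files in self.entries:
            if path in files and day == date.today():
                return f"{path} was changed today by {agent}: {message}. Review before editing."
        return f"No recent activity on {path}."

log = SharedLog()
log.log("claude-code", "Refactored auth middleware", ["auth.py", "middleware.py"])
print(log.context("auth.py"))
```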

How much is manual screenshotting costing you?

Most developers don't realize how much time they lose. Find out in 30 seconds.

30
screenshots/day
1.9
hours lost/week
$7,313
annual cost

With ContextPulse: 0 manual screenshots. 1.9 hours/week back.
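The numbers above follow from a couple of inputs; here is the arithmetic spelled out (the 45-second round-trip and $75/hr rate are assumed values, not measured figures):

```python
# Worked version of the calculator above, with two assumed inputs:
# ~45 seconds per screenshot round-trip and a $75/hr developer rate.
shots_per_day = 30
seconds_per_shot = 45            # snip, paste, wait for the AI to process
hourly_rate = 75                 # USD -- assumption, not a measured figure

hours_per_week = shots_per_day * seconds_per_shot * 5 / 3600
annual_cost = hours_per_week * 52 * hourly_rate

print(round(hours_per_week, 1))  # -> 1.9
print(round(annual_cost))        # -> 7312 (the page rounds to $7,313)
```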

Get ContextPulse free

Three commands. That's it.

1

Install

pip install contextpulse-sight
Python 3.10+. No GPU. No cloud account. No API key.

2

Run

contextpulse-sight
Starts in your system tray. Auto-captures every 5 seconds. Invisible.

3

Connect

Add the MCP server to Claude Code, Cursor, or any MCP-compatible tool. Your AI can now see your screen.
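Most MCP clients, including Claude Code and Cursor, register servers through an `mcpServers` config block. A sketch of what that entry might look like (the server name and `--mcp` flag here are assumptions; check the project README for the exact command):

```json
{
  "mcpServers": {
    "contextpulse-sight": {
      "command": "contextpulse-sight",
      "args": ["--mcp"]
    }
  }
}
```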

Your screen stays on your machine

ContextPulse runs 100% locally. No cloud. No accounts. No telemetry. No data ever leaves your computer.

🔒

100% local

All captures stay on disk. No network calls. Works offline.

🚫

Window blocklist

Automatically skip sensitive apps — banking, password managers, private browsing.

⏸️

Auto-pause on lock

Captures stop instantly when you lock your screen or switch users.

👁️

Open source

Every line of code is auditable. No hidden data collection. No surprises.

From install to first capture in 30 seconds

Demo GIF coming soon

The only always-on context service for MCP agents

                     ContextPulse                MCP Screenshot Tools   Screenpipe
Always-on daemon     ✓ Auto every 5s             ✗ On-demand only       ✓ Continuous
Activity search      ✓ FTS5 (titles + OCR)       —                      —
Time-travel context  ✓ get_context_at()          —                      —
Smart storage        ✓ 4 modes, -59% disk        —                      ✗ Full recording
MCP tools            7 (capture, OCR, search)    1-2 (capture only)     Add-on
CPU / RAM            <1% / <20 MB                0% (idle)              5-15% / 200-500 MB
Price                Free                        Free                   $400

Start free. Scale when you need to.

Sight is free and open source. Memory and Agent packages coming soon.

Sight
Free
Forever. Open source.
  • Always-on screen capture
  • 7 MCP tools
  • Built-in OCR
  • Privacy controls
  • 4 storage modes
Memory Starter
$29
One-time purchase
  • Everything in Sight
  • Cross-session memory
  • Shared across agents
  • SQLite-backed, portable
Coming soon
Memory Pro
$49
One-time purchase
  • Everything in Starter
  • Outcome-based learning
  • Agent coordination
  • Priority support
Coming soon

Common questions

Does this slow down my machine?

Under 1% CPU and under 20MB RAM. The daemon uses mss for capture (3ms per frame) and only stores frames that changed. You won't notice it running.

What about sensitive information on my screen?

Window blocklist lets you skip specific apps (banking, password managers). Auto-pause on lock screen. Everything stays local -- no cloud, no telemetry.

Does it work with Cursor / Copilot / my tool?

Any MCP-compatible tool. That includes Claude Code, Cursor, Gemini CLI, and any custom agent using the MCP SDK.

How is this different from Screenpipe?

Screenpipe records everything you see, say, and hear -- full video and audio. ContextPulse captures only what your AI agents need: screen context. That's why it uses under 1% CPU vs 5-15%, under 20MB RAM vs 200-500MB. It's also free.

Is this Windows-only?

Windows-first, but the capture library (mss) is cross-platform. macOS and Linux support is planned.

Stop screenshotting. Start building.

Free, open source, and local-first. Your AI agent sees what you see in 30 seconds.