Stack Junkie

How I Built a 24/7 Memory Dashboard for My AI Agent in 48 Hours


Intro

My AI agent runs 24/7. It writes blog posts, manages tasks, monitors systems, and handles cron jobs. The problem? It forgets everything between sessions. No memory of yesterday's decisions. No awareness of pending tasks. Context windows fill up fast. I needed persistent state.

I built Lantern. A Next.js dashboard with JSON storage and REST APIs. The agent writes to it. I read from it. Here is how it works.

TLDR

Built a Next.js dashboard (Lantern) for my 24/7 AI agent. JSON backend, REST APIs, kanban board, activity feed, file explorer, and terminal access. Real implementation details included.

The Problem

I run an AI agent 24/7. It handles cron jobs, writes blog posts, manages tasks, and monitors various systems. The problem? Every new conversation starts fresh. The agent has no memory of what happened yesterday, what tasks are pending, or what decisions were made.

Context windows help, but they fill up fast. Stuffing everything into the system prompt creates a mess. Relying on the agent to remember to check files? That fails silently.

I needed something persistent. Something the agent could write to and read from without human intervention. A dashboard.

What I Built (Lantern)

A Next.js dashboard with a JSON backend. No database. No complexity. Just a React frontend with API routes that read and write to JSON files.

I call it Lantern. The name comes from the idea of lighting a path through the agent's activity. The dashboard shows what the agent is doing, what it needs to do, and what went wrong.

The main interface has a sidebar with SVG icons for navigation. The Command Center tab is mission control with multiple widgets. There is a kanban board with drag and drop. A live activity stream shows recent agent actions. The file explorer includes a markdown previewer. There is even a built-in terminal.

Key features:

  • Command Center with QuickColumn and ChrisTodo widgets
  • Kanban board with multiple columns (todo, in progress, blocked, done)
  • Live activity feed showing recent agent actions
  • File explorer with markdown preview
  • Terminal for direct shell access through the browser
  • Action log with chronological record of agent decisions and outcomes
  • Notes panel with persistent markdown the agent can update

The agent accesses everything through REST APIs. It can create tasks, log actions, add inbox items, and update notes without me touching anything.

The tech stack

Here is what I actually used:

  • Next.js 16 with App Router
  • Tailwind CSS v4 via @tailwindcss/postcss
  • TypeScript
  • PM2 for process management
  • nginx as reverse proxy
  • Let's Encrypt for SSL (via certbot)
  • @dnd-kit for drag and drop in the kanban
  • marked for markdown rendering
  • node-pty + xterm.js for the terminal

The choice of Next.js 16 gave me the App Router with built-in API routes. Tailwind CSS v4 is faster than v3 and requires less configuration. TypeScript catches bugs before deployment.

PM2 keeps the Node process alive. If it crashes, PM2 restarts it automatically. nginx handles SSL termination and forwards requests to localhost:3001 where the Next.js server runs.

File structure:

lantern/
├── app/
│   ├── src/
│   │   ├── app/
│   │   │   ├── api/
│   │   │   │   ├── tasks/route.ts
│   │   │   │   ├── inbox/route.ts
│   │   │   │   ├── log/route.ts
│   │   │   │   ├── notes/route.ts
│   │   │   │   ├── status/route.ts
│   │   │   │   └── [18 more endpoints]
│   │   │   ├── page.tsx
│   │   │   └── globals.css
│   │   ├── components/
│   │   │   ├── KanbanBoard.tsx
│   │   │   ├── LiveActivity.tsx
│   │   │   ├── FileExplorer.tsx
│   │   │   ├── Terminal.tsx
│   │   │   ├── ChrisTodo.tsx
│   │   │   └── QuickColumn.tsx
│   │   └── lib/
│   │       ├── tasks.ts
│   │       └── inbox.ts
│   ├── package.json
│   └── terminal-server.js
├── data/
│   ├── tasks.json
│   ├── inbox.json
│   ├── log.json
│   └── notes.md
└── pm2.config.js

The separation between /app and /data keeps code separate from state. The JSON files live in /data. The Next.js app reads and writes them through the API layer.
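
The lib helpers are where the JSON reads and writes live. Here is a minimal sketch of what src/lib/tasks.ts can look like (the field names follow the task schema used later in this post; the exact path to /data is an assumption):

// src/lib/tasks.ts -- sketch of a JSON-backed store
import { promises as fs } from 'fs'
import path from 'path'

// Assumes /data sits next to /app, as in the structure above
const TASKS_FILE = path.join(process.cwd(), '..', 'data', 'tasks.json')

export interface Task {
  id: string
  title: string
  description: string
  status: 'todo' | 'in_progress' | 'blocked' | 'done'
  priority: 'low' | 'medium' | 'high'
  assignee: 'agent' | 'human'
  blockedBy?: string
  createdAt: string
  updatedAt: string
}

export async function readTasks(): Promise<Task[]> {
  try {
    return JSON.parse(await fs.readFile(TASKS_FILE, 'utf8')) as Task[]
  } catch {
    return [] // a missing or empty file just means no tasks yet
  }
}

export async function writeTasks(tasks: Task[]): Promise<void> {
  await fs.writeFile(TASKS_FILE, JSON.stringify(tasks, null, 2), 'utf8')
}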

API Design

Write operations are POSTs with a JSON payload. The tasks endpoint takes an action field to pick the operation; simpler endpoints like inbox just take the fields directly.

Example task creation:

curl -X POST http://localhost:3001/api/tasks \
  -H "X-API-Key: your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "add",
    "title": "Research affiliate programs",
    "description": "Find programs for AI tools we recommend",
    "priority": "high",
    "assignee": "agent"
  }'

Task update:

curl -X POST http://localhost:3001/api/tasks \
  -H "X-API-Key: your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "update",
    "id": "task_123",
    "updates": {
      "status": "done",
      "description": "COMPLETE: See memory/affiliate-research.md"
    }
  }'

Adding an inbox item (quick capture):

curl -X POST http://localhost:3001/api/inbox \
  -H "X-API-Key: your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "system",
    "rawContent": "Deploy failed on stack-junkie. Check build logs."
  }'

Logging an action:

curl -X POST http://localhost:3001/api/log \
  -H "X-API-Key: your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "Published blog post: building-a-dashboard",
    "outcome": "success",
    "details": "QA score: 94. Live at midnight-build.com/blog/building-a-dashboard"
  }'

The API returns JSON with status codes. 200 for success, 401 for auth failures, 500 for server errors. Simple and predictable.
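
Behind each endpoint is a small route handler that dispatches on the action field and rewrites the JSON file. A sketch of app/api/tasks/route.ts under that assumption, using the helpers from the lib sketch above:

// src/app/api/tasks/route.ts -- sketch of the action-dispatch pattern
import { NextRequest, NextResponse } from 'next/server'
import { readTasks, writeTasks, Task } from '../../../lib/tasks'

export async function GET() {
  return NextResponse.json(await readTasks())
}

export async function POST(request: NextRequest) {
  const body = await request.json()
  const tasks = await readTasks()
  const now = new Date().toISOString()

  if (body.action === 'add') {
    const task: Task = {
      id: `task_${Date.now()}`,
      title: body.title,
      description: body.description ?? '',
      status: 'todo',
      priority: body.priority ?? 'medium',
      assignee: body.assignee ?? 'agent',
      createdAt: now,
      updatedAt: now,
    }
    await writeTasks([...tasks, task])
    return NextResponse.json(task)
  }

  if (body.action === 'update') {
    const updated = tasks.map((t) =>
      t.id === body.id ? { ...t, ...body.updates, updatedAt: now } : t
    )
    await writeTasks(updated)
    return NextResponse.json({ ok: true })
  }

  return NextResponse.json({ error: 'Unknown action' }, { status: 400 })
}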

The agent treats these API calls the same as any other tool. When it finishes a task, it updates the dashboard. When something breaks, it logs the failure and creates an inbox item for me to review.

Authentication without cookies

The tricky part was authentication. Browser sessions use cookies. API calls from the agent use headers. I needed both to work.

The middleware checks for either:

// middleware.ts
import { NextRequest, NextResponse } from 'next/server'

const API_KEY = process.env.API_KEY
const PASSWORD = process.env.PASSWORD
const COOKIE_NAME = 'lantern_auth'

export function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl

  // Let the login page and Next.js internals through,
  // otherwise the redirect below loops forever
  if (pathname === '/login' || pathname.startsWith('/_next')) {
    return NextResponse.next()
  }

  if (pathname.startsWith('/api/')) {
    const apiKey = request.headers.get('x-api-key')
    const authCookie = request.cookies.get(COOKIE_NAME)

    if (apiKey === API_KEY || authCookie?.value === PASSWORD) {
      return NextResponse.next()
    }

    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
  }

  // Browser auth redirect for non-API routes
  const authCookie = request.cookies.get(COOKIE_NAME)
  if (!authCookie || authCookie.value !== PASSWORD) {
    return NextResponse.redirect(new URL('/login', request.url))
  }

  return NextResponse.next()
}

Simple but effective. The agent passes X-API-Key in every request. I log in through the browser normally. One middleware handles both.
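
On the agent's side (or from any script), a call is just fetch with that header. A small helper, as a sketch; it assumes Node 18+ for built-in fetch, and the environment variable names are placeholders:

// lantern-client.ts -- hypothetical helper for agent tools and scripts
const BASE_URL = process.env.LANTERN_URL ?? 'http://localhost:3001'
const API_KEY = process.env.LANTERN_API_KEY ?? ''

export async function lantern(endpoint: string, payload: unknown) {
  const res = await fetch(`${BASE_URL}/api/${endpoint}`, {
    method: 'POST',
    headers: {
      'X-API-Key': API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  })
  if (!res.ok) throw new Error(`Lantern ${endpoint} failed: ${res.status}`)
  return res.json()
}

// Usage: log a completed action
// await lantern('log', { action: 'Published blog post', outcome: 'success', details: 'QA score: 94' })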

The heartbeat pattern

The most useful part is the heartbeat. Every hour, the agent checks in:

  1. GET /api/tasks - Are tasks accurate?
  2. GET /api/inbox - Any items to convert to tasks?
  3. Review enabled cron jobs - Did they run on schedule?
  4. POST /api/status - Report current activity

If everything is fine, it responds HEARTBEAT_OK and stays quiet. If something needs attention, it creates an inbox item and notifies me.

This passive monitoring catches problems I would otherwise miss. A failed cron job shows up within an hour. A blocked task gets escalated automatically. The agent does not need to be told to check things. The heartbeat is part of its core loop.

Example heartbeat implementation (pseudocode for your agent):

EVERY_HOUR:
  tasks = GET /api/tasks
  inbox = GET /api/inbox

  IF any task blocked for > 24h:
    CREATE inbox item: "Task {id} blocked for {hours}h: {blockedBy}"

  IF any inbox item > 7 days old:
    CREATE inbox item: "Old inbox items need triage"

  IF all good:
    RETURN "HEARTBEAT_OK"
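
The same loop as a rough TypeScript sketch (assumes Node 18+ for built-in fetch; measuring "blocked for > 24h" from updatedAt is my own convention here):

// heartbeat.ts -- hypothetical hourly check, run from cron or the agent's own loop
const BASE = process.env.LANTERN_URL ?? 'http://localhost:3001'
const HEADERS = { 'X-API-Key': process.env.LANTERN_API_KEY ?? '', 'Content-Type': 'application/json' }
const DAY_MS = 24 * 60 * 60 * 1000

async function heartbeat(): Promise<string> {
  const tasks = await fetch(`${BASE}/api/tasks`, { headers: HEADERS }).then((r) => r.json())
  const inbox = await fetch(`${BASE}/api/inbox`, { headers: HEADERS }).then((r) => r.json())

  const problems: string[] = []

  for (const task of tasks) {
    const blockedMs = Date.now() - new Date(task.updatedAt).getTime()
    if (task.status === 'blocked' && blockedMs > DAY_MS) {
      problems.push(`Task ${task.id} blocked for ${Math.round(blockedMs / 3600000)}h: ${task.blockedBy ?? 'unknown'}`)
    }
  }

  const hasStaleInbox = inbox.some(
    (item: { createdAt: string }) => Date.now() - new Date(item.createdAt).getTime() > 7 * DAY_MS
  )
  if (hasStaleInbox) problems.push('Old inbox items need triage')

  // Escalate anything that needs attention as inbox items
  for (const problem of problems) {
    await fetch(`${BASE}/api/inbox`, {
      method: 'POST',
      headers: HEADERS,
      body: JSON.stringify({ source: 'system', rawContent: problem }),
    })
  }

  return problems.length === 0 ? 'HEARTBEAT_OK' : `ATTENTION: ${problems.length} item(s) escalated`
}

heartbeat().then(console.log)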

The heartbeat keeps the dashboard synced without manual intervention. The agent knows what needs attention. I see blockers before they cause problems.

What Changed

Before the dashboard:

  • Agent forgot context between sessions
  • No visibility into what the agent actually did
  • Failures went unnoticed until I manually checked logs
  • Task tracking happened in markdown files the agent sometimes forgot to update

After:

  • Persistent task tracking across sessions
  • Complete action log with timestamps and outcomes
  • Automatic escalation of blockers to the inbox
  • Visual overview of system state
  • Real-time activity feed
  • Task status updates as work progresses

The agent now maintains its own todo list. It updates task status as work progresses. It logs every significant action. When something fails, I see it in the inbox within minutes.

The Command Center gives me a single place to check system health. The kanban board shows what is in progress versus what is blocked. The live activity feed shows recent agent decisions. The file explorer lets me browse workspace files without SSHing into the server.

Deployment Setup

Running in production requires a few steps.

PM2 configuration:

// pm2.config.js
module.exports = {
  apps: [
    {
      name: 'lantern',
      cwd: '/path/to/lantern/app',
      script: 'npm',
      args: 'start',
      env: {
        NODE_ENV: 'production',
        PORT: 3001,
        API_KEY: 'your_secure_key',
        PASSWORD: 'your_secure_password',
      },
      instances: 1,
      exec_mode: 'fork',
      autorestart: true,
      watch: false,
      max_restarts: 10,
      min_uptime: '10s',
    },
  ],
}

Start with PM2:

pm2 start pm2.config.js
pm2 save
pm2 startup

nginx configuration:

server {
    listen 80;
    server_name lantern.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name lantern.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/lantern.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/lantern.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Enable the site:

sudo ln -s /etc/nginx/sites-available/lantern.yourdomain.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

SSL certificate:

sudo certbot --nginx -d lantern.yourdomain.com

Certbot configures nginx and sets up auto-renewal. Certificates are valid for 90 days, and certbot's scheduled renewal replaces them before expiry without manual intervention.

Verify everything:

pm2 list
sudo systemctl status nginx
curl -H "X-API-Key: your_key" https://lantern.yourdomain.com/api/tasks

If PM2 shows "online" and the curl command returns JSON, the deployment is working. Access the dashboard at https://lantern.yourdomain.com.

Start building your own

You do not need to replicate my exact setup. Start simple and add features as you need them.

Minimal starting point:

  1. JSON files for storage (tasks.json, log.json, inbox.json)
  2. Basic CRUD API (add, update, delete, list)
  3. Simple frontend (even a static HTML page works)
  4. Header-based auth for API access

AI prompt to build your own dashboard:

Copy this prompt and paste it into your AI agent to start building:

Build me a Next.js dashboard for persistent AI agent state management.

Requirements:
- Next.js 16 with App Router
- Tailwind CSS for styling
- TypeScript
- JSON files for storage (no database)
- Three core entities: Tasks, Inbox, Log
- REST API with these endpoints:
  - POST /api/tasks - CRUD operations for tasks
  - POST /api/inbox - Quick capture for triage items
  - POST /api/log - Action logging with timestamps
  - GET /api/status - System health check
- Middleware auth that accepts both cookie (browser) and X-API-Key header (API)
- Simple tabs: Tasks (kanban), Inbox (list), Log (timeline), Status (dashboard)
- Mobile responsive
- Dark theme

Tasks schema:
- id (auto-generated)
- title
- description
- status (todo, in_progress, blocked, done)
- priority (low, medium, high)
- assignee (agent, human)
- blockedBy (string, optional)
- createdAt, updatedAt

Inbox schema:
- id (auto-generated)
- source (system, user, cron)
- rawContent
- status (pending, converted, dismissed)
- createdAt

Log schema:
- id (auto-generated)
- action
- outcome (success, failure, partial)
- details
- timestamp

Give me:
1. Complete file structure
2. All TypeScript files with full implementation
3. package.json with dependencies
4. Basic styling with Tailwind
5. Deployment instructions for PM2 + nginx

Make it production-ready but keep it simple. No unnecessary abstractions.

This prompt gives your AI enough structure to build a working dashboard in one session. Adjust the schema based on what you actually need to track.
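
If you would rather hand the agent concrete types than prose, the same schemas as TypeScript interfaces look roughly like this (Task also appeared in the lib sketch earlier):

// types.ts -- the three entities from the prompt, as TypeScript interfaces
export interface Task {
  id: string
  title: string
  description: string
  status: 'todo' | 'in_progress' | 'blocked' | 'done'
  priority: 'low' | 'medium' | 'high'
  assignee: 'agent' | 'human'
  blockedBy?: string
  createdAt: string // ISO timestamp
  updatedAt: string
}

export interface InboxItem {
  id: string
  source: 'system' | 'user' | 'cron'
  rawContent: string
  status: 'pending' | 'converted' | 'dismissed'
  createdAt: string
}

export interface LogEntry {
  id: string
  action: string
  outcome: 'success' | 'failure' | 'partial'
  details: string
  timestamp: string
}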

What to keep simple:

  • Storage: JSON files work fine until you have concurrency issues
  • Auth: Header-based API keys are enough for single-agent setups
  • UI: Basic components with Tailwind. Skip animations and polish initially
  • API: RESTful CRUD with POST actions. No GraphQL or complex query layers

When to add complexity:

  • Switch to SQLite when you hit concurrent write issues with JSON (the atomic-write sketch below buys you time first)
  • Add webhook notifications when you need real-time alerts (Telegram, Slack)
  • Implement task dependencies when your workflows require sequential execution
  • Add archival when your JSON files grow past 10MB
  • Use Redis for caching if API response times exceed 200ms
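
Before switching storage, one cheap safeguard is to make the JSON writes atomic: write to a temp file, then rename. A sketch; this protects against torn or half-written files on a crash, but it does not serialize concurrent writers, which is the point where SQLite earns its keep:

// src/lib/atomicWrite.ts -- sketch: write to a temp file, then rename over the original
import { promises as fs } from 'fs'

export async function atomicWriteJson(file: string, data: unknown): Promise<void> {
  const tmp = `${file}.${process.pid}.${Date.now()}.tmp`
  await fs.writeFile(tmp, JSON.stringify(data, null, 2), 'utf8')
  await fs.rename(tmp, file) // rename is atomic on the same filesystem
}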

The simple version works. I have been running Lantern for weeks without issues. The agent updates it constantly. I check it once or twice a day. The architecture is more valuable than the polish.

FAQ

Why not use a database instead of JSON files?

JSON works fine for low-volume agent updates. If you have multiple agents writing simultaneously or datasets over 10MB, switch to SQLite. For a single agent making a few updates per hour, JSON is simpler.

Can this work with Claude Desktop or ChatGPT plugins?

Yes, if your agent can make HTTP requests. The API layer works with any client that can send authenticated requests. Claude Desktop, GPT plugins, custom Python scripts, or even curl work the same way.

What is the performance like with PM2 and nginx?

Negligible overhead. Next.js handles the frontend rendering. nginx handles SSL termination and reverse proxying. PM2 just keeps the process alive. The entire stack uses less than 100MB of RAM.

Do I need a VPS for this?

Only if you want remote access. You can run this locally on localhost and skip nginx/SSL entirely. Use npm run dev and access it at http://localhost:3000. API calls work the same way with X-API-Key headers.

How do I back up the JSON files?

Simple cron job that copies /data to a backup location. I run rsync -av /path/to/lantern/data /backup/lantern/$(date +%Y-%m-%d) daily at 2am. Keeps 30 days of history. Takes 10 seconds.

What if the agent forgets to update the dashboard?

Build the API calls into your agent's core tools. Make task updates non-optional. If the agent completes a task, the last step is always "update dashboard status to done". Treat dashboard updates the same as file writes or API calls.
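
One way to enforce that is to fold the dashboard call into the completion tool itself. A hypothetical wrapper, reusing the lantern() helper sketched earlier:

// completeTask.ts -- hypothetical wrapper: finishing the work and updating the board are one step
import { lantern } from './lantern-client'

export async function completeTask(id: string, summary: string, work: () => Promise<void>) {
  await work() // do the actual task: write the file, publish the post, etc.

  // Dashboard updates are part of the tool, not an optional afterthought
  await lantern('tasks', { action: 'update', id, updates: { status: 'done', description: summary } })
  await lantern('log', { action: `Completed task ${id}`, outcome: 'success', details: summary })
}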

Sources

  1. Next.js App Router Documentation - Framework reference for API routes and middleware
  2. PM2 Documentation - Process manager setup and configuration
  3. Tailwind CSS v4 Release Notes - New features and migration guide
  4. dnd-kit GitHub - Drag and drop library for the kanban board
  5. Let's Encrypt Documentation - SSL certificate automation

Conclusion

Start simple. JSON backend, basic API, minimal UI. Add complexity when you hit limits, not before.

The dashboard architecture is more valuable than the polish. Persistent state matters more than animations. Reliable API access matters more than a beautiful frontend.

Your AI agent will use this every hour. You will check it once a day. Design for the agent first, yourself second.

If you build your own version, start with the AI prompt above. Get it working locally first. Deploy to production only when you need remote access. Keep the JSON files backed up.

The implementation took me a weekend. Most of that time went into UI polish I could have skipped. The core functionality (API + JSON storage) took four hours. The value showed up immediately.

Running AI agents in production teaches you things documentation will not. This is one of them.
