OpenClaw Daily Idea Radar: Automate Product Discovery
OpenClaw was previously known as Clawdbot and Moltbot. This guide applies to all versions.
OpenClaw cron automation can scan Hacker News, Reddit, and GitHub Trending daily and score fresh product ideas while you sleep. Here is how to build it.
Key takeaways
- Two OpenClaw cron jobs run daily in isolated sessions with no shared conversation history. Exact schedule times are configurable per your timezone.
- Job 1 pulls items from multiple public feeds (HN, Reddit, Lobsters, GitHub Trending, Product Hunt, Google Trends), clusters themes, and scores ideas on a 6-dimension weighted rubric. The exact feed count may vary by source availability.
- Job 2 reads the output from Job 1 and writes a full business case for the top uncovered idea with a score of 70 or higher.
- The chaining pattern uses memory/idea_radar/latest.json as the handoff file. Job 1 writes it; Job 2 reads it later.
- Both jobs use direct HTTP fetch only: no web_search tool, and no API keys for most sources.
Always review commands your agent suggests before approving them. Don't paste prompts from sources you don't trust.
Fixes when it breaks. Workflows when it doesn't.
OpenClaw guides, configs, and troubleshooting notes. Every two weeks.
What the OpenClaw Idea Radar does
The OpenClaw Idea Radar is two cron jobs that run back to back. The first job (the Radar) scans a wide surface of developer and indie hacker feeds, clusters emerging themes, generates candidate ideas, and scores each one. The second job (the Deep Dive) picks up where the first left off, takes the highest-scoring idea you have not examined before, and writes a structured mini business case.
Neither job requires manual input. You configure them once and they run on your chosen schedule in fully isolated sessions. Each run starts fresh without any conversation history from the previous day or from your main session.
Before the Radar came online (when this setup was still being tested), idea generation was a manual process: open tabs, scroll feeds, take notes, try to notice patterns. The two-job system replaced all of that. Earlier prototypes were informally called Moltbot and Clawdbot before the current architecture solidified.
Where the OpenClaw Idea Radar pulls its signals
The Radar pulls from sources across five categories. All endpoints are public and require no authentication.
Developer discussion:
- HN Ask RSS: questions developers are actively asking
- HN Show RSS: projects people are shipping
- Lobsters RSS: curated technical community feed
Trend data:
- Google Trends RSS: rising US search topics
Founder communities:
- Reddit r/SideProject: problems and launches (no auth needed for .json feeds on public subreddits)
- Reddit r/SaaS, r/microsaas, r/EntrepreneurRideAlong: same pattern
Product launches:
- Product Hunt Atom feed: what launched today
- Indie Hackers RSS: builder stories and milestones
Code trends:
- GitHub Trending (HTML parse, daily): repos gaining stars today
- GitHub Trending (HTML parse, weekly): repos gaining stars this week
GitHub does not publish an official trending API, so the job parses the HTML directly. The community workaround using the GitHub Search API sorted by stars and date is an alternative if HTML parsing breaks.
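If the HTML parse breaks, the Search API workaround can be sketched along these lines. The endpoint is GitHub's public Search API; the created-date window and star sort are assumptions about how to approximate "trending," not an exact reproduction of the trending page:

```python
import json
import urllib.request
from datetime import date, timedelta

def trending_search_url(days=7, limit=10):
    """Approximate 'trending' via the Search API: repos created in
    the last `days` days, sorted by star count descending."""
    since = (date.today() - timedelta(days=days)).isoformat()
    return ("https://api.github.com/search/repositories"
            f"?q=created:>{since}&sort=stars&order=desc&per_page={limit}")

def trending_fallback(days=7, limit=10):
    """Fetch and return (full_name, stars) pairs for the top repos."""
    req = urllib.request.Request(trending_search_url(days, limit),
                                 headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    return [(r["full_name"], r["stargazers_count"]) for r in data["items"]]
```

Unauthenticated Search API requests are rate-limited, so this fallback is best kept to one or two calls per run.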
The Radar pulls items best-effort. If one source is down, the job continues with the rest.
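The best-effort pull might look like this in the Python the job runs via exec. The User-Agent string and timeout are assumptions (some feeds, notably Reddit's .json endpoints, tend to reject requests with no User-Agent at all):

```python
import urllib.request

SOURCES = [
    "https://hnrss.org/ask",
    "https://hnrss.org/show",
    "https://lobste.rs/rss",
    # ...remaining feeds from the prompt's source list
]

def fetch_all(sources, timeout=20):
    """Fetch each feed best-effort: a failing source is recorded
    and skipped, never fatal to the run."""
    results, errors = {}, {}
    for url in sources:
        try:
            req = urllib.request.Request(url, headers={"User-Agent": "idea-radar/1.0"})
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = resp.read()
        except Exception as exc:  # broad on purpose: best-effort by design
            errors[url] = str(exc)
    return results, errors
```

The error dict is worth surfacing in the job's chat output so you notice when a source has been silently dead for a week.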
How the OpenClaw scoring rubric works
Each idea is scored 0-100 across six dimensions. The weights reflect a bias toward things that the person running the agent would actually use and that fit an indie founder context.
| Dimension | Weight | What it tests |
|---|---|---|
| Personal use | 25 | Would the founder use this daily or weekly? |
| Pain intensity | 20 | How acute is the problem? Does it cost money, time, or status? |
| Demand evidence | 20 | Is there signal in the feed data (posts, stars, comments)? |
| Founder advantage bias | 15 | Does the founder have an edge here (audience, code, domain)? |
| Differentiation | 10 | Is there a clear gap vs existing tools? |
| MVP effort | 10 | Can a working version ship in 1-2 weeks? |
Personal use carries the most weight because an indie founder building something they do not need rarely lasts long enough to find product-market fit. Pain intensity and demand evidence combine for 40% of the total score, making them the primary signal filter.
The weights are defined directly in the cron prompt. You can adjust them without touching any config file beyond the prompt text. The only constraint is that the six weights must sum to 100.
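Because each dimension is scored out of its own weight, the total is a plain sum. A minimal sketch of the arithmetic the prompt asks the agent to apply, using the rubric field names from the sample run output:

```python
WEIGHTS = {
    "personalUse": 25, "pain": 20, "demand": 20,
    "founderAdvantage": 15, "differentiation": 10, "mvpEffort": 10,
}

def score(rubric, weights=WEIGHTS):
    """Each dimension is scored out of its weight, so a perfect
    idea totals exactly 100."""
    assert sum(weights.values()) == 100, "weights must sum to 100"
    return sum(rubric[dim] for dim in weights)
```

Feeding in the SubKill rubric from the sample output (22, 18, 18, 10, 6, 8) reproduces its published score of 82.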
Cron Job 1: The Daily Idea Radar config (production)
Here is the full production configuration for the Radar job. The schedule, sessionTarget, and payload structure follow the OpenClaw cron format exactly.
{
"id": "e3cccaee-969f-4f1e-9237-cc857f65bcb1",
"name": "Daily Idea Radar",
"enabled": true,
"schedule": {
"kind": "cron",
"expr": "0 9 * * *",
"tz": "America/Los_Angeles"
},
"sessionTarget": "isolated",
"wakeMode": "now",
"payload": {
"kind": "agentTurn",
"message": "DAILY IDEA RADAR v2\n\nGoal: find buildable product ideas from early signals across a wide surface area. Score with bias toward things Chris would personally use or that fit the Stack Junkie ecosystem.\n\nYou MUST NOT use web_search. Do direct fetch and parse with python via exec.\n\n## DEDUP (MANDATORY)\nBefore generating ideas, read ALL existing files in memory/idea_radar/ (ls the directory, then read each JSON file's ideas[].title fields). Build a list of every idea title already generated. Your new ideas MUST NOT repeat or closely rephrase any previous title. If a concept was already covered (even under a different name), skip it and find something new.\n\nSources (best effort, don't fail if one is down):\n- https://hnrss.org/ask\n- https://hnrss.org/show\n- https://lobste.rs/rss\n- https://github.com/trending\n- https://github.com/trending?since=weekly\n- https://www.producthunt.com/feed\n- https://www.indiehackers.com/feed.xml\n- https://www.reddit.com/r/SideProject/hot.json\n- https://www.reddit.com/r/SaaS/hot.json\n- https://www.reddit.com/r/microsaas/hot.json\n- https://www.reddit.com/r/EntrepreneurRideAlong/hot.json\n- https://trends.google.com/trending/rss?geo=US\n\nProcess:\n1) Pull up to 80 items total across all sources.\n2) Cluster into 5 to 12 themes.\n3) Create up to 8 candidate product ideas. Each idea must include: problem, target user, why now, early evidence. SKIP any idea that overlaps with previously generated ideas.\n4) Score each idea 0 to 100 using weighted rubric:\n Personal use 25, Pain intensity 20, Demand evidence 20, Founder advantage bias 15, Differentiation 10, MVP effort 10.\n5) For the top 3 ideas, also include:\n - Revenue model suggestion\n - Comparable products with rough pricing\n - Content marketing angle\n6) Output to chat: top 5 ideas with scores and 1 sentence reason each.\n7) Save JSON to:\n - memory/idea_radar/latest.json\n - memory/idea_radar/YYYY-MM-DD.json",
"model": "openai-codex/gpt-5.2",
"timeoutSeconds": 900,
"lightContext": true
},
"delivery": {
"mode": "announce",
"channel": "telegram",
"to": "telegram:-1003841025346"
}
}

A few things worth noting about this config:
sessionTarget: "isolated" means the job runs in a fresh session each time. It does not inherit context from your main session or from yesterday's run. The OpenClaw cron documentation covers the tradeoffs between isolated and named sessions.
timeoutSeconds: 900 gives the job up to 15 minutes. Fetching a dozen feeds and parsing GitHub trending HTML can take several minutes depending on source latency. This value is configurable; adjust it based on your VPS speed and source count.
lightContext: true strips the agent's workspace context files from the session. The Radar does not need your AGENTS.md or SOUL.md to do its job, and this reduces the token overhead of each run.
The DEDUP block is mandatory. Without it, the Radar regenerates variations of the same ideas every day. The dedup logic reads all existing files in memory/idea_radar/, builds a list of previously generated idea titles, and excludes anything that overlaps.
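The dedup pass the prompt describes amounts to roughly this — a sketch, assuming each daily file carries an ideas array of objects with title fields, as in the sample output:

```python
import json
from pathlib import Path

def previous_titles(radar_dir="memory/idea_radar"):
    """Collect every ideas[].title from prior runs, normalized
    to lowercase for overlap checks."""
    titles = set()
    for path in Path(radar_dir).glob("*.json"):
        try:
            data = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # skip unreadable or partially written files
        for idea in data.get("ideas", []):
            titles.add(idea.get("title", "").strip().lower())
    titles.discard("")
    return titles

def is_duplicate(title, seen):
    return title.strip().lower() in seen
```

The agent's actual check is semantic ("closely rephrase" counts as a repeat), which a set lookup cannot capture; the sketch only shows the exact-title layer.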
Cron Job 2: The Daily Idea Deep Dive config (production)
The Deep Dive is scheduled 25 minutes after the Radar (9:25 vs 9:00 in the sample configs), giving the Radar time to complete and write latest.json before the Deep Dive tries to read it. The full config:
{
"id": "15d300d1-00bd-4c4f-bba7-b44598d9d52e",
"name": "Daily Idea Deep Dive",
"enabled": true,
"schedule": {
"kind": "cron",
"expr": "25 9 * * *",
"tz": "America/Los_Angeles"
},
"sessionTarget": "isolated",
"wakeMode": "now",
"payload": {
"kind": "agentTurn",
"message": "DAILY IDEA DEEP DIVE v2\n\nRead: memory/idea_radar/latest.json\n\n## DEDUP (MANDATORY)\nBefore picking an idea, check ALL existing files in projects/second-brain/data/idea-lab/ (ls the directory). Read each JSON file's ideaTitle field. Build a list of ideas already deep-dived. Pick the highest-scoring idea from latest.json that has NOT already been deep-dived. If ALL ideas have been covered, exit with a note saying so.\n\nGate:\n- If file missing, invalid, or no uncovered idea has score >= 70: exit with a short note.\n\nFor the selected idea, produce a mini business case:\n\n1. Executive Summary (2-3 sentences)\n2. Problem and Market (concrete problem, who has it, how they solve it today, demand proxies)\n3. Competitive Landscape (3-5 actual competitors with name, URL, pricing, gap)\n4. Our Angle (differentiation, founder advantage, content angle)\n5. Revenue Model (pricing structure, 100-user monthly estimate)\n6. MVP Scope (5-8 build items, 1-2 week target)\n7. Validation Plan (where to post, what to ask, green light criteria)\n8. What Chris Needs to Decide (max 5 bullets)\n\nOutput:\n1. Save JSON to: projects/second-brain/data/idea-lab/YYYY-MM-DD.json\n2. Save markdown to: projects/second-brain/data/idea-lab/YYYY-MM-DD.md\n3. Send condensed Telegram summary (under 3500 chars)",
"model": "openai-codex/gpt-5.2",
"timeoutSeconds": 1200,
"lightContext": true
},
"delivery": {
"mode": "announce",
"channel": "telegram",
"to": "telegram:-1003841025346"
}
}

The gate logic is the most important part: if latest.json is missing, invalid, or if no uncovered idea scores 70 or above, the job exits without doing anything. This prevents noise on days when the Radar finds nothing strong, or when all recent ideas have already been covered.
How the two OpenClaw cron jobs chain together
The handoff between jobs happens through a single file: memory/idea_radar/latest.json. The Radar writes it at the end of every run. The Deep Dive reads it as its first action on a delayed schedule.
The gap between jobs is intentional. The Radar has a configurable timeout (900 seconds in the sample config). In practice, it typically finishes well before that on a well-connected VPS. The buffer prevents the Deep Dive from reading a stale or incomplete latest.json from the previous day. The OpenClaw cron documentation covers scheduling options and session behavior.
The dedup layers reinforce the chain in the other direction. The Deep Dive checks all files in projects/second-brain/data/idea-lab/ before picking an idea. If an idea from today's Radar output was already written up last week (because it appeared in a prior run), the Deep Dive skips it and moves to the next highest score. This means the two jobs never duplicate effort even when the same idea surfaces across multiple days.
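Put together, the Deep Dive's gate plus dedup can be sketched as follows. Paths and field names come from the configs above; treating a missing or unparseable file as a quiet no-op mirrors the gate instructions:

```python
import json
from pathlib import Path

def pick_idea(latest="memory/idea_radar/latest.json",
              lab_dir="projects/second-brain/data/idea-lab",
              threshold=70):
    """Return the highest-scoring idea that clears the threshold and
    has not been deep-dived yet, or None (the exit-with-a-note case)."""
    path = Path(latest)
    if not path.exists():
        return None
    try:
        data = json.loads(path.read_text())
    except json.JSONDecodeError:
        return None
    # Titles already written up in idea-lab/
    covered = set()
    for f in Path(lab_dir).glob("*.json"):
        try:
            covered.add(json.loads(f.read_text()).get("ideaTitle", "").lower())
        except (OSError, json.JSONDecodeError):
            continue
    candidates = [i for i in data.get("ideas", [])
                  if i.get("score", 0) >= threshold
                  and i.get("title", "").lower() not in covered]
    return max(candidates, key=lambda i: i["score"], default=None)
```

Returning None for every failure mode is what keeps the chain noise-free: a broken upstream run produces a one-line note, not a hallucinated business case.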
What a real OpenClaw Idea Radar output looks like
Here is the top idea from the 2026-03-22 run. The Radar identified 10 themes from 6 sources, generated 8 ideas, and scored them all. SubKill came out on top with an 82.
{
"title": "SubKill: one-click subscription cancellation with spending forensics",
"score": 82,
"problem": "People forget subscriptions and companies deliberately hide cancellation flows. Finding the cancel page is the hardest part.",
"targetUser": "Anyone with 5+ SaaS/streaming subscriptions (most knowledge workers)",
"whyNow": "HN Show collected 1k cancellation URLs proving the pain is real and systematic. Subscription fatigue at all-time high with AI tool sprawl adding 3-5 new monthly charges.",
"rubric": {
"personalUse": 22,
"pain": 18,
"demand": 18,
"founderAdvantage": 10,
"differentiation": 6,
"mvpEffort": 8
},
"evidence": [
"HN Show: collected 1k cancellation URLs, built iOS app",
"MS365 dark pattern silently upgrading to 25 licenses",
"Reddit/HN recurring complaints about subscription creep"
],
"mvp": [
"Browser extension that detects active subscriptions from email receipts",
"Database of direct cancellation URLs",
"Monthly spending summary with danger alerts"
],
"revenueModel": "freemium. free tracking, \$4/mo for auto-cancel and spending alerts",
"comparables": [
"Trim (\$0, sold to OneMain)",
"Rocket Money (\$6-12/mo)",
"Truebill (acquired)"
],
"contentAngle": "TikTok: 'I found \$847/year in subscriptions I forgot about' - screen recording of the extension finding hidden charges"
}

The rubric adds up: 22 + 18 + 18 + 10 + 6 + 8 = 82. The personal use score (22/25) is high because this is a problem the agent operator faces directly. Differentiation (6/10) is the weak point, because Rocket Money exists, but the gap (no auto-cancel, dark UX) is defensible.
The Deep Dive picked this idea up later, cleared the gate check (score 82 >= 70, idea not yet in idea-lab/), and produced a full business case with competitive research.
How to customize the OpenClaw Idea Radar scoring rubric
The rubric lives entirely inside the prompt text in jobs.json. To change it, edit the payload.message field for the Radar job.
To change weights, update the numbers after each dimension name in payload.message:
Score each idea 0 to 100 using weighted rubric:
Distribution advantage 30, Pain intensity 20, Demand evidence 20, Founder advantage bias 15, Differentiation 10, MVP effort 5.
Keep the six weights summing to 100. The agent reads this as instructions and applies the math.
To add or remove dimensions, edit the rubric section of the prompt and the corresponding fields in the JSON schema comment. If you add a "distribution advantage" dimension for a content-focused context, the output JSON will include it in the rubric object.
To change data sources, edit the sources list in the prompt. The format is intentionally plain: one URL per line, with a brief comment about the format. Adding a new subreddit means adding one line.
To raise or lower the Deep Dive gate, change the >= 70 threshold in the Deep Dive prompt. If you want the Deep Dive to run even on weaker days, drop it to 60. If you only want it to fire on strong signal, raise it to 80.
Key terms
Isolated session: An OpenClaw cron session that starts fresh for each run with no conversation history from prior runs or from the main session. Defined by "sessionTarget": "isolated" in the job config. Good for scheduled tasks that do not need continuity.
agentTurn payload: The payload.kind value that tells OpenClaw to treat the message as an agent turn, running the full tool-calling loop rather than just delivering a message. Required for cron jobs that need to fetch URLs, read files, or write outputs.
Chaining: A pattern where one cron job writes a file and a second job, scheduled later, reads that file. The jobs do not communicate directly. The file is the shared state.
Dedup: Short for deduplication. Both jobs in this setup maintain a running list of previously processed items (idea titles, deep-dived files) to avoid producing the same output twice.
Cron expression: A five-field string (* * * * *) specifying when a job runs: minute, hour, day-of-month, month, day-of-week. 0 9 * * * means 9:00 AM every day. OpenClaw uses the croner library for parsing.
lightContext: A payload option that strips workspace context files (AGENTS.md, SOUL.md, etc.) from the session. Reduces token use for jobs that do not need agent persona or workspace rules.
FAQ
How do I add the OpenClaw Idea Radar to my own setup?
Copy the two JSON objects from the Cron Job 1 and Cron Job 2 sections of this guide into your ~/.openclaw/cron/jobs.json under the jobs array. Update delivery.to to your Telegram chat ID and adjust schedule.tz if you are not in America/Los_Angeles. The output directories (memory/idea_radar/ and projects/second-brain/data/idea-lab/) will be created automatically by the agent on the first run. The OpenClaw cron documentation at docs.openclaw.ai/automation/cron-jobs covers the full jobs.json schema, including all required and optional fields.
Why does the OpenClaw Idea Radar use isolated sessions instead of a named session?
Isolated sessions prevent context bleed between runs. If the Radar used a named session, each run would accumulate conversation history from all previous runs. After a week, the session would be carrying a large amount of stale feed data and prior idea lists, which wastes tokens and can confuse the agent's output. The dedup logic in the prompt handles the continuity that a named session would otherwise provide, but without the token overhead.
What happens if the OpenClaw Idea Radar fails or times out?
If the Radar times out or errors out, latest.json either does not exist or retains the previous day's content. The Deep Dive gate handles this: if latest.json is missing or stale, the Deep Dive exits with a note and does nothing. You will see the failure in your Telegram delivery channel if delivery.mode is set to announce. The timeoutSeconds value in the config controls how long the job runs before the platform terminates it. See the OpenClaw cron documentation for details on timeout behavior.
Can the OpenClaw Idea Radar source data from paid APIs?
Yes. The prompt is plain text, so you can replace any public endpoint with an authenticated one. If you have a Reddit API key, you can switch from the public .json endpoints to the full Reddit API for more data. If you subscribe to a trends service, add that endpoint to the sources list. The agent fetches whatever URLs you give it via exec and Python. The tradeoff is that adding auth means either storing credentials in the prompt text (not ideal) or writing them to a file the agent can read at runtime.
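For the file-based option, a sketch of the runtime-credentials pattern: the token file path is hypothetical, and a plain bearer header is only the general shape (Reddit's OAuth flow, for instance, requires a token exchange before you have a bearer token at all):

```python
import urllib.request
from pathlib import Path

def build_authed_request(url, token):
    """Attach the token as a bearer header at request time."""
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "User-Agent": "idea-radar/1.0",
    })

def authed_fetch(url, token_file="~/.openclaw/secrets/api_token"):
    """Read the secret from disk at runtime so it never appears
    in the prompt text (or in chat transcripts)."""
    token = Path(token_file).expanduser().read_text().strip()
    with urllib.request.urlopen(build_authed_request(url, token), timeout=30) as resp:
        return resp.read()
```

The prompt then only needs to say "read the token from ~/.openclaw/secrets/api_token", which keeps the credential out of every logged conversation.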
How does the OpenClaw Idea Radar avoid repeating ideas from previous days?
Both jobs include a mandatory dedup block. Before generating new ideas, the Radar reads all JSON files in memory/idea_radar/ and extracts every ideas[].title field. Any new idea that overlaps with a previous title is skipped. The Deep Dive does the same against projects/second-brain/data/idea-lab/ before picking which idea to write up. In practice, this means the Radar generates genuinely new ideas each day rather than cycling through the same concepts with slightly different names.
Evidence & Methodology
This guide is built from primary sources. The production cron configs come directly from ~/.openclaw/cron/jobs.json on the server running this setup. The sample output comes from memory/idea_radar/2026-03-22.json, a real run.
External sources:
- OpenClaw Cron Jobs Documentation covers session types, cron syntax, the croner library, and session retention behavior
- Hacker News Official API documents the Firebase-hosted endpoints used by hnrss.org
- Reddit API Documentation confirms that public subreddit .json endpoints require no authentication
- idea-reality-mcp provides useful contrast: it validates a single idea you provide, whereas this setup discovers ideas on its own
- GitHub trending workaround discussion confirms there is no official GitHub trending API
Changelog
| Date | Change |
|---|---|
| 2026-03-23 | Initial publication |