Application

AI Systems & Execution Control

Structured answers to your application questions. I do this daily - figured I'd show rather than tell.

Built in under 10 minutes · 4 min read

How would you manage a founder with 100 active ideas?

I wouldn't try to slow you down - the idea volume is what makes you useful. I'd build a capture-and-triage system so you keep generating and I keep things from falling apart.

Capture everything, filter nothing at intake. Every idea goes into one backlog. Voice note, Slack message, 3am text - doesn't matter. I turn it into a structured entry: what it is, what it solves, rough effort, what it depends on.
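As a minimal sketch, one structured intake entry could look like this (field names are my own, not a fixed schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IdeaEntry:
    """One captured idea, normalized at intake regardless of source."""
    title: str                 # what it is, one line
    problem: str               # what it solves
    effort: str                # rough size: S / M / L
    depends_on: list[str] = field(default_factory=list)  # other ideas or projects it waits on
    source: str = "slack"      # voice note, Slack message, 3am text
    captured: date = field(default_factory=date.today)

backlog: list[IdeaEntry] = []
backlog.append(IdeaEntry(
    title="Weekly AI infra stock digest",
    problem="Hours spent compiling the same summary by hand",
    effort="M",
))
```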

Triage weekly. Three filters: (1) Does this move revenue or reduce friction? (2) Can we do it without killing what's in progress? (3) Is this urgent or just exciting?

Hold the line. Max 5 active, max 3 weekly priorities. Not arbitrary - that's where I've seen quality consistently drop.

I've run this system for 18 months as Head of Technology for a CEO who operates at this speed. Before that, directly with Upwork's founder on the same kind of work. It works when you've earned enough trust to say "not yet" and nothing gets lost in the meantime.

What system would you build to control project sprawl?

Something lightweight. Four states: Backlog, Active, Parked, Shipped.

Each active project: one-line goal, definition of done, owner, status, next action. Updated async, reviewed live once a week.

The rule: something new enters Active only if something else ships or gets parked. Tradeoffs stay visible. Nothing quietly piles up.
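As a sketch, the whole board fits in one small structure; the only logic worth coding is the swap rule (names here are illustrative, not a prescribed tool):

```python
from dataclasses import dataclass

STATES = ("Backlog", "Active", "Parked", "Shipped")
MAX_ACTIVE = 5  # the line I hold: max 5 active projects

@dataclass
class Project:
    goal: str            # one-line goal
    done_when: str       # definition of done
    owner: str
    status: str = "Backlog"
    next_action: str = ""

def activate(board: list[Project], project: Project) -> bool:
    """Something new enters Active only if a slot is free,
    i.e. something else shipped or got parked."""
    active = [p for p in board if p.status == "Active"]
    if len(active) >= MAX_ACTIVE:
        return False  # make the tradeoff visible instead of quietly adding
    project.status = "Active"
    return True
```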

For AI work (Claude Code, OpenClaw) - I'd keep a prompt library and template registry alongside the board. Completed work should compound, not get rebuilt.

How would you push back on too many initiatives?

Directly. Data, not opinions.

"Here's what's active. If we add this, one of these three slows down. Which one?"

I don't say no. I make the cost of yes visible. Founders don't resist structure - they resist being blocked. So I stay fast enough that good ideas don't die waiting, and honest enough that pushback is trusted when it counts.

If something really can't wait - I scope a stripped version. 20% effort, 80% learning. Ship that, then decide if the full build earns its spot.

AI Workflows, Prompt Libraries & SOPs

This isn't theoretical for me. Here's what I've built in the last 12 months:

Claude Code projects. I use Claude as core infrastructure daily - not for one-off queries, but for structured, repeatable workflows. I build project-specific prompt libraries, organize them by function, and document what works so the next build is faster.
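One way that library stays organized (a sketch of the idea, not a prescribed layout): prompts live as plain files grouped by function, and a small loader fills in variables so the same template is reused across projects.

```python
from pathlib import Path
from string import Template

# Assumed layout: prompts/summarize/earnings_call.txt, prompts/extract/report_metrics.txt, ...
PROMPT_DIR = Path("prompts")

def load_prompt(function: str, name: str, **variables) -> str:
    """Load a prompt template by function/name and substitute variables."""
    text = (PROMPT_DIR / function / f"{name}.txt").read_text()
    return Template(text).safe_substitute(**variables)

# The same earnings-call summarizer, reused for any ticker
prompt = load_prompt("summarize", "earnings_call", ticker="NVDA", quarter="Q3")
```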

Automation pipelines. At my current company I designed and implemented AI-driven automation that increased automated task coverage by 30% monthly. This included audit of existing processes, identifying automation candidates, building the pipelines, and writing SOPs so the team could maintain them without me.

Project architecture. I maintain organized, documented systems across multiple concurrent workstreams. Prompt templates, workflow diagrams, decision logs, SOP documentation - all structured so completed work compounds.

The judgment call I make constantly: what should be automated vs what needs a human. Most people over-automate or under-automate. I've gotten good at finding the line.

AI Infrastructure Stock Tracking & Insights

Breaking this down into something buildable.

System Architecture

Data Layer
Stock price feeds (API) · Earnings calendar + transcripts · Research reports (manual upload or RSS)
Processing Layer
Claude analysis pipeline · Earnings summarizer · Research report extractor · Weekly insight generator
Output Layer
Weekly digest (formatted for posting) · Alert system (earnings surprises) · Searchable archive
Orchestration
Scheduler (cron or Make) · Pipeline controller: ingest → process → review → publish
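A minimal sketch of that controller, assuming each layer is just a function (stage names mirror the diagram; the bodies are placeholders):

```python
def ingest() -> dict:
    """Pull prices, earnings calendar, and any uploaded reports into one payload."""
    return {"prices": ..., "earnings": ..., "reports": ...}

def process(payload: dict) -> str:
    """Run the Claude analysis pipeline and return a draft weekly digest."""
    return "draft digest text"

def review(draft: str) -> bool:
    """Stage the draft for manual approval; return True only once a human signs off."""
    return False

def publish(draft: str) -> None:
    """Push the approved digest to wherever it gets posted."""
    ...

def run_weekly():
    payload = ingest()
    draft = process(payload)
    if review(draft):   # human gate stays in place until the prompts prove themselves
        publish(draft)
```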

MVP Scope

Cut everything non-essential. One output first: weekly AI infrastructure stock summary, auto-generated and ready to post.

Everything else (real-time monitoring, research report parsing, automated alerts) is Phase 2+.

First 3 Build Steps

1. Define watchlist and data sources

Pick 10-15 tickers. Set up API access for price data and earnings calendar. Test reliability.
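For illustration, yfinance is one free way to sanity-check the price feed before committing to a paid API (the tickers below are a guess at the kind of watchlist, not a recommendation):

```python
import yfinance as yf  # pip install yfinance; assumption: acceptable as a first data source

WATCHLIST = ["NVDA", "AMD", "AVGO", "TSM", "ANET", "VRT", "MU", "ARM", "DELL", "SMCI"]

# Pull a week of daily closes and check every ticker actually returned data
data = yf.download(WATCHLIST, period="5d", interval="1d")["Close"]
missing = [t for t in WATCHLIST if t not in data.columns or data[t].isna().all()]
print("Missing or empty feeds:", missing or "none")
```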

2. Build the Claude prompt chain

Reusable template: raw weekly data in, structured insight out. Test with historical data. Iterate until publishable.
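A sketch of that template with the Anthropic Python SDK (model ID and prompt wording are placeholders to iterate on, not final):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TEMPLATE = """You are an analyst covering AI infrastructure stocks.
Weekly data (JSON): {data}
Return: 3 key moves, 1 earnings takeaway, 1 thing to watch next week.
Keep it under 250 words and name the tickers involved."""

def weekly_insight(weekly_data_json: str) -> str:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",   # placeholder; use whatever model fits the budget
        max_tokens=1024,
        messages=[{"role": "user", "content": TEMPLATE.format(data=weekly_data_json)}],
    )
    return msg.content[0].text
```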

3. Wire the automation

Data pull (scheduled) → Claude processing → staging for review. Manual approval before publishing. Document as SOP.
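The wiring can stay as simple as a cron entry plus a staging directory nothing leaves without approval (paths and schedule below are illustrative):

```python
# Cron, Monday 07:00:  0 7 * * 1  python run_weekly.py
from pathlib import Path
from datetime import date

STAGING = Path("staging")
STAGING.mkdir(exist_ok=True)

def stage_for_review(draft: str) -> Path:
    """Write the draft where it can be read and edited; nothing auto-publishes."""
    path = STAGING / f"digest-{date.today().isoformat()}.md"
    path.write_text(draft)
    return path
```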

Automated vs Manual

Component · Approach · Detail
Stock price data pull · Automated · API on weekly schedule
Earnings calendar check · Automated · API trigger
Research report ingestion · Manual · Founder uploads relevant reports
Claude analysis · Automated · Prompt chain on data
Quality review · Manual · Founder reviews before posting
Publishing · Manual · Automate in Phase 2

Automate the boring parts first. Keep human judgment on output quality until the prompts prove themselves.

Background

Head of Technology at a design and manufacturing company. Distributed team across engineering, automation, and product. Before that - 6 years in SaaS PM, including working directly with Upwork's founder on exactly this kind of structured execution.

Pursuing a Master's in Applied AI. I use Claude daily - not as a novelty, as core infrastructure. Increased automated task coverage by 30% monthly at my current company, mostly by being honest about what should be automated and what shouldn't.

Based in Europe (Berlin/Belgrade). Async-first. Direct communicator. I don't pad bad news and I don't let good ideas collect dust in backlogs.

How this page was made

Structured in Claude, styled in HTML, deployed to Vercel in under 10 minutes. No templates, no frameworks. This application is the demo.