
TECHDADSLIFE.COM | Artificial Intelligence | Behind the Build
I have wanted to write this article for a while. Not because it makes me look clever — there were enough moments during this build where I definitely did not look clever — but because it is genuinely the most interesting tech project I have done in years. And it is one that I think a lot of people will want to replicate.
The question I started with was simple: could I use AI to build, populate, and run an entire tech blog without writing a single line of code myself or manually producing a single article? Not “AI helps me write faster” but “AI does the whole thing, end to end, while I act as the editor and decision-maker.”
The honest answer is: almost. The pipeline runs every morning at 6am, writes a new article, generates a featured image, and publishes it to the live site without me touching anything. But getting there took more troubleshooting than I expected, and the problems were interesting enough to be worth writing about in detail.
The Starting Point: Why Not Just Use WordPress?
I have built WordPress sites before. I find them bloated, slow, and full of plugins that break each other. I wanted something fast, cheap to host, and easy to understand end to end. That pointed me towards a static site generator, and Hugo specifically — it is written in Go, builds sites in seconds, and produces pure HTML that you can host on the cheapest shared hosting available.
The problem with Hugo is that it assumes you know what you are doing. There is no admin panel. No drag and drop. You write markdown files, run a build command, and upload the output. For a developer, that is elegant. For someone used to WordPress, it can feel brutal.
That is where Anthropic’s Claude came in. I use Claude in Cowork mode, which gives it access to my files and a sandboxed Linux shell. I told it what I wanted, and we built it together — me describing the vision, Claude writing the code, running commands, and troubleshooting when things went wrong.
The Stack
Before getting into how it works, here is the technology involved:
| Layer | Tool | Purpose |
|---|---|---|
| Static site generator | Hugo v0.147.0 | Builds the HTML from markdown content |
| Theme | Custom “techdads” | Built from scratch — no templates |
| Article writer | Claude (claude-opus-4-6) | Writes 1,000 to 2,200 word articles |
| Image generator | DALL-E 3 (1792x1024) | Featured image for every article |
| Search | Pagefind v1.5.2 | Client-side search, no server needed |
| Hosting | Shared hosting via FTP | Basic and cheap — works perfectly |
| Newsletter | Beehiiv | Weekly digest at techdadslife.beehiiv.com |
| Analytics | Google Analytics 4 | Standard traffic tracking |
| Scheduler | Cowork scheduled task | 6am daily trigger |
| Language | Python 3 | Glue for the whole pipeline |
The two main scripts are generate_article.py (the content pipeline) and deploy.py (the build and upload system). Together they handle everything from picking a topic to the article being live on the site.
How the Site Architecture Works
The content and the site are completely separate concerns. Articles are just text files. Hugo turns them into HTML. The deploy script uploads the HTML. None of these pieces know about each other — they are just files and scripts.
The Content Generation Pipeline
This is the part that runs every morning. The full pipeline has evolved considerably since I first built it, mostly because of problems I hit. Here is where it is now:
1. Pick the next due topic from the queue
2. Write the article in three steps: research, writing, and an editor pass
3. Generate and optimise the featured image
4. Build the site and upload only the changed files
The three writing steps are worth explaining in more detail.
Step 2a: Research
Before writing a single word, the pipeline makes up to five web searches using the Claude API’s built-in search tool. It asks Claude to find current prices, policies, statistics, and recent news relevant to the article topic. The results come back as a research briefing that gets passed to the writer.
This step was added after a painful lesson, which I will get to shortly. The short version: without it, Claude was confidently writing articles based on training data with a knowledge cutoff, and some of that data was wrong or years out of date. The research step grounds the article in current reality.
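To make the research step concrete, here is a minimal sketch of how such a request could be assembled for the Claude Messages API with the server-side web search tool. The function name, prompt wording, and token limit are my assumptions, not the site's actual code; the model string is taken from the stack table above.

```python
# Hypothetical sketch of the research step. build_research_request is an
# illustrative helper, not the real generate_article.py code.

def build_research_request(topic: str, angle: str, max_searches: int = 5) -> dict:
    """Build a Messages API request asking Claude to research a topic
    with the built-in web search tool before anything is written."""
    return {
        "model": "claude-opus-4-6",          # model name from the stack table
        "max_tokens": 2000,
        "tools": [{
            "type": "web_search_20250305",   # Anthropic's server-side search tool
            "name": "web_search",
            "max_uses": max_searches,        # cap at five searches per article
        }],
        "messages": [{
            "role": "user",
            "content": (
                f"Research the topic '{topic}' (angle: {angle}). "
                "Find current prices, policies, statistics and recent news. "
                "Return a concise research briefing with sources."
            ),
        }],
    }

# The actual call would then be roughly:
#   from anthropic import Anthropic
#   response = Anthropic().messages.create(**build_research_request(...))
req = build_research_request("Best Dashcams for UK Roads in 2026",
                             "UK-specific guide")
```

The `max_uses` cap is what keeps the step to "up to five web searches" rather than letting the model search indefinitely.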
Step 2b: Writing
Claude writes the article using the research briefing as its source of truth, the voice guide (more on that below), and a structural template matched to the article type. There are five templates: analysis for deep-dives, news for quick takes, buyers_guide for product recommendations, explainer for “what even is X?” pieces, and how_to for step-by-step guides.
Step 2c: The Editor Pass
After writing, a second Claude call acts as an editor. It reads the finished article against a 10-point checklist called WRITING_RULES and fixes any violations. Things like em dashes (I cannot stand em dashes), invented family dialogue, British English consistency, and factual claims that are not backed by the research. It does not restructure the article — it just cleans it.
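Some of those rules are mechanical enough to check locally before the editor-model call even runs. Here is a sketch of such a pre-check; the rule list is illustrative and much smaller than the real 10-point WRITING_RULES checklist.

```python
import re

# Hypothetical local pre-check mirroring a few WRITING_RULES items.
# The real checklist is richer; this list is purely illustrative.
AMERICANISMS = {"color": "colour", "favorite": "favourite", "organize": "organise"}

def find_rule_violations(article: str) -> list[str]:
    """Return human-readable descriptions of rule violations found locally."""
    violations = []
    if "\u2014" in article:                  # em dash, banned by the rules
        violations.append("contains em dashes")
    for us, uk in AMERICANISMS.items():
        if re.search(rf"\b{us}\b", article, re.IGNORECASE):
            violations.append(f"American spelling '{us}' (use '{uk}')")
    return violations

demo = find_rule_violations("I love the color \u2014 but do you?")
clean = find_rule_violations("The colour is fine")   # no violations
```

Anything this pass cannot catch mechanically, like invented dialogue or unsupported factual claims, still needs the model-based editor.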
The Ghost-Writer Brief
One of the most important parts of the pipeline is the voice guide — a detailed brief that tells Claude who it is writing as. Without it, the articles sound like generic AI content. With it, they sound like me.
The brief is organised into sections:
Family and life: Married, live in Hampshire, three kids (boys aged 13 and 17, daughter aged 20). Dinner is always at the table. Working man, not wealthy — budget and value matter. No invented family conversations in the articles.
Background: Born in Zimbabwe, grew up in Johannesburg, moved to London at 26, now settled in Hampshire. I take SWR trains to London Waterloo and use Heathrow and Gatwick.
Tech I actually own: Bambu Lab P2S 3D printer in the garage, Mac Mini M4, ASUS ROG setup, Tesla Model 3 Long Range, Samsung Galaxy Ultra, PS5, Xbox Series X, DJI Osmo Pocket 3, Nikon D800, HDHomeRun for recording live TV, mesh WiFi network. The brief includes all of this so articles can reference real gear rather than invented equipment.
Things I have built myself: Ender 3D printers from kits, RC cars, drones, a local AI stack using Ollama and LM Studio, a Plex home media server on NAS. This is the detail that makes the tech content feel earned.
Personality: Engineer mindset, methodical, pragmatic, fair. Hates inefficiency and queuing. Wants to laugh. These traits inform the writing voice without ever being stated directly in the articles.
Writing rules: British English throughout, prices in GBP first, no em dashes, no hypothetical family conversations, first person, never mention my age.
The Topic Queue
Articles do not get made up on the fly (well, they can be if the queue is empty, but that is the fallback). The planned content lives in topic_queue.json, a structured list of upcoming articles:
{
"date": "2026-05-01",
"title": "Best Dashcams for UK Roads in 2026",
"category": "Tech Bench",
"template": "buyers_guide",
"angle": "UK-specific guide focusing on ANPR-safe mounting and dual-cam setups for school run cars",
"image_prompt": "Dashcam mounted on a car windscreen, motorway in background, morning light",
"image_style": "hyper_realistic",
"done": false
}
Each entry has a date, a title, an editorial angle, the article template, and an image description. When the pipeline runs, it picks the oldest overdue entry, processes it end to end, marks it done, and moves on. I can queue up weeks of content in advance and the site runs itself.
The Deploy Script
The deploy script is surprisingly capable for something that just uploads files. It does four things:
- Runs hugo --minify to build the full site into site/public/
- Runs Pagefind to rebuild the search index
- Compares MD5 hashes of every local file against a cache from the last run
- Uploads only files that are new or changed via FTP
On a normal daily publish, only 15 to 30 files need uploading — the new article page, the homepage (which shows the latest article), the category page, the RSS feed, and the search index. The rest of the site’s 3,300+ files are identical and get skipped. What would take eight minutes as a full upload takes under two minutes as an incremental one.
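The hash-cache mechanism can be sketched in a few lines. This is an illustrative version, assuming a JSON cache file from the previous run; the filenames and function names are mine, not the real deploy.py.

```python
import hashlib
import json
import tempfile
from pathlib import Path

# Hypothetical sketch of the incremental-upload check. hash_cache.json is
# an assumed name for the cache persisted between runs.

def file_md5(path: Path) -> str:
    return hashlib.md5(path.read_bytes()).hexdigest()

def changed_files(public_dir: Path, cache_path: Path) -> list[Path]:
    """Return files under public_dir that are new or differ from the cache,
    and update the cache for the next run."""
    cache = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    changed = []
    for path in sorted(public_dir.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(public_dir))
            digest = file_md5(path)
            if cache.get(rel) != digest:
                changed.append(path)
                cache[rel] = digest
    cache_path.write_text(json.dumps(cache))   # persist for the next run
    return changed

root = Path(tempfile.mkdtemp())
pub = root / "public"
pub.mkdir()
(pub / "index.html").write_text("hello")
cache = root / "hash_cache.json"
first_run = changed_files(pub, cache)    # everything is new on the first run
second_run = changed_files(pub, cache)   # nothing changed, nothing to upload
```

Only the files returned by the second step would go over FTP, which is why a daily publish moves 15 to 30 files instead of thousands.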
The Troubleshooting Section (This Is the Honest Part)
Here is where it gets interesting. Nothing went smoothly first time. These are the real problems I hit, in roughly the order they happened.
Problem 1: The Hugo Binary Architecture Mismatch
The Cowork scheduler runs in a sandboxed Linux environment. My Mac Mini runs macOS. The deploy script was downloading Hugo for linux-amd64 (the standard x86 architecture), but the Cowork sandbox runs on arm64 (ARM). The amd64 binary would download, fail silently, and the build would not happen.
The fix was to make the deploy script detect the CPU architecture at runtime and download the matching binary:
import platform
machine = platform.machine().lower()
arch_tag = "linux-arm64" if machine in ("arm64", "aarch64") else "linux-amd64"
url = f"https://github.com/gohugoio/hugo/releases/download/v0.147.0/hugo_extended_0.147.0_{arch_tag}.tar.gz"
Two lines of code. Took longer than it should have to diagnose.
Problem 2: Articles Publishing But Not Appearing
This one was genuinely confusing. The pipeline ran, the article was generated, the deploy script said it uploaded successfully. But the article was not on the site. The homepage still showed the previous day’s content.
The cause: generate_article.py was writing the article date as 2026-04-21T06:00:00 — six in the morning UK time. The sandbox runs in UTC. So when Hugo built the site, it was 05:23 UTC, and a T06:00:00 article without a timezone was treated as 37 minutes in the future. Hugo excludes future articles by default.
The fix was trivially simple. Write plain dates:
date: 2026-04-21
No time component, no timezone ambiguity. Hugo includes anything dated today or earlier. One line changed, never happened again.
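The failure mode is easy to reproduce with plain datetime arithmetic. This is an illustration of the comparison, not the site's code: a naive 06:00 stamp read at 05:23 UTC sits 37 minutes in the future, so Hugo excludes the article.

```python
from datetime import datetime

# Illustration of the bug: the article's timestamp versus the build clock.
article_time = datetime(2026, 4, 21, 6, 0, 0)   # naive T06:00:00 from front matter
build_time = datetime(2026, 4, 21, 5, 23, 0)    # the sandbox's UTC clock at build
is_future = article_time > build_time            # True: treated as a future post

# A plain date is effectively midnight, safely in the past all day.
plain_date = datetime(2026, 4, 21)
still_future = plain_date > build_time           # False: the article is included
```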
Problem 3: Deploy Timeouts
Full deploys were timing out. The Cowork environment has a 10-minute session limit, and uploading 4,800 files over FTP to shared hosting takes longer than that.
The solution was to split the process into two commands:
python3 deploy.py --full --skip-images # Build Hugo, upload everything except unchanged images
python3 deploy.py --no-build # Skip the build, just upload whatever is in public/
The --skip-images flag means that on a big template change (when every HTML file needs re-uploading because it references a new fingerprinted CSS filename), the 3,000+ image files are hash-checked rather than force-uploaded. The --no-build flag means if the first command times out mid-upload, you can resume without rebuilding from scratch.
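For anyone building something similar, the flag handling might look like this with argparse. This is an assumed sketch of deploy.py's interface, not the actual script.

```python
import argparse

# Hypothetical sketch of deploy.py's flag parsing, matching the two
# commands shown above.
parser = argparse.ArgumentParser(description="Build and upload the site")
parser.add_argument("--full", action="store_true",
                    help="upload every file, not just changed ones")
parser.add_argument("--skip-images", action="store_true",
                    help="hash-check images instead of force-uploading them")
parser.add_argument("--no-build", action="store_true",
                    help="skip the Hugo build and upload public/ as-is")

args = parser.parse_args(["--full", "--skip-images"])
```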
Problem 4: Browser Image Caching
I replaced two featured images with better official product photos — the Bambu Lab X2D and the Creality SPARKX i7. Uploaded the new images to the same file paths, deployed, refreshed the browser. The old images were still showing.
The problem is that browsers cache images aggressively by path. If bambu-x2d.png was cached as an old image, requesting bambu-x2d.png again returns the cached version regardless of what is on the server.
The fix: rename the file. bambu-x2d-official.png is a new URL the browser has never seen before, so it downloads fresh. Update the markdown to point at the new filename. Simple once you understand the mechanism — frustrating when you do not.
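A more systematic fix is to derive the filename from the image content itself, so a changed image always gets a new URL automatically. This helper is hypothetical, not part of the current pipeline.

```python
import hashlib

# Hypothetical cache-busting helper: append a short content hash to the
# filename so replacing an image always produces a URL the browser has
# never cached.

def fingerprinted_name(name: str, data: bytes) -> str:
    """Return e.g. 'bambu-x2d-1a2b3c4d.png' for the given file content."""
    stem, _, suffix = name.rpartition(".")
    digest = hashlib.md5(data).hexdigest()[:8]
    return f"{stem}-{digest}.{suffix}"

new_name = fingerprinted_name("bambu-x2d.png", b"new image bytes")
```

Hugo's asset pipeline already does exactly this for CSS (the "fingerprinted CSS filename" mentioned earlier); applying the same idea to article images would remove the manual rename step.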
Problem 5: The Hallucinated CES Article
This is the one that prompted the biggest structural change. The pipeline generated an article about attending CES. It was confident, well-written, and wrong on multiple important points:
- It said CES is open to anyone who registers. It is not. CES is a trade-only event. You need proof of industry affiliation to get a badge.
- It said January is off-peak for Las Vegas hotels so prices are reasonable. The opposite is true. CES is the biggest event Vegas hosts. Hotels go up 300 to 500 per cent during CES week.
- It quoted a UK government grant programme for attending international trade shows. That programme was scrapped in 2021.
- It described a large, growing UK pavilion at CES. The UK presence has actually been shrinking, dropping from over 100 exhibitors in 2019 to fewer than 30 in 2026.
All of this was stated confidently, with no caveats. A reader planning a trip to CES based on that article would have had a very unpleasant surprise.
The root cause is that Claude’s training data has a knowledge cutoff, and it does not know what it does not know. It fills gaps with plausible-sounding information from whatever it has seen — which for a topic like CES might include articles that are three or four years old.
The fix was the web research step described earlier. Before writing anything, the pipeline now searches for current information and passes it as verified context. The writer is told: if a fact is not in the research and you cannot be certain it is current, omit it rather than state it. The editor pass then checks any remaining claims against the research briefing.
Problem 6: Sidebar Showing the Wrong Categories
The sidebar on the right side of the site had a dynamic categories widget that listed every category used in any article’s front matter, sorted by post count. When I restructured from 13 categories to 8, two old articles still had stale category tags (Wearables and Smart Home) in their front matter. These showed up in the sidebar alongside the 8 official categories.
The solution was to fix the source data rather than hack the template. I found the two articles with incorrect tags, removed the stale categories, and redeployed. The dynamic widget then reflected reality accurately.
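Finding the offending articles can be automated with a small front-matter audit. This sketch assumes a one-line YAML-style categories list and an illustrative official set; both are assumptions for the example.

```python
import re

# Hypothetical front-matter audit. The official category list here is an
# illustrative subset, and the front-matter format is assumed.
OFFICIAL = {"Tech Bench", "AI", "Smart Money", "Family Tech"}

def stale_categories(front_matter: str) -> set[str]:
    """Return categories in an article's front matter not in the official set."""
    match = re.search(r'categories:\s*\[(.*?)\]', front_matter)
    if not match:
        return set()
    cats = {c.strip().strip('"\'') for c in match.group(1).split(",")}
    return cats - OFFICIAL

fm = 'title: "Old Post"\ncategories: ["Wearables", "Tech Bench"]'
flagged = stale_categories(fm)   # flags "Wearables" as stale
```

Run over every markdown file in the content directory, this turns a manual hunt into a one-second check.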
Where It Is Now
The site runs daily without intervention. Each morning:
- A new 1,500 to 2,200 word article is researched, written, reviewed, and published
- A unique featured image is generated and optimised
- The search index is rebuilt
- Only changed files are uploaded
The pipeline takes about 8 to 10 minutes end to end. The research and review steps added roughly two minutes compared to the original single-pass write, but the quality improvement is significant. Articles are no longer stating outdated policies as current fact.
The queue currently has around 16 articles scheduled across the next few weeks. When that runs out, the pipeline auto-generates a new topic based on the site’s categories and the last 30 published titles (to avoid repetition). In practice, I keep the queue topped up because I want control over what gets published, but the fallback is there.
What I Would Do Differently
If I were starting this again, a few things I would change:
Build the research step first. The hallucinated CES article was embarrassing. Web search should have been part of the pipeline from day one, not bolted on after a problem.
Test the scheduler in its actual environment sooner. Most of the arm64/amd64 issues and the deploy timeout issues only surfaced when the scheduled task ran — not during manual testing on my Mac. The sandbox is a different environment to my machine, and I should have tested in it earlier.
Date fields: never use timestamps. The plain date lesson was obvious in retrospect. If you are generating content on a schedule and deploying across timezone boundaries, a T06:00:00 timestamp will cause you problems. Plain YYYY-MM-DD is always the right answer for blog post dates.
The Next Steps
A few things still on the list:
- Swap the Cowork scheduler for macOS launchd. The scheduler works but the sandbox is inherently more fragile than running directly on my Mac Mini. I am watching the current setup for another week or so before making the switch.
- Real affiliate links. The pipeline generates placeholder links formatted as [AFFILIATE: Product Name]. These get styled into product cards on the site, but the actual Amazon URLs need adding manually. I want to automate that lookup.
- Articles I write myself. The Samsung earbuds piece and the Spotify fix guide are coming. Those are mine from scratch, which feels different and more personal.
If any of this is useful to you and you want to try something similar, the tech is all accessible and most of it is free to get started with. Hugo is open source, the Anthropic and OpenAI APIs are pay-per-use (the daily article costs roughly 20 to 30 pence in API credits), and basic shared hosting is a few pounds a month.
The interesting part is not the code. It is the architecture — deciding how the pieces fit together, what gets automated, and what still needs a human in the loop. That balance is still being calibrated. But for a daily tech blog running on a budget, it is working better than I expected.
Questions or want to know more about any specific part of the build? Find me on X / Twitter or subscribe to the weekly newsletter below.
If you want to stay across what I am building, the weekly newsletter covers the site updates, tech I am using, and anything interesting I find during the week. Subscribe free at techdadslife.beehiiv.com.
