Blog

Thoughts, notes, and observations as I journey through the world of tech.

Posts

  • A skill issue

    TL;DR: LLMs can accelerate development dramatically, but they also widen the ownership gap if you don’t understand your system deeply. Without discipline, agents tend to duplicate logic, bloat control files, and create subtle technical debt. Tooling like persistent “skills” helps, but clean architecture is still a developer responsibility.

  • The curse of context windows

    TL;DR: Large-document extraction with LLMs fails less from “bad reasoning” and more from hard output limits. JSON structured outputs waste tokens on repeated keys and still truncate on big PDFs. Switching to CSV reduces overhead but doesn’t fix truncation; your output can still be cut off silently. The reliable fix is chunking the document into page batches, processing chunks asynchronously with strict concurrency limits (semaphores), and stitching results back in order; run summarization as a separate pass.
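    The chunk-process-stitch pattern from that TL;DR can be sketched in a few lines of asyncio. This is a minimal illustration, not the post's actual code: `extract_chunk` is a hypothetical stand-in for a real LLM API call, and the batch size and concurrency limit are assumed values.

    ```python
    import asyncio

    async def extract_chunk(chunk_id: int, pages: list[str]) -> str:
        # Hypothetical placeholder: a real implementation would send
        # this batch of pages to an LLM extraction call here.
        await asyncio.sleep(0.01)
        return f"rows-for-chunk-{chunk_id}"

    async def extract_document(pages: list[str], batch_size: int = 10,
                               max_concurrency: int = 4) -> list[str]:
        # Semaphore enforces a strict cap on in-flight LLM requests.
        sem = asyncio.Semaphore(max_concurrency)

        async def bounded(i: int, batch: list[str]) -> str:
            async with sem:
                return await extract_chunk(i, batch)

        # Split pages into fixed-size batches.
        batches = [pages[i:i + batch_size]
                   for i in range(0, len(pages), batch_size)]
        # gather preserves argument order, so results stitch back
        # together in the original page order.
        return await asyncio.gather(*(bounded(i, b)
                                      for i, b in enumerate(batches)))

    results = asyncio.run(
        extract_document([f"page {n}" for n in range(25)]))
    ```

    With 25 pages and a batch size of 10, this yields three results, in order, regardless of which chunk finishes first.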

subscribe via RSS