Blog

About this blog

Thoughts, notes, and observations as I journey through the world of tech.

Posts

  • Dev Blog - Securing the Remote MCP Adapter

    TL;DR: I built Remote MCP Adapter to solve remote file and artifact handling. Then I realized the same adapter could also be abused unless it actively defends against poisoned tool metadata and session misuse. So v0.3.0 is about turning that middleware into a safer boundary, not just a convenient one.

  • Dev Blog - Proving the Remote MCP Adapter's Security Guardrails, Part 1

    TL;DR: In the last post, I said v0.3.0 would harden the Remote MCP Adapter against poisoned tool metadata and weak session semantics. This post is the proof. I built a mutable mock MCP server, ran live adapter instances against it, and captured evidence for four security controls: tool-definition pinning, metadata sanitization, description minimization, and session-integrity binding.

  • Crossing limits

    TL;DR: Too many MCP tools in the context window slow agents down and worsen tool selection. GitHub tackles this with clustering and embedding-guided routing. In remote-mcp-adapter, Code-Mode avoids the problem by letting agents discover tools progressively instead of loading them all upfront.

  • Remote MCPs as local

    TL;DR: Check out remote-mcp-adapter, which provides stateful proxies for upstream MCP servers and handles the file-exchange interaction with “file-touching” tools.

  • A skill issue

    TL;DR: LLMs can accelerate development dramatically, but they also widen the ownership gap if you don’t understand your system deeply. Without discipline, agents tend to duplicate logic, bloat control files, and create subtle technical debt. Tooling like persistent “skills” helps, but clean architecture is still a developer responsibility.

  • The curse of context windows

    TL;DR: Large-document extraction with LLMs fails less from “bad reasoning” and more from hard output limits. JSON structured outputs waste tokens on repeated keys and still truncate on big PDFs. Switching to CSV reduces overhead but doesn’t fix truncation: your output can still cut off silently. The reliable fix is chunking the document into page batches, processing chunks asynchronously with strict concurrency limits (semaphores), and stitching results back in order; run summarization as a separate pass.
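
    The chunk-and-stitch pattern from that last TL;DR can be sketched in a few lines of asyncio. This is a minimal illustration, not code from the post: `extract_batch` is a hypothetical stand-in for the real LLM call, and the batch size and concurrency limit are arbitrary example values.

    ```python
    import asyncio

    MAX_CONCURRENCY = 4   # example cap on in-flight LLM calls
    BATCH_SIZE = 10       # example pages per chunk

    async def extract_batch(batch_index, pages, sem):
        async with sem:  # semaphore enforces the concurrency limit
            await asyncio.sleep(0)  # placeholder for the real async LLM call
            # tag results with the batch index so order can be restored later
            return batch_index, [f"row-from-{p}" for p in pages]

    async def extract_document(pages):
        sem = asyncio.Semaphore(MAX_CONCURRENCY)
        # chunk the document into fixed-size page batches
        batches = [pages[i:i + BATCH_SIZE] for i in range(0, len(pages), BATCH_SIZE)]
        tasks = [extract_batch(i, b, sem) for i, b in enumerate(batches)]
        results = await asyncio.gather(*tasks)
        # stitch results back in document order before any further pass
        results.sort(key=lambda r: r[0])
        return [row for _, rows in results for row in rows]

    rows = asyncio.run(extract_document([f"page-{n}" for n in range(1, 26)]))
    ```

    Summarization would then run as a separate pass over `rows`, rather than being squeezed into the same truncation-prone extraction call.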

subscribe via RSS