IT & Management ENG
August 21

From Idea to Release in Just a Couple Evenings: An AI-Powered Chrome Extension Built… with AI

Preamble

I saw a friend’s post on LinkedIn: “Just built my first Chrome extension.”

And I thought — why not give it a shot myself? Curiosity kicked in. That evening I sat down, hacked together a prototype, polished the UX/UI the next day, and shipped it to the Chrome Web Store. A week later, it got approved.

Even though I have a technical background, I had never built extensions before. But mix AI + the right prompts + a product mindset — and you’ve got a mini-product up and running in just a couple of evenings.

I’ve already recorded a quick demo, passed the verification, and published the extension in the Chrome Web Store 🏄

In this article, I’ll share how I started and what stack I picked, the features I implemented, screenshots, and a link so you can try it out yourself. I’ll also talk about how I coded it (with plenty of help from AI) and give some practical tips — purely from my own experience.

At the bottom, you’ll find links to my contacts, the extension (if you want to test it), docs, and the English demo video.

Why Build an Extension in the First Place

I’m not the type who hoards plugins, but I do keep a few essentials pinned: Phantom/MetaMask, Loom, a site analysis tool, Grammarly, AdBlock, a translator, iCloud Passwords.

So yeah, I’m an active user — and some extensions are genuinely useful. That got me thinking: what kind of pain points could I solve with a super simple extension, something hacked together in an evening, without over-engineering?

I went back to my own workflow. A huge chunk of my time goes into email and LinkedIn replies — recruiters, teammates, startups, crowdfunding, visas, manufacturing, suppliers in China… Every message needs to be read, thought through, and answered in the right tone.

At first, I only used Grammarly to fix mistakes but wrote the actual replies myself. Later I started pasting conversations into ChatGPT, crafting prompts, and getting draft responses. That quickly became my main approach to work emails.

But the loop was always the same: copy the context from Gmail/LinkedIn → paste into ChatGPT → ask for a draft → paste it back. It worked, but it broke focus. Not a dealbreaker, but definitely clunky.

So the idea was simple: make replies happen in-place, no switching windows. One click pulls in the email context, lets you add a quick instruction (prompt), generates a draft in the right tone, and drops it straight into the reply box.

By the way — if you’d like quicker updates on my journey into Web3, AI, and product management, I share them more often on my Telegram channel. Feel free to hop in.

Here’s what that means in practice:

  • No more copy-paste: context and draft live inside Gmail.
  • Tone control: Formal / Neutral / Friendly — pick what fits.
  • Flexible context: trim or expand before sending it to the model.
  • One-click reply: Generate → Regenerate (if needed) → Apply.
  • Zero setup overhead: just drop in your API key, pick a model, set a default system prompt if you want — and you’re good to go, right inside Gmail. (More on that later.)

What the MVP Looks Like: A 60-Second Walkthrough

I want you to get a feel for the product right away. Here’s one quick real-world scenario with just the key steps and screenshots.

No long guides — just the basics:

1. Configuration

Pin the extension icon, open Settings, drop in your OpenRouter API key, and hit save.

Choose a model: Pick a free one to get started, or plug in a paid option if you already have it.

Set a system prompt: Define your default prompt — the general rules your replies will follow.

2. Generate a reply with AI in Gmail

Open an email in Gmail, hit Reply — you’ll see a little “A” icon right next to it.

Click it — a window pops up with the email context already loaded. Add a custom prompt if needed, and choose the tone (Formal / Neutral / Friendly).

Generate → Regenerate → Apply — and the draft drops straight into your reply field.

Want a full step-by-step guide with screenshots? Check out the docs on GitBook.

How It Works Under the Hood (and Why It’s Built This Way)

At the start, I saw two possible paths:

A. Run my own server and API key

I’d proxy all the requests to the models, pay for the tokens, and take on the risks of storing and protecting private emails. For an MVP, that felt way too heavy: payments, data storage, security, and extra proof to show during Chrome Web Store review.

B. Let users bring their own key, keep everything local

This is the route I went with. The user drops in their own API key, the extension grabs the email content directly from the DOM, and sends it to the LLM from the user’s side. I don’t see or store any data. Review becomes easier, and the security model is more transparent for users.
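To make the "everything stays local" claim concrete, here is a minimal sketch of that flow. The function names are illustrative (not the extension's actual code), but the endpoint and the OpenAI-compatible request shape match OpenRouter's public API: the user's key and the email text travel straight from the browser to the model provider.

```javascript
// Sketch of the bring-your-own-key flow. Everything below runs in the
// user's browser; no intermediate server ever sees the email or the key.

// Build an OpenAI-compatible chat payload for OpenRouter.
function buildChatRequest(model, systemPrompt, emailContext, userInstruction) {
  return {
    model, // e.g. a free model id picked in Settings
    messages: [
      { role: "system", content: systemPrompt },
      {
        role: "user",
        content: `Email thread:\n${emailContext}\n\nInstruction: ${userInstruction}`,
      },
    ],
  };
}

// Send it directly with the user's own key; the draft comes back
// in the standard chat-completions response shape.
async function generateDraft(apiKey, request) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(request),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The email context itself is scraped from Gmail's DOM by the content script before being passed into `buildChatRequest` — that part is fragile by nature, since Gmail's markup can change at any time.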

Next question: which AI to use?

GPT was the obvious candidate — but not everyone is ready to mess around with API keys, and paying upfront is a UX killer. I wanted a free option so people could install the extension and test it right away. Also handy for me — I didn’t want every debug request to burn dollars.

So I dug around and found a great option: OpenRouter. It’s an aggregator that connects multiple models (including GPT) and lets you give users free models out of the box. Plug it in — and you can start testing with zero friction.

Why OpenRouter? A single API across many LLMs. In just a couple of clicks you get a key, you instantly have access to free models, and if you want, you can switch to GPT, Gemini, DeepSeek, and more. The integration code is the same everywhere — fewer if/else branches.

That led to a dead-simple settings panel:

  • Field for the API key
  • Dropdown with available models (pulled dynamically based on the key)
  • System prompt — general rules for all replies (language, context handling, tone, etc.)
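For the dropdown, OpenRouter exposes a model catalog at `/api/v1/models`, so the list can be pulled dynamically once a key is in place. A rough sketch of how that might look — the pricing fields follow OpenRouter's published schema (string amounts, `"0"` for free models), but treat the exact shape as an assumption and check the API docs before relying on it:

```javascript
// Pull the model catalog with the user's key.
async function fetchModels(apiKey) {
  const res = await fetch("https://openrouter.ai/api/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const { data } = await res.json();
  return data; // array of { id, pricing: { prompt, completion }, ... }
}

// Surface free models first so a new user can test at zero cost.
function sortFreeFirst(models) {
  const isFree = (m) =>
    Number(m.pricing?.prompt ?? 1) === 0 &&
    Number(m.pricing?.completion ?? 1) === 0;
  return [...models].sort((a, b) => Number(isFree(b)) - Number(isFree(a)));
}
```

Sorting free models to the top of the dropdown is a small thing, but it is exactly what makes "install and test right away" work.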

How I Built It: Prompt-Coding Instead of a JS Marathon

In 2025, of course, you can still “code everything by hand,” but for me speed and results matter more. I haven’t been a full-time developer for years — I started my career in code, but moved into product about eight years ago.

That said, nostalgia hit me recently and I built my own arbitrage bot: TypeScript on the client side, Rust for the on-chain smart contracts on Solana. So yeah, I can still roll up my sleeves and dive in when needed.

(If you’re curious, I’ve written a few deep dives on building that Web3 bot for Solana.)
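One concrete example of that layered-in logic: the tone switcher. It can be as simple as mapping each tone to an extra instruction appended to the system prompt — the wording below is illustrative, not the extension's actual prompts:

```javascript
// Map the Formal / Neutral / Friendly toggle to a system-prompt suffix.
const TONE_INSTRUCTIONS = {
  formal: "Write in a formal, professional register. Avoid contractions.",
  neutral: "Write in a neutral, businesslike tone.",
  friendly: "Write in a warm, friendly tone; contractions are fine.",
};

// Combine the user's default system prompt with the chosen tone,
// falling back to neutral for anything unexpected.
function composeSystemPrompt(basePrompt, tone) {
  const extra = TONE_INSTRUCTIONS[tone] ?? TONE_INSTRUCTIONS.neutral;
  return `${basePrompt}\n\n${extra}`;
}
```

Keeping tone as a prompt suffix rather than separate prompts per tone means the user's own default rules always stay in effect.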

But as a Product Manager, I keep it simple: if you can hack together an MVP in a couple of evenings to test whether it flies — why build a blimp?

So I went with coding-through-prompts. Started with the UI (popup, fields, buttons, selectors), then layered in the logic: parsing the DOM, calling OpenRouter, inserting replies, and some UX polish.

Claude

After I posted about the bot, my Telegram channel blew up. Some devs joined — we spun up a private on-chain/Web3 dev community. Others were potential bot users.

A few folks DM’d me with their own takes, and one guy told me flat out: “Forget GPT, try Claude — it’s miles better for coding.”

I remembered that tip and tested it recently. Honestly? Claude crushes it right now. It handles long contexts and large files without losing the thread, refactors HTML/CSS/JS in one go, while GPT often spits back broken snippets or chokes on big files. Claude just powers through.

I hit the free limits pretty fast and upgraded (~$20/month). As the project grew, the limits kicked in more often — but that was expected.

ChatGPT

I used it more tactically — quick JS/DOM fixes, finding parsing packages, or even generating a logo and some graphics.

The loop was fast and pragmatic:

  • Generate / fix code with a prompt
  • Drop files into VS Code
  • Load as unpacked in Chrome
  • Test manually

That’s the beauty of prompt-driven dev: tight iterations, instant feedback. Sometimes I even used GPT just to help craft better prompts.

Timeline — Nothing Fancy

  • Evening 1: Working MVP — DOM parsing, test calls to OpenRouter, model list, draft replies in the popup.
  • Day 2: UX and parsing polish, smoothing rough edges.
  • Day 3 (half day): Final tweaks and prep for Chrome Web Store.

Publishing to the Chrome Web Store: What to Know

The extension profile turned out way more detailed than I expected — descriptions, policies, regions, screenshots in exact sizes. My first submission actually got rejected: I had left extra permissions/dependencies in manifest.json that I wasn’t using. Cleaned it up, resubmitted, and it got approved on the second try. Now it’s live and running smoothly.

A few tips if you’re shipping an MVP extension:

  • Keep manifest.json lean. Don’t include anything you’re not actively using.
  • Skip the server for MVP. Let users provide their own API key — it simplifies review and avoids privacy headaches.
  • Provide a test API key for moderation and initial checks — it speeds up approval.
  • Avoid third-party APIs unless you absolutely need them.
  • Don’t load external scripts. Google hates that — theoretically the code could be swapped and harm users. Keep everything in the extension codebase.
  • Prep onboarding screenshots upfront: where to paste the key, pick a model, and how the popup looks.
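For reference, here is roughly what "lean" can look like for a Gmail-only, bring-your-own-key extension under Manifest V3. The name, file names, and match patterns are illustrative; the point is how little you actually need to declare:

```json
{
  "manifest_version": 3,
  "name": "AI Reply Assistant",
  "version": "1.0.0",
  "description": "Generate email replies in place with your own OpenRouter key.",
  "permissions": ["storage"],
  "host_permissions": ["https://mail.google.com/*"],
  "content_scripts": [
    {
      "matches": ["https://mail.google.com/*"],
      "js": ["content.js"]
    }
  ],
  "action": { "default_popup": "popup.html" }
}
```

Depending on where you call the model API from (content script vs. background service worker), you may also need the API's origin in `host_permissions` — check the Chrome docs on cross-origin requests before submitting, because an unexplained permission is exactly what gets flagged in review.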

I also put together proper documentation and a Privacy Policy to make the process smoother.

What’s Live Now and What’s Next

Current functionality:

  • Pulls email context from the DOM in Gmail
  • Default system prompt + custom prompt field for each reply
  • Tone switcher: Formal / Neutral / Friendly
  • Generate → Regenerate → Apply, with auto-insertion into the reply field
  • Settings panel: OpenRouter API key, model selector, link to GitBook docs

Where it’s heading:

  • Platform support: LinkedIn → Glassdoor → Jinni (same in-place approach)
  • Design refresh: cleaner visuals, preset styles
  • Onboarding flow: 30-second quick start, no docs required
  • Smarter context filtering: pick which paragraphs/participants to include
  • Prompt templates: save and reuse for different goals in one click

Try the Extension

I’d really appreciate feedback: what feels smooth, what gets in the way, and what’s missing. Based on interest, I’ll prioritize features for the next release :)

A Few Words on AI Tools More Broadly

This isn’t my first time closing work gaps with AI. For my arbitrage bot Orbion, I built the SPA on Vercel. For quick audio/video transcriptions, I set up Whisper to run locally on my laptop.

Next up, I’m planning a simple cross-platform mobile app — again leaning on existing AI models to solve very specific product problems for a clearly defined audience.

Nothing overly complex here: the right packaging and solid UX often matter more than retraining models. I use AI every day and see how much of a boost it gives. If anyone reading this is interested in collaborating on launching a couple of these mini-products — I’m open. I can build them solo (tech background helps), but it’s always faster and more fun to do it together.

My socials.

Follow if you’re curious 🗿