Back to Blog
March 10, 2025 · 6 min read

The AI Tools I Actually Use Every Day as a Senior Engineer

AI Tools · Developer Productivity · LLMs · Code Generation · Engineering

There's a lot of noise in the AI tools space. Every week brings another "revolutionary" tool that promises to 10x your productivity. After testing dozens of them over the past two years, here are the ones that actually stuck in my workflow — and more importantly, how I use them to get real work done.

1. Claude for Architecture & Code Review

Claude has become my go-to thinking partner for architectural decisions. Where ChatGPT often gives me "sounds good" answers, Claude pushes back on my assumptions and finds edge cases I miss.

How I actually use it:

  • Architecture reviews: Before building a new service, I describe the requirements and constraints, then ask Claude to poke holes in my proposed design. It's caught race conditions and scaling bottlenecks I would have discovered much later.
  • Code review assistance: I paste complex PRs and ask for a review focused on security, performance, or maintainability. It catches things like SQL injection vectors and missing error boundaries that are easy to overlook in large diffs.
  • Documentation generation: After writing code, I use Claude to generate API documentation, sequence diagrams (in Mermaid), and architecture decision records (ADRs).
# Example: I use Claude to help design state machines.
# Before writing code, I describe the workflow in natural language
# and get back a validated state diagram.

"""
Prompt I actually use:
"I need a state machine for order processing. States: pending,
payment_processing, paid, fulfilling, shipped, delivered, cancelled,
refunded. What transitions am I missing? What edge cases should I handle?"
"""

What it's NOT good for: Generating boilerplate code in frameworks it hasn't seen much of. For that, I use Copilot.

2. GitHub Copilot for In-Editor Completion

I was skeptical of Copilot for months. Then I realized I was using it wrong — treating it like a code generator instead of an autocomplete on steroids.

The workflow that works:

  1. Write a clear function signature and docstring
  2. Let Copilot suggest the implementation
  3. Review, adjust, move on
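As a concrete example of that workflow: I write the signature and docstring myself, and the body below is the kind of implementation Copilot typically suggests (this is a made-up function, not a real suggestion transcript):

```python
# Step 1: I write the signature and docstring.
# Step 2: Copilot proposes a body much like this one.
# Step 3: I review it, adjust, and move on.
def chunk_list(items: list, size: int) -> list[list]:
    """Split `items` into consecutive chunks of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]
```

The clearer the docstring, the better the suggestion — most of the leverage is in step 1, not step 2.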

Where Copilot really shines is repetitive patterns. Writing database migration files, test fixtures, API endpoint boilerplate — it saves me 30-40% of keystroke time on these tasks.

My actual settings:

  • I keep it enabled for Python, TypeScript, and SQL
  • I disable it for configuration files (too many wrong suggestions)
  • I use the chat feature for explaining unfamiliar codebases

3. Cursor IDE for Complex Refactoring

Cursor changed how I approach large refactoring tasks. The ability to select a block of code, describe the transformation I want, and get a diff preview is incredibly powerful.

Real example from last month:

I needed to migrate 40+ API endpoints from Flask to FastAPI. Instead of manually rewriting each one, I used Cursor to:

  1. Select a Flask route handler
  2. Prompt: "Convert to FastAPI with Pydantic models for request/response, add proper status codes and error handling"
  3. Review the diff, accept or adjust
  4. Repeat

What would have taken a week took two days, and the output quality was consistently high because I was reviewing every change.

4. Qdrant + LangChain for Knowledge Management

This isn't a "tool" in the traditional sense, but I've built an internal knowledge base using Qdrant and LangChain that my entire team uses daily.

The setup:

  • All our internal documentation, runbooks, and post-mortems are indexed in Qdrant
  • A Slack bot powered by LangChain lets anyone query the knowledge base in natural language
  • It cites sources, so you can verify answers
from qdrant_client import QdrantClient
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAIEmbeddings

# Our internal knowledge base setup
qdrant_client = QdrantClient(url="http://localhost:6333")
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
vectorstore = Qdrant(
    client=qdrant_client,
    collection_name="team_knowledge",
    embeddings=embeddings,
)

# MMR retrieval: fetch 20 candidates, return the 5 most diverse
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 5, "fetch_k": 20},
)
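On top of the retriever, the Slack bot appends citations to every answer. A minimal sketch of that formatting step — with plain `(text, source)` tuples standing in for the `Document` objects the retriever actually returns, so the sketch has no dependencies:

```python
# Hypothetical helper the Slack bot uses to append citations to an answer.
# `docs` stands in for retrieved Document objects; here each entry is a
# (chunk_text, source_path) tuple.
def format_answer_with_sources(answer: str, docs: list[tuple[str, str]]) -> str:
    """Return the answer followed by a deduplicated, numbered source list."""
    sources = sorted({source for _, source in docs})
    lines = [answer, "", "Sources:"]
    lines += [f"  [{i}] {src}" for i, src in enumerate(sources, 1)]
    return "\n".join(lines)
```

Deduplicating and numbering the sources keeps multi-chunk retrievals from spamming the channel while still letting people click through and verify.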

Impact: New team members get up to speed 3x faster because they can ask questions and get answers from our collective knowledge instead of interrupting senior engineers.

5. v0 by Vercel for UI Prototyping

When I need to quickly prototype a UI component or page layout, v0 is remarkably good. I describe what I want in natural language, and it generates React + Tailwind code that's usually 80% of the way there.

Where it excels:

  • Dashboard layouts
  • Form designs with validation states
  • Card grids and list views
  • Landing page sections

Where it falls short:

  • Complex interactive components (drag-and-drop, real-time updates)
  • Anything requiring specific state management patterns
  • Accessibility nuances

6. Perplexity for Technical Research

When I need to understand a new technology, debug an obscure error, or compare approaches, Perplexity has replaced my Google-then-read-5-articles workflow.

Example queries that save me time:

  • "What are the trade-offs between Qdrant and Pinecone for production RAG pipelines?"
  • "How does FastAPI handle connection pooling with SQLAlchemy async?"
  • "What's the recommended approach for LangGraph state persistence in distributed systems?"

The key advantage: it cites sources inline, so I can verify claims and go deeper when needed.

Tools I Tried and Dropped

Not everything sticks. Here's what I stopped using and why:

  • Tabnine: Copilot is simply better for my use cases
  • Amazon CodeWhisperer: Good for AWS-specific code, but I don't write enough of it
  • Bard/Gemini for coding: Inconsistent quality, especially for Python backend work
  • Auto-GPT: Cool demo, not useful for real work yet

My Actual Productivity Impact

After tracking my output for three months with and without AI tools:

  • Code writing speed: ~35% faster (mostly from Copilot reducing boilerplate)
  • Bug discovery: ~20% more bugs caught before PR review (Claude code reviews)
  • Architecture quality: Hard to quantify, but my designs have fewer revision cycles
  • Documentation: 50% less time spent, and the output is often better than what I'd write manually

The Meta-Lesson

The engineers who benefit most from AI tools aren't the ones who use the most tools — they're the ones who deeply integrate 3-4 tools into their existing workflow. Don't chase every new release. Pick tools that solve your specific pain points, learn them well, and let the hype cycle play out for everything else.


What AI tools have stuck in your workflow? I'm always looking for recommendations, especially for DevOps and infrastructure work where I feel the tooling is still immature.