May 12, 2025 · 8 min read

Building AI Agents in Laravel: A Deep Dive into the Prism Package

Laravel · AI Agents · PHP · Prism · LLM Integration · Backend


For years, building AI-powered features in Laravel meant either calling OpenAI's API with raw HTTP requests or shelling out to Python microservices. That's changed dramatically. The Prism package (by EchoLabs) brings first-class LLM integration to Laravel with an elegant, Laravel-native API.

After shipping three production features with Prism, I want to share what I've learned — from basic text generation to building full AI agents with tool use and multi-step reasoning.

Why Build AI Agents in Laravel?

If your application is already in Laravel, adding a Python sidecar service for AI means:

  • Another deployment to manage
  • Cross-service authentication
  • Network latency for every AI call
  • Two codebases to maintain

With Prism, your AI logic lives alongside your business logic. Your agents can directly access Eloquent models, query the database, dispatch jobs, and use your existing service classes. No API boundaries, no serialization overhead.

Getting Started with Prism

composer require echolabs/prism
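
Before the first call, Prism needs provider credentials. With a default install these are read from your environment; the exact variable names below are typical but may differ by Prism version, so check the published config file after installing:

```
# .env
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
```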

Basic text generation — the "hello world":

use EchoLabs\Prism\Prism;
 
$response = Prism::text()
    ->using('anthropic', 'claude-sonnet-4-5-20241022')
    ->withPrompt('Explain dependency injection in one paragraph.')
    ->asText();
 
echo $response->text;

Simple enough. But the real power comes from structured output and tool use.

Structured Output with Prism

One of the biggest headaches with LLMs is parsing their output. Prism solves this with structured output that maps directly to PHP objects:

use EchoLabs\Prism\Prism;
 
class ProductAnalysis
{
    public function __construct(
        public string $sentiment,
        public float $confidence,
        public array $keyThemes,
        public string $summary,
        public ?string $actionRequired,
    ) {}
}
 
$analysis = Prism::structured()
    ->using('openai', 'gpt-4o')
    ->withStructuredOutput(ProductAnalysis::class)
    ->withPrompt("Analyze this customer review: {$review->content}")
    ->asStructured();
 
// $analysis->object is a ProductAnalysis instance
$sentiment = $analysis->object->sentiment; // "positive"
$confidence = $analysis->object->confidence; // 0.92

This is a game-changer. No more regex parsing, no more "please respond in JSON format" prayers. The output is validated and typed.
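
Under the hood, structured output boils down to validating the model's JSON against a schema and hydrating a typed object. A minimal hand-rolled equivalent, independent of Prism and shown only to make the idea concrete:

```php
class ProductAnalysis
{
    public function __construct(
        public string $sentiment,
        public float $confidence,
        public array $keyThemes,
        public string $summary,
        public ?string $actionRequired,
    ) {}

    /** Validate decoded JSON and hydrate a typed instance. */
    public static function fromJson(string $json): self
    {
        $data = json_decode($json, true, flags: JSON_THROW_ON_ERROR);

        foreach (['sentiment', 'confidence', 'keyThemes', 'summary'] as $key) {
            if (!array_key_exists($key, $data)) {
                throw new InvalidArgumentException("Missing key: {$key}");
            }
        }

        return new self(
            sentiment: (string) $data['sentiment'],
            confidence: (float) $data['confidence'],
            keyThemes: (array) $data['keyThemes'],
            summary: (string) $data['summary'],
            actionRequired: $data['actionRequired'] ?? null,
        );
    }
}

$analysis = ProductAnalysis::fromJson(
    '{"sentiment":"positive","confidence":0.92,"keyThemes":["quality"],"summary":"Happy customer."}'
);
echo $analysis->sentiment; // positive
```

Prism does this mapping (plus retries on invalid output) for you; the sketch just shows why a typed result beats parsing free-form text.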

Building a Real AI Agent with Tool Use

Here's where it gets exciting. Prism supports tool use — you define tools that the LLM can call, and it decides when and how to use them.

Example: An intelligent customer support agent

use EchoLabs\Prism\Prism;
use EchoLabs\Prism\Tool;
 
// Define tools the agent can use
$lookupOrderTool = Tool::as('lookup_order')
    ->for('Look up a customer order by order number')
    ->withStringParameter('order_number', 'The order number to look up')
    ->using(function (string $order_number): string {
        $order = Order::where('number', $order_number)
            ->with(['items', 'shipment', 'customer'])
            ->first();
 
        if (!$order) {
            return "Order {$order_number} not found.";
        }
 
        return json_encode([
            'number' => $order->number,
            'status' => $order->status,
            'items' => $order->items->map->only(['name', 'quantity', 'price']),
            'shipped_at' => $order->shipment?->shipped_at,
            'tracking' => $order->shipment?->tracking_number,
            'total' => $order->total,
        ]);
    });
 
$initiateRefundTool = Tool::as('initiate_refund')
    ->for('Initiate a refund for an order. Only use when customer explicitly requests.')
    ->withStringParameter('order_number', 'The order number to refund')
    ->withStringParameter('reason', 'The reason for the refund')
    ->using(function (string $order_number, string $reason): string {
        $order = Order::where('number', $order_number)->first();
 
        if (!$order || !$order->isRefundable()) {
            return "Order {$order_number} is not eligible for refund.";
        }
 
        $refund = RefundService::initiate($order, $reason);
 
        return "Refund initiated. Refund ID: {$refund->id}. " .
               "Amount: \${$refund->amount}. " .
               "Expected processing time: 3-5 business days.";
    });
 
$checkInventoryTool = Tool::as('check_inventory')
    ->for('Check if a product is currently in stock')
    ->withStringParameter('product_name', 'The product name to check')
    ->using(function (string $product_name): string {
        $products = Product::where('name', 'LIKE', "%{$product_name}%")
            ->select(['name', 'sku', 'stock_quantity', 'price'])
            ->limit(5)
            ->get();
 
        if ($products->isEmpty()) {
            return "No products found matching '{$product_name}'.";
        }
 
        return $products->map(function ($p) {
            $status = $p->stock_quantity > 0 ? "In Stock ({$p->stock_quantity})" : "Out of Stock";
            return "{$p->name} (SKU: {$p->sku}) - {$status} - \${$p->price}";
        })->join("\n");
    });
 
// Create the agent
$response = Prism::text()
    ->using('anthropic', 'claude-sonnet-4-5-20241022')
    ->withSystemPrompt(<<<PROMPT
        You are a helpful customer support agent for an e-commerce store.
        You can look up orders, check inventory, and initiate refunds.
 
        Guidelines:
        - Always verify the order exists before taking action
        - Only initiate refunds when the customer explicitly asks
        - Be empathetic and professional
        - If you can't resolve the issue, explain what the customer should do next
    PROMPT)
    ->withTools([$lookupOrderTool, $initiateRefundTool, $checkInventoryTool])
    ->withMaxSteps(5)  // Allow up to 5 tool calls per request
    ->withPrompt($customerMessage)
    ->asText();

The withMaxSteps(5) call is crucial — it allows the agent to call multiple tools in sequence. The agent might:

  1. Look up the order
  2. Check inventory for a replacement
  3. Initiate a refund
  4. Respond with a summary

All in a single request, with the LLM deciding the sequence based on the conversation.
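
Conceptually, withMaxSteps wraps the provider call in a loop: send the conversation, execute any tool the model requests, feed the result back, and stop when the model answers in plain text or the step budget runs out. A stripped-down sketch of that loop, with a scripted stub standing in for the real model round-trip:

```php
/** Run a tool-calling loop: ask the model, execute requested tools, repeat. */
function runAgent(callable $model, array $tools, string $prompt, int $maxSteps = 5): string
{
    $messages = [['role' => 'user', 'content' => $prompt]];

    for ($step = 0; $step < $maxSteps; $step++) {
        $reply = $model($messages);          // one provider round-trip

        if (!isset($reply['tool'])) {
            return $reply['text'];           // plain answer: we're done
        }

        // Execute the requested tool and feed the result back.
        $result = ($tools[$reply['tool']])($reply['args']);
        $messages[] = ['role' => 'assistant', 'content' => "call {$reply['tool']}"];
        $messages[] = ['role' => 'tool', 'content' => $result];
    }

    return 'Step budget exhausted.';
}

// Demo with a scripted "model": first requests a tool, then answers.
$tools = ['lookup_order' => fn (array $args) => "Order {$args['number']}: shipped"];

$script = [
    ['tool' => 'lookup_order', 'args' => ['number' => 'ORD-123']],
    ['text' => 'Your order ORD-123 has shipped.'],
];
$model = function (array $messages) use (&$script) {
    return array_shift($script);
};

echo runAgent($model, $tools, 'Where is ORD-123?');
// Your order ORD-123 has shipped.
```

Prism's actual message and tool-call types are richer than these arrays; the point is only that the step limit bounds how many times this loop can go around.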

Integrating Agents into Laravel Controllers

Here's how I structure AI agent endpoints in a real Laravel application:

class SupportChatController extends Controller
{
    public function __construct(
        private SupportAgentService $agentService,
        private ConversationRepository $conversations,
    ) {}
 
    public function message(MessageRequest $request): JsonResponse
    {
        $conversation = $this->conversations->findOrCreate(
            userId: $request->user()->id,
            sessionId: $request->input('session_id'),
        );
 
        // Add the user message to conversation history
        $conversation->addMessage('user', $request->input('message'));
 
        // Run the agent with full conversation context
        $response = $this->agentService->respond(
            conversation: $conversation,
            message: $request->input('message'),
        );
 
        // Save the assistant response
        $conversation->addMessage('assistant', $response->text);
 
        // Log token usage for cost tracking
        TokenUsageLog::create([
            'conversation_id' => $conversation->id,
            'input_tokens' => $response->usage->inputTokens,
            'output_tokens' => $response->usage->outputTokens,
            'model' => 'claude-sonnet-4-5-20241022',
            'cost_cents' => $this->calculateCost($response->usage),
        ]);
 
        return response()->json([
            'message' => $response->text,
            'session_id' => $conversation->session_id,
        ]);
    }
}
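
The calculateCost helper referenced above converts token usage into an integer number of cents. A self-contained sketch, taking raw token counts rather than the usage object; the per-million-token rates are illustrative placeholders, not real pricing, so check your provider's current rate card:

```php
/**
 * Convert token usage to an integer cost in cents.
 * Rates are illustrative assumptions (cents per 1M tokens).
 */
function calculateCost(int $inputTokens, int $outputTokens): int
{
    $inputPerMillion = 300;   // placeholder input rate
    $outputPerMillion = 1500; // placeholder output rate

    $cents = ($inputTokens / 1_000_000) * $inputPerMillion
           + ($outputTokens / 1_000_000) * $outputPerMillion;

    // Round up so spend is never under-counted.
    return (int) ceil($cents);
}

echo calculateCost(12_000, 800); // 3.6c input + 1.2c output → 5
```

Storing cost as integer cents (rather than a float of dollars) keeps the log table safe to sum without rounding drift.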

And the service class:

class SupportAgentService
{
    private array $tools;
 
    public function __construct(
        private OrderService $orders,
        private RefundService $refunds,
    ) {
        $this->tools = $this->registerTools();
    }
 
    public function respond(Conversation $conversation, string $message): TextResult
    {
        $messages = $conversation->messages->map(function ($msg) {
            return new Message($msg->role, $msg->content);
        })->toArray();
 
        return Prism::text()
            ->using('anthropic', 'claude-sonnet-4-5-20241022')
            ->withSystemPrompt($this->getSystemPrompt())
            ->withMessages($messages)
            ->withTools($this->tools)
            ->withMaxSteps(5)
            ->withPrompt($message)
            ->asText();
    }
 
    private function registerTools(): array
    {
        return [
            $this->createOrderLookupTool(),
            $this->createRefundTool(),
            $this->createInventoryTool(),
        ];
    }
}

RAG in Laravel with Prism

You can build RAG (Retrieval-Augmented Generation) pipelines entirely in Laravel:

class KnowledgeBaseService
{
    public function query(string $question): string
    {
        // 1. Generate embedding for the question
        $embedding = Prism::embeddings()
            ->using('openai', 'text-embedding-3-small')
            ->fromInput($question)
            ->asEmbeddings();
 
        // 2. Search vector database for relevant documents
        $relevantDocs = $this->vectorSearch(
            $embedding->embeddings[0]->embedding,
            limit: 5,
        );
 
        // 3. Generate answer with context
        $context = collect($relevantDocs)
            ->map(fn ($doc) => "Source: {$doc->title}\n{$doc->content}")
            ->join("\n\n---\n\n");
 
        $response = Prism::text()
            ->using('anthropic', 'claude-sonnet-4-5-20241022')
            ->withSystemPrompt(<<<PROMPT
                Answer the question based on the provided context.
                If the context doesn't contain the answer, say so.
                Always cite which source document you used.
            PROMPT)
            ->withPrompt("Context:\n{$context}\n\nQuestion: {$question}")
            ->asText();
 
        return $response->text;
    }
 
    private function vectorSearch(array $embedding, int $limit): Collection
    {
        // Using pgvector extension with Eloquent
        return Document::query()
            ->selectRaw("*, embedding <=> ? AS distance", [
                json_encode($embedding)
            ])
            ->orderBy('distance')
            ->limit($limit)
            ->get();
    }
}
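
The <=> in the query above is pgvector's cosine-distance operator. If you want to sanity-check results, or run without pgvector in local development, the same metric is easy to compute in plain PHP:

```php
/** Cosine distance (1 - cosine similarity), the metric behind pgvector's <=>. */
function cosineDistance(array $a, array $b): float
{
    $dot = $normA = $normB = 0.0;

    foreach ($a as $i => $v) {
        $dot   += $v * $b[$i];
        $normA += $v * $v;
        $normB += $b[$i] * $b[$i];
    }

    return 1.0 - $dot / (sqrt($normA) * sqrt($normB));
}

// Identical vectors → distance 0; orthogonal vectors → distance 1.
echo cosineDistance([1, 0], [1, 0]); // 0
echo cosineDistance([1, 0], [0, 1]); // 1
```

Computing this in PHP over thousands of documents is far slower than an indexed pgvector query, so treat it as a dev-mode fallback, not a production path.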

Error Handling & Resilience

Production AI features need robust error handling:

class ResilientAgentService
{
    public function respond(string $message): string
    {
        try {
            return $this->primaryAgent($message);
        } catch (RateLimitException $e) {
            // Fall back to a cheaper model
            Log::warning('Primary model rate limited, falling back', [
                'retry_after' => $e->retryAfter,
            ]);
            return $this->fallbackAgent($message);
        } catch (TimeoutException $e) {
            Log::error('AI agent timeout', ['message' => $message]);
            return "I'm experiencing delays. Let me connect you with a team member.";
        } catch (\Throwable $e) {
            Log::error('AI agent failed', [
                'error' => $e->getMessage(),
                'message' => $message,
            ]);
            return "I'm having trouble processing your request. " .
                   "Please try again or contact support@example.com.";
        }
    }
 
    private function primaryAgent(string $message): string
    {
        return Cache::remember(
            "agent_response:" . md5($message),
            now()->addMinutes(30),
            fn () => Prism::text()
                ->using('anthropic', 'claude-sonnet-4-5-20241022')
                ->withPrompt($message)
                ->asText()
                ->text
        );
    }
 
    private function fallbackAgent(string $message): string
    {
        return Prism::text()
            ->using('openai', 'gpt-4o-mini')
            ->withPrompt($message)
            ->asText()
            ->text;
    }
}
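
Beyond falling back to a cheaper model, transient provider errors (rate limits, timeouts) are usually worth a retry or two with backoff before giving up. Laravel ships a retry() helper that does essentially this; a standalone version is shown so the mechanics are visible:

```php
/** Retry a callable with exponential backoff on any throwable. */
function retryWithBackoff(callable $fn, int $attempts = 3, int $baseDelayMs = 200): mixed
{
    for ($i = 0; $i < $attempts; $i++) {
        try {
            return $fn();
        } catch (\Throwable $e) {
            if ($i === $attempts - 1) {
                throw $e; // out of attempts: surface the error
            }
            usleep($baseDelayMs * (2 ** $i) * 1000); // 200ms, 400ms, ...
        }
    }
}

// Demo: fails twice, then succeeds on the third attempt.
$calls = 0;
$result = retryWithBackoff(function () use (&$calls) {
    if (++$calls < 3) {
        throw new RuntimeException('rate limited');
    }
    return 'ok';
}, attempts: 3, baseDelayMs: 1);

echo $result; // ok
```

In production you'd typically retry only the exception types worth retrying (rate limits, timeouts) and honor the provider's retry-after hint instead of a fixed base delay.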

Testing AI Agents in Laravel

Testing AI features is tricky. Here's my approach:

class SupportAgentTest extends TestCase
{
    public function test_agent_looks_up_order_when_asked(): void
    {
        // Create test data
        $order = Order::factory()->create(['number' => 'ORD-123']);
 
        // Mock Prism to avoid actual API calls in tests
        Prism::fake([
            new TextResult(
                text: "I found your order ORD-123. It's currently being shipped.",
                toolCalls: [
                    new ToolCall('lookup_order', ['order_number' => 'ORD-123']),
                ],
            ),
        ]);
 
        $service = app(SupportAgentService::class);
        $response = $service->respond(
            conversation: Conversation::factory()->create(),
            message: "Where is my order ORD-123?",
        );
 
        $this->assertStringContainsString('ORD-123', $response->text);

 
        // Verify the tool was called
        Prism::assertToolCalled('lookup_order', function ($args) {
            return $args['order_number'] === 'ORD-123';
        });
    }
}

Performance Tips

  1. Cache aggressively: If the same question comes in repeatedly, cache the response
  2. Use streaming for chat interfaces: Prism supports streaming — show tokens as they arrive
  3. Queue long-running agents: For complex multi-step agents, dispatch to a queue and use WebSockets to stream results
  4. Choose the right model: Claude Haiku for classification, Sonnet for generation, Opus only when quality demands it
  5. Batch embeddings: When indexing documents, batch embedding requests (Prism supports this)
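
For tip 5, batching usually just means chunking your documents and sending each chunk as one embeddings request instead of one call per document. The chunking itself is plain PHP; the batch size of 100 is an arbitrary choice here, since providers cap batch sizes differently:

```php
/** Split documents into fixed-size batches for embedding requests. */
function batchDocuments(array $documents, int $batchSize = 100): array
{
    return array_chunk($documents, $batchSize);
}

$docs = range(1, 250);           // stand-in for 250 document strings
$batches = batchDocuments($docs, 100);

echo count($batches);    // 3
echo count($batches[2]); // 50
```

Each batch then becomes a single API call, turning 250 round-trips into 3.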

Conclusion

Laravel + Prism is a legitimate alternative to Python-based AI stacks for many use cases. If your application is already in Laravel and you need to add AI features — chatbots, content generation, data analysis, intelligent routing — you can do it without adding a Python service.

The key advantages: your AI logic shares the same models, services, and database access as your business logic. No serialization boundaries, no cross-service debugging, no extra deployments.

The key limitation: for complex multi-agent orchestration with state machines (like what LangGraph provides), you'll still want a dedicated Python service. But for 80% of AI features, Prism in Laravel is more than enough.


Got questions about building AI features in Laravel? I'm happy to dive deeper into any of these patterns — reach out via the contact section.