Performance Optimization

Database Optimization Techniques for Laravel 12 Applications

Master database optimization in Laravel 12. Learn indexing, query optimization, connection pooling, and advanced techniques to boost database performance and scalability.

Muhammad Waqas
CEO at CentoSquare
03-Nov-2025 | 10 min read

Introduction

Database performance is often the primary bottleneck in Laravel applications. A slow database query can cascade into poor user experience, increased server load, and higher infrastructure costs. As your application scales, inefficient database operations become exponentially more problematic.

Whether you're building a high-traffic e-commerce platform, a SaaS application, or a content management system, optimizing database operations is crucial for maintaining fast response times and handling concurrent users effectively.

In this guide, we'll explore advanced database optimization techniques for Laravel 12, covering indexing strategies, query optimization, connection management, and monitoring tools. These proven techniques will help you reduce query execution time, minimize database load, and scale your application efficiently.


Understanding Database Bottlenecks

Common Performance Issues

Laravel applications typically face these database challenges:

  • Slow Query Execution – Poorly optimized queries taking seconds to complete
  • High CPU Usage – Missing indexes causing full table scans
  • Connection Pool Exhaustion – Too many simultaneous connections
  • Lock Contention – Transactions blocking each other
  • Memory Overflow – Loading large datasets into memory
  • Inefficient Joins – Complex queries without proper optimization

Measuring Performance

Before optimizing, measure current performance:

use Illuminate\Support\Facades\DB;

DB::enableQueryLog();

// Your code here

$queries = DB::getQueryLog();
foreach ($queries as $query) {
    dump($query['query'], $query['time']);
}
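
To see how the database actually executes a specific query, you can also call explain() on a query builder instance (available since Laravel 8.12) and check for full table scans or unused indexes:

// Dump the EXPLAIN plan for a single query
$plan = User::where('status', 'active')
    ->orderBy('created_at', 'desc')
    ->explain();

dump($plan); // Watch for "type: ALL" (full table scan) or an empty "key"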

Database Indexing Strategies

1. Understanding Index Types

Indexes dramatically improve query performance by creating data structures for fast lookups:

// Single column index
Schema::table('posts', function (Blueprint $table) {
    $table->index('user_id');
    $table->index('status');
    $table->index('published_at');
});

// Composite index (order matters!)
Schema::table('posts', function (Blueprint $table) {
    $table->index(['status', 'published_at', 'user_id']);
});

// Unique index
Schema::table('users', function (Blueprint $table) {
    $table->unique('email');
    $table->unique(['team_id', 'username']);
});

// Full-text index
Schema::table('posts', function (Blueprint $table) {
    $table->fullText(['title', 'content']);
});

2. Indexing Foreign Keys

Always index foreign key columns:

Schema::create('comments', function (Blueprint $table) {
    $table->id();
    $table->foreignId('post_id')->constrained()->onDelete('cascade');
    $table->foreignId('user_id')->constrained()->onDelete('cascade');
    $table->text('content');
    $table->timestamps();
    
    // constrained() adds foreign key constraints, and MySQL automatically
    // indexes foreign key columns. Manual alternative:
    // $table->index('post_id');
    // $table->index('user_id');
});

3. Composite Index Strategy

Design composite indexes based on query patterns:

// Query: WHERE status = 'published' AND category_id = 5 ORDER BY created_at DESC
Schema::table('posts', function (Blueprint $table) {
    $table->index(['status', 'category_id', 'created_at']);
});

// Query: WHERE user_id = 10 AND status = 'active' AND expires_at > NOW()
Schema::table('subscriptions', function (Blueprint $table) {
    $table->index(['user_id', 'status', 'expires_at']);
});

Index Order Rules (applied in the sketch after this list):

  1. Equality conditions first (=, IN)
  2. Range conditions second (>, <, BETWEEN)
  3. Sort columns last (ORDER BY)
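
As a minimal sketch of these rules (assuming a posts table with status and published_at columns), the composite index below matches a query that filters on an equality, then a range, then sorts on the same column:

// Query: equality on status, range on published_at, sorted by published_at
$recentPosts = Post::where('status', 'published')              // 1. equality
    ->where('published_at', '>=', now()->subDays(30))          // 2. range
    ->orderBy('published_at', 'desc')                          // 3. sort
    ->get();

// Matching composite index
Schema::table('posts', function (Blueprint $table) {
    $table->index(['status', 'published_at']);
});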

4. Avoid Over-Indexing

Too many indexes slow down writes:

// ❌ Over-indexed table
Schema::table('posts', function (Blueprint $table) {
    $table->index('user_id');
    $table->index('category_id');
    $table->index('status');
    $table->index(['user_id', 'status']);
    $table->index(['category_id', 'status']);
    $table->index(['user_id', 'category_id', 'status']); // Redundant
});

// ✅ Optimal indexing
Schema::table('posts', function (Blueprint $table) {
    $table->index(['user_id', 'status']);
    $table->index(['category_id', 'status']);
    // Covers most query patterns efficiently
});
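
Before adding another index, it helps to audit what already exists. A small sketch, assuming Laravel 11+ where Schema::getIndexes() is available:

use Illuminate\Support\Facades\Schema;

// Inspect the indexes currently defined on the posts table
foreach (Schema::getIndexes('posts') as $index) {
    dump($index['name'], $index['columns'], $index['unique']);
}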

Query Optimization Techniques

1. Select Only Necessary Columns

Reduce data transfer and memory usage:

// ❌ Selects all columns (inefficient)
$users = User::where('status', 'active')->get();

// ✅ Select only needed columns
$users = User::where('status', 'active')
    ->select('id', 'name', 'email')
    ->get();

// ✅ With relationships
$posts = Post::with(['author:id,name', 'category:id,name'])
    ->select('id', 'title', 'user_id', 'category_id', 'created_at')
    ->get();

2. Optimize Eager Loading

Solve N+1 problems efficiently:

// ❌ N+1 Query Problem
$posts = Post::all(); // 1 query
foreach ($posts as $post) {
    echo $post->author->name; // N queries
    echo $post->comments->count(); // N queries
}

// ✅ Eager Loading
$posts = Post::with(['author', 'comments'])->get(); // 3 queries

// ✅ Conditional Eager Loading
$posts = Post::with(['comments' => function ($query) {
    $query->where('approved', true)
          ->orderBy('created_at', 'desc')
          ->limit(5);
}])->get();

// ✅ Count Relationships Without Loading
$posts = Post::withCount('comments')->get();
echo $posts[0]->comments_count; // No additional query

3. Use Chunking for Large Datasets

Process records in batches to avoid memory exhaustion:

// ❌ Loads all records into memory
User::where('status', 'inactive')->get()->each(function ($user) {
    $user->delete();
});

// ✅ Process in chunks of 1000
User::where('status', 'inactive')->chunk(1000, function ($users) {
    foreach ($users as $user) {
        $user->delete();
    }
});

// ✅ cursor() streams results as a lazy collection (best for read-only iteration)
User::where('status', 'inactive')->cursor()->each(function ($user) {
    // Process without loading all into memory
});

// ✅ Chunk by ID (safe with deletions)
User::where('status', 'inactive')->chunkById(1000, function ($users) {
    foreach ($users as $user) {
        $user->delete();
    }
});

4. Optimize WHERE Clauses

Use indexes effectively:

// ❌ Function on indexed column prevents index usage
User::whereRaw('YEAR(created_at) = ?', [2024])->get();

// ✅ Use range conditions
User::whereBetween('created_at', [
    '2024-01-01 00:00:00',
    '2024-12-31 23:59:59'
])->get();

// ❌ OR conditions can be slow
Post::where('status', 'published')
    ->orWhere('status', 'scheduled')
    ->get();

// ✅ Use IN clause
Post::whereIn('status', ['published', 'scheduled'])->get();

// ❌ LIKE with leading wildcard prevents index
User::where('email', 'LIKE', '%@gmail.com')->get();

// ✅ LIKE without leading wildcard uses index
User::where('email', 'LIKE', 'john%')->get();

5. Subquery Optimization

Use efficient subqueries:

// ❌ Multiple queries
$activeUsers = User::where('status', 'active')->pluck('id');
$posts = Post::whereIn('user_id', $activeUsers)->get();

// ✅ Single query with subquery
$posts = Post::whereIn('user_id', function ($query) {
    $query->select('id')
          ->from('users')
          ->where('status', 'active');
})->get();

// ✅ Using Query Builder subqueries
$latestPosts = Post::select('user_id', DB::raw('MAX(created_at) as max_created_at'))
    ->groupBy('user_id');

$users = User::joinSub($latestPosts, 'latest_posts', function ($join) {
    $join->on('users.id', '=', 'latest_posts.user_id');
})->get();

Advanced Query Techniques

1. Raw Queries for Complex Operations

Sometimes raw SQL is more efficient:

// Complex aggregation with grouping
$stats = DB::select("
    SELECT 
        DATE(created_at) as date,
        COUNT(*) as total_orders,
        SUM(total_amount) as revenue,
        AVG(total_amount) as avg_order_value
    FROM orders
    WHERE created_at >= DATE_SUB(NOW(), INTERVAL 30 DAY)
    GROUP BY DATE(created_at)
    ORDER BY date DESC
");

// Bulk upsert
DB::statement("
    INSERT INTO inventory (product_id, quantity, warehouse_id)
    VALUES (1, 100, 1), (2, 50, 1), (3, 75, 2)
    ON DUPLICATE KEY UPDATE 
        quantity = VALUES(quantity),
        updated_at = NOW()
");

2. Database Transactions

Use transactions for data consistency:

use Illuminate\Support\Facades\DB;

DB::transaction(function () {
    $order = Order::create([
        'user_id' => auth()->id(),
        'total' => 100.00,
    ]);

    foreach ($cartItems as $item) {
        OrderItem::create([
            'order_id' => $order->id,
            'product_id' => $item->product_id,
            'quantity' => $item->quantity,
        ]);

        // Decrement inventory
        Product::where('id', $item->product_id)
            ->decrement('stock', $item->quantity);
    }
});

// Manual transaction control
DB::beginTransaction();
try {
    // Your operations
    DB::commit();
} catch (\Exception $e) {
    DB::rollBack();
    throw $e;
}
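
When lock contention causes deadlocks, DB::transaction() accepts a number of attempts as its second argument and retries the callback if the database reports a deadlock:

// Retry up to 5 times on deadlock before giving up
DB::transaction(function () {
    // Same order-creation logic as above
}, attempts: 5);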

3. Query Caching

Cache expensive queries:

use Illuminate\Support\Facades\Cache;

// Cache for 1 hour
$popularPosts = Cache::remember('popular_posts', 3600, function () {
    return Post::withCount('views')
        ->orderBy('views_count', 'desc')
        ->limit(10)
        ->get();
});

// Cache with tags (Redis/Memcached only)
$userPosts = Cache::tags(['user:' . $userId, 'posts'])
    ->remember('user_posts:' . $userId, 3600, function () use ($userId) {
        return Post::where('user_id', $userId)->get();
    });

// Clear specific cache tags
Cache::tags(['posts'])->flush();

Connection Pool Optimization

1. Configure Connection Pooling

Optimize config/database.php:

'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
    'strict' => true,
    'engine' => 'InnoDB',
    // Filter out only null values so ATTR_EMULATE_PREPARES => false is kept
    'options' => extension_loaded('pdo_mysql') ? array_filter([
        PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
        PDO::ATTR_PERSISTENT => true, // Persistent connections reused across requests
        PDO::ATTR_TIMEOUT => 5,
        PDO::ATTR_EMULATE_PREPARES => false,
    ], fn ($value) => $value !== null) : [],
],

2. Read/Write Splitting

Configure separate read replicas:

'mysql' => [
    'read' => [
        'host' => [
            '192.168.1.1',
            '192.168.1.2',
        ],
    ],
    'write' => [
        'host' => [
            '192.168.1.3',
        ],
    ],
    'sticky' => true, // Read from write connection after write
    'driver' => 'mysql',
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
],

Force specific connections:

// Force read from replica
$users = DB::connection('mysql::read')->select('SELECT * FROM users');

// Force write to primary
DB::connection('mysql::write')->insert('INSERT INTO logs (message) VALUES (?)', ['test']);

Database Schema Optimization

1. Choose Appropriate Data Types

Use the smallest data type that fits your needs:

Schema::create('orders', function (Blueprint $table) {
    $table->id(); // BIGINT UNSIGNED
    $table->foreignId('user_id'); // BIGINT UNSIGNED
    
    // ❌ Using string for status
    // $table->string('status', 50);
    
    // ✅ Use ENUM for fixed values
    $table->enum('status', ['pending', 'processing', 'completed', 'cancelled'])
          ->default('pending');
    
    // ✅ Use DECIMAL for currency
    $table->decimal('total_amount', 10, 2);
    
    // ✅ Use TINYINT for boolean
    $table->boolean('is_paid')->default(false);
    
    // ✅ Use DATE instead of DATETIME when time isn't needed
    $table->date('delivery_date')->nullable();
    
    $table->timestamps();
    
    $table->index(['user_id', 'status']);
});

2. Normalize vs Denormalize

Balance normalization with performance:

// Denormalization for performance
Schema::create('posts', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('content');
    $table->foreignId('user_id');
    
    // Denormalized fields for faster queries
    $table->string('author_name'); // Copy from users.name
    $table->unsignedInteger('comments_count')->default(0);
    $table->unsignedInteger('views_count')->default(0);
    
    $table->timestamps();
    
    $table->index(['user_id', 'created_at']);
});

// Update denormalized data
class Post extends Model
{
    protected static function booted()
    {
        static::creating(function ($post) {
            $post->author_name = $post->author->name;
        });
    }
}
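
The counters also need to stay in sync. One possible sketch, assuming a Comment model with a post() relationship, is to adjust comments_count from the comment's model events:

class Comment extends Model
{
    protected static function booted()
    {
        static::created(function ($comment) {
            $comment->post()->increment('comments_count');
        });

        static::deleted(function ($comment) {
            $comment->post()->decrement('comments_count');
        });
    }
}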

3. Partitioning Large Tables

For MySQL 8.0+, use table partitioning:

-- Partition by range (monthly)
CREATE TABLE orders (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id BIGINT UNSIGNED NOT NULL,
    total_amount DECIMAL(10,2),
    created_at DATETIME NOT NULL, -- DATETIME: YEAR()/MONTH() aren't valid partition functions on TIMESTAMP columns
    PRIMARY KEY (id, created_at),
    KEY user_id (user_id)
)
PARTITION BY RANGE (YEAR(created_at) * 100 + MONTH(created_at)) (
    PARTITION p202401 VALUES LESS THAN (202402),
    PARTITION p202402 VALUES LESS THAN (202403),
    PARTITION p202403 VALUES LESS THAN (202404),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

Monitoring and Profiling

1. Laravel Telescope

Install and configure Telescope:

composer require laravel/telescope
php artisan telescope:install
php artisan migrate

Access at /telescope to see slow queries.
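
Telescope's query watcher also lets you define what counts as a slow query. In config/telescope.php (option names per recent Telescope versions), the threshold is set in milliseconds:

'watchers' => [
    \Laravel\Telescope\Watchers\QueryWatcher::class => [
        'enabled' => env('TELESCOPE_QUERY_WATCHER', true),
        'slow' => 100, // flag queries slower than 100 ms
    ],
    // ...
],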

2. Query Logging

Log slow queries in production:

// In AppServiceProvider
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

public function boot()
{
    DB::listen(function ($query) {
        if ($query->time > 1000) { // Log queries over 1 second
            Log::warning('Slow query detected', [
                'sql' => $query->sql,
                'bindings' => $query->bindings,
                'time' => $query->time,
            ]);
        }
    });
}

3. Database Monitoring Tools

Use external monitoring:

  • MySQL Workbench – Query performance analysis
  • pt-query-digest – Percona Toolkit for MySQL
  • pgAdmin – PostgreSQL monitoring
  • Laravel Pulse – First-party application performance monitoring

Best Practices Summary

  1. Always index foreign keys and frequently queried columns
  2. Use eager loading to prevent N+1 queries
  3. Select only necessary columns to reduce data transfer
  4. Chunk large datasets to avoid memory issues
  5. Cache expensive queries with appropriate TTL
  6. Use transactions for data consistency
  7. Monitor slow queries and optimize continuously
  8. Choose appropriate data types to save space
  9. Implement read replicas for high-traffic applications
  10. Regular maintenance: analyze tables, optimize indexes (see the sketch below)
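
For point 10, a minimal sketch of scheduled maintenance, assuming MySQL and the Laravel 11+ scheduler in routes/console.php (table names are illustrative):

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schedule;

// routes/console.php – refresh index statistics weekly so the optimizer picks good plans
Schedule::call(function () {
    DB::statement('ANALYZE TABLE posts, orders, comments');
})->weekly();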

Conclusion

Database optimization is crucial for building scalable Laravel applications. By implementing proper indexing, optimizing queries, using caching strategically, and monitoring performance, you can dramatically improve application speed and handle significantly more traffic.

Key takeaways:

  • Index foreign keys and frequently queried columns
  • Solve N+1 problems with eager loading
  • Use chunking for large datasets
  • Cache expensive queries with Redis
  • Monitor and profile queries regularly
  • Choose appropriate data types and schema design

Need help optimizing your Laravel database? NeedLaravelSite specializes in database optimization and performance tuning. Contact us for expert Laravel development services.


Article Tags

Laravel database optimization, Laravel 12 database performance, Laravel query optimization, Laravel indexing, Laravel Eloquent optimization, Laravel database tuning, MySQL optimization, PostgreSQL Laravel, Laravel connection pooling, Laravel database caching, Laravel slow queries, Laravel N+1 problem, Laravel eager loading, Laravel composite indexes, Laravel query profiling

About the Author

Muhammad Waqas

Founder & CEO at CentoSquare | Creator of NeedLaravelSite | Helping Businesses Grow with Cutting-Edge Web, Mobile & Marketing Solutions | Building Innovative Products