Performance and Optimization in Geode

Performance optimization is essential for production graph database deployments, especially as data volumes grow and query complexity increases. Geode achieves exceptional performance through a combination of modern systems programming (Zig), advanced networking (QUIC protocol), sophisticated query planning, and comprehensive indexing strategies.

Introduction to Database Performance

Database performance encompasses multiple dimensions:

  • Query Latency: Time to execute individual queries (milliseconds to seconds)
  • Throughput: Queries per second (QPS) the system can sustain
  • Concurrency: Number of simultaneous connections and transactions
  • Scalability: Performance characteristics as data volume grows
  • Resource Efficiency: CPU, memory, disk I/O, and network utilization
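These dimensions fall out directly from raw per-query timings. A minimal sketch in Python (the percentile choices and the single-threaded QPS definition are illustrative, not Geode-specific):

```python
import statistics

def summarize_latencies(timings_ms):
    """Summarize per-query latencies (in ms) into the dimensions above."""
    ordered = sorted(timings_ms)
    p50 = ordered[len(ordered) // 2]                                # median latency
    p99 = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]  # tail latency
    total_s = sum(timings_ms) / 1000.0
    return {
        "p50_ms": p50,
        "p99_ms": p99,
        "mean_ms": statistics.mean(timings_ms),
        "qps": len(timings_ms) / total_s,  # serial throughput; concurrency raises this
    }
```

Tail latency (p99) often matters more than the mean: one slow traversal can dominate user-perceived performance.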

Unlike traditional relational databases optimized for tabular data, graph databases must efficiently traverse relationships, often following unpredictable access patterns. Geode addresses these challenges through careful architectural choices and provides comprehensive tools for monitoring and optimization.

Geode’s Performance Architecture

Zero-Cost Abstractions with Zig

Geode is implemented in Zig, a modern systems programming language that provides:

Memory Control: Direct control over allocation with no garbage collection pauses

// Predictable memory allocation
var arena = std.heap.ArenaAllocator.init(allocator);
defer arena.deinit();

const nodes = try arena.allocator().alloc(Node, count);

Compile-Time Optimization: Generic code specialized at compile time without runtime overhead

Zero-Cost Error Handling: Error handling compiled away in success paths

SIMD Support: Vectorized operations for bulk processing

These features enable Geode to achieve performance comparable to C/C++ implementations while maintaining memory safety.

QUIC Protocol Networking

Geode uses QUIC (standardized in RFC 9000; originally an acronym for "Quick UDP Internet Connections") instead of traditional TCP:

Benefits over TCP:

  • 0-RTT Connection Resumption: Resumed connections can send application data in the first packet, skipping the handshake round trip
  • Multiplexing Without Head-of-Line Blocking: Multiple queries over single connection
  • Built-in TLS 1.3: Security without performance penalty
  • Better Loss Recovery: Faster recovery from packet loss

Performance Impact:

  • Lower latency for short queries (workload dependent)
  • Higher throughput under packet loss (workload dependent)
  • Connection pooling effectiveness improved

Query Planning and Optimization

Geode’s query planner analyzes GQL queries and generates optimized execution plans:

-- Complex query analyzed by planner
MATCH (u:User {country: 'USA'})-[:PURCHASED]->(p:Product)
WHERE p.price > 100
RETURN u.name, count(p) AS purchase_count
ORDER BY purchase_count DESC
LIMIT 10;

Planner optimizations:

  1. Index Selection: Chooses best index for filtering
  2. Join Ordering: Orders traversals to minimize intermediate results
  3. Predicate Pushdown: Applies filters as early as possible
  4. Cardinality Estimation: Uses statistics to estimate result sizes
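Index selection (step 1) is essentially a selectivity comparison. A toy version, assuming the planner keeps per-property match-count statistics (the statistics shape here is hypothetical, not Geode's actual cost model):

```python
def pick_index(predicates, index_stats):
    """Choose the most selective index for a set of equality predicates.
    index_stats maps indexed property -> estimated matching rows."""
    candidates = [(index_stats[p], p) for p in predicates if p in index_stats]
    if not candidates:
        return None  # no usable index: the planner falls back to a scan
    return min(candidates)[1]  # fewest estimated rows wins

best = pick_index(
    {"country", "email"},
    {"country": 50_000, "email": 1},  # email is unique, hence most selective
)
# best == "email"
```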

Performance Monitoring and Profiling

EXPLAIN: Query Plan Visualization

Use EXPLAIN to see how Geode will execute a query without running it:

EXPLAIN
MATCH (u:User)-[:FOLLOWS]->(friend)-[:LIKES]->(p:Post)
WHERE u.id = $user_id
RETURN p.title, count(friend) AS friend_likes
ORDER BY friend_likes DESC
LIMIT 10;

Output:

QueryPlan:
  1. IndexSeek: User.id = $user_id (cost: 1)
  2. Expand: (User)-[:FOLLOWS]->(friend) (est. 150 rows)
  3. Expand: (friend)-[:LIKES]->(Post) (est. 3000 rows)
  4. Aggregate: count(friend) GROUP BY p.title (est. 500 groups)
  5. Sort: friend_likes DESC (est. 500 rows)
  6. Limit: 10
Estimated Cost: 3152

Interpreting plans:

  • IndexSeek: Fast lookup using index (best case)
  • IndexScan: Scanning index range
  • NodeScan: Full table scan (avoid if possible)
  • Expand: Relationship traversal
  • Filter: Post-traversal filtering (push down if possible)
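These operator names can be checked mechanically on the client side. A small helper that flags the two patterns called out above (it assumes the textual plan format shown in the EXPLAIN example):

```python
def flag_plan_warnings(plan_text):
    """Scan an EXPLAIN-style plan for operators worth a second look."""
    warnings = []
    for line in plan_text.splitlines():
        if "NodeScan" in line:
            warnings.append("full scan: add an index or a more selective label/filter")
        if "Filter" in line:
            warnings.append("post-traversal filter: try pushing the predicate down")
    return warnings
```

Running this in CI against the plans of your hottest queries catches accidental index regressions early.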

PROFILE: Runtime Performance Analysis

PROFILE executes the query and provides detailed timing information:

PROFILE
MATCH (u:User {id: $user_id})-[:PURCHASED*1..3]->(p:Product)
RETURN p.name, count(*) AS purchase_paths;

Output:

Execution Profile:
  IndexSeek (User.id): 0.05ms, 1 row
  VariableLengthExpand: 24.3ms, 1,247 rows
    - Path length 1: 12 rows (0.8ms)
    - Path length 2: 234 rows (8.2ms)
    - Path length 3: 1,001 rows (15.3ms)
  Aggregate: 3.1ms, 87 groups
  Total: 27.45ms

Key metrics:

  • Time per operation: Identify bottlenecks
  • Row counts: Detect unexpected cardinality
  • Cache hit rates: Memory efficiency
  • Index usage: Verify indexes are used
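Given stage timings like those above, the bottleneck and its share of total time are one line each (the (name, milliseconds) pairs mirror the PROFILE example):

```python
def bottleneck(stages):
    """Return (stage_name, fraction_of_total_time) for the slowest stage."""
    total = sum(ms for _, ms in stages)
    name, ms = max(stages, key=lambda s: s[1])
    return name, ms / total

stage, share = bottleneck([
    ("IndexSeek", 0.05),
    ("VariableLengthExpand", 24.3),
    ("Aggregate", 3.1),
])
# stage == "VariableLengthExpand", share ~ 0.885 -> the traversal dominates
```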

Python Client Profiling

from geode_client import Client
import asyncio

async def profile_query():
    client = Client(host="localhost", port=3141)
    async with client.connection() as conn:
        result, _ = await conn.query("""
            PROFILE
            MATCH (p:Product)
            WHERE p.category = $cat
            RETURN p.name, p.price
        """, {"cat": "Electronics"})

        # Access profiling information
        print(f"Execution time: {result.profile.total_time_ms}ms")
        print(f"Rows returned: {result.profile.rows_returned}")

        for stage in result.profile.stages:
            print(f"{stage.name}: {stage.time_ms}ms ({stage.rows} rows)")

Performance Optimization Techniques

Indexing Strategies

B-Tree Indexes for Equality and Range Queries:

-- Create indexes on frequently filtered properties
CREATE INDEX user_email ON User(email);
CREATE INDEX product_price ON Product(price);
CREATE INDEX order_date ON Order(date);

-- Index usage
MATCH (u:User {email: $email})  -- Uses user_email index
RETURN u;

MATCH (p:Product)
WHERE p.price BETWEEN 100 AND 500  -- Uses product_price index
RETURN p;

Composite Indexes:

-- Index multiple properties together
CREATE INDEX product_category_price ON Product(category, price);

-- Efficient for queries filtering both
MATCH (p:Product {category: 'Electronics'})
WHERE p.price > 100
RETURN p;

Full-Text Search Indexes:

CREATE TEXT INDEX product_description ON Product(description)
WITH (analyzer = 'english', tokenizer = 'standard');

-- Fast text search
MATCH (p:Product)
WHERE text_search(p.description, 'wireless bluetooth speaker')
RETURN p.name, text_rank(p.description, 'wireless bluetooth speaker') AS relevance
ORDER BY relevance DESC;

Vector Indexes (HNSW):

CREATE VECTOR INDEX product_embeddings ON Product(embedding)
WITH (metric = 'cosine', dimensions = 768, m = 16);

-- Fast similarity search
MATCH (p:Product)
WITH p, vector_similarity(p.embedding, $query_vector, 'cosine') AS similarity
WHERE similarity > 0.7
RETURN p.name, similarity
ORDER BY similarity DESC
LIMIT 10;
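The cosine metric behind `vector_similarity` can be reproduced client-side, for example to sanity-check a similarity threshold against known embeddings before shipping the query:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Same direction -> 1.0; orthogonal -> 0.0; opposite -> -1.0
assert cosine_similarity([1.0, 0.0], [2.0, 0.0]) == 1.0
```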

Query Optimization Patterns

Pattern 1: Filter Before Traversal

-- BAD: Traverse then filter
MATCH (u:User)-[:PURCHASED]->(p:Product)
WHERE u.country = 'USA'
RETURN p.name;

-- GOOD: Filter then traverse
MATCH (u:User {country: 'USA'})-[:PURCHASED]->(p:Product)
RETURN p.name;

Pattern 2: Limit Early in Pipeline

-- BAD: Process all then limit
MATCH (u:User)-[:FRIEND]->(friend)-[:LIKES]->(p:Post)
RETURN p.title
ORDER BY p.created_at DESC
LIMIT 10;

-- GOOD: Limit intermediate results
MATCH (u:User)-[:FRIEND]->(friend)
WITH friend
LIMIT 100  -- Limit before expensive traversal
MATCH (friend)-[:LIKES]->(p:Post)
RETURN p.title
ORDER BY p.created_at DESC
LIMIT 10;

Pattern 3: Avoid Cartesian Products

-- BAD: Creates cartesian product
MATCH (u:User), (p:Product)
WHERE u.preference = p.category
RETURN u.name, p.name;

-- GOOD: Use relationship or index
MATCH (u:User)-[:INTERESTED_IN]->(cat:Category)<-[:IN_CATEGORY]-(p:Product)
RETURN u.name, p.name;

Pattern 4: Use Aggregation Efficiently

-- BAD: Collect all then count
MATCH (u:User)-[:PURCHASED]->(p:Product)
WITH u, collect(p) AS products
RETURN u.name, size(products) AS purchase_count;

-- GOOD: Count directly
MATCH (u:User)-[:PURCHASED]->(p:Product)
RETURN u.name, count(p) AS purchase_count;

Connection Pooling

Connection pooling improves client performance by reusing connections:

Python Client with Pooling:

from geode_client import Client, ConnectionPool

# Create connection pool
pool = ConnectionPool(
    host="localhost",
    port=3141,
    min_connections=5,
    max_connections=50,
    max_idle_time=300  # seconds
)

async def query_with_pool():
    # Reuses existing connection from pool
    async with pool.acquire() as client:
        result, _ = await client.query("MATCH (n:User) RETURN count(n)")
        return result.bindings[0]['count']

# Pool improves throughput; benchmark your workload
async def run_many():
    for _ in range(10000):
        await query_with_pool()

Go Client Connection Management:

import (
    "database/sql"
    "log"
    "time"

    _ "geodedb.com/geode" // registers the "geode" driver
)

// Connection pool configured automatically
db, err := sql.Open("geode", "quic://localhost:3141")
if err != nil {
    log.Fatal(err)
}

// Set pool parameters
db.SetMaxOpenConns(50)
db.SetMaxIdleConns(10)
db.SetConnMaxLifetime(time.Minute * 5)

// Queries automatically use pool
rows, err := db.Query("MATCH (u:User) RETURN u.name")

Batch Operations

Batch multiple operations to reduce round-trip latency:

def chunk(items, size):
    """Split a list into consecutive slices of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

async def insert_users_batch(users):
    client = Client(host="localhost", port=3141)
    async with client.connection() as conn:
        # Batch insert with transaction
        await conn.begin()
        try:
            for batch in chunk(users, 1000):
                await conn.execute("""
                    UNWIND $users AS user_data
                    CREATE (u:User {
                        id: user_data.id,
                        name: user_data.name,
                        email: user_data.email
                    })
                """, {"users": batch})
            await conn.commit()
        except Exception:
            await conn.rollback()
            raise

# Much faster than individual INSERTs
await insert_users_batch(user_list)  # e.g. 10,000 users in a few seconds (hardware dependent)

Performance Best Practices

Schema Design for Performance

  1. Denormalize Strategically: Store frequently accessed computed values
-- Store aggregate counts for fast access
CREATE (u:User {
    name: 'Alice',
    follower_count: 1523,  -- Precomputed
    following_count: 892
});
  2. Use Appropriate Relationship Direction:
-- Model relationships in query direction
-- If you frequently query "who follows X", use:
(follower:User)-[:FOLLOWS]->(followee:User)

-- Not:
(followee:User)-[:FOLLOWED_BY]->(follower:User)
  3. Partition Large Graphs: Use domain-specific nodes as hubs
-- Use year nodes to partition time-series data
(year:Year {year: 2024})-[:HAS_MONTH]->(month:Month {month: 3})
  -[:CONTAINS]->(event:Event {date: '2024-03-15'})
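Denormalized counters like `follower_count` only pay off if writes keep them in sync. A sketch of the write path, taking a connection shaped like the client examples in this article (begin/execute/commit); the GQL itself is illustrative:

```python
async def follow(conn, follower_id, followee_id):
    """Create a FOLLOWS edge and bump both precomputed counters in one
    transaction, so readers never see the edge and the counter disagree."""
    await conn.begin()
    try:
        await conn.execute("""
            MATCH (a:User {id: $follower}), (b:User {id: $followee})
            CREATE (a)-[:FOLLOWS]->(b)
            SET b.follower_count = b.follower_count + 1,
                a.following_count = a.following_count + 1
        """, {"follower": follower_id, "followee": followee_id})
        await conn.commit()
    except Exception:
        await conn.rollback()
        raise
```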

Monitoring and Metrics

Built-in Metrics (exposed via Prometheus):

# Query Geode metrics endpoint
curl http://localhost:9090/metrics

Key metrics to monitor:

  • geode_queries_total: Total queries executed
  • geode_query_duration_seconds: Query latency distribution
  • geode_connections_active: Active client connections
  • geode_cache_hits_total: Cache effectiveness
  • geode_index_seeks_total: Index usage frequency
  • geode_transactions_active: Concurrent transactions

Python monitoring integration:

from prometheus_client import start_http_server, Counter, Histogram

# Custom application metrics
query_counter = Counter('app_geode_queries', 'Geode queries executed')
query_duration = Histogram('app_geode_query_duration', 'Query execution time')

@query_duration.time()
async def execute_query(query, params):
    query_counter.inc()
    client = Client(host="localhost", port=3141)
    async with client.connection() as conn:
        return await conn.execute(query, params)

# Start metrics server
start_http_server(8000)

Capacity Planning

Estimate resource requirements:

Memory:

  • Base: nodes × 256 bytes + relationships × 128 bytes
  • Indexes: 20-50% overhead per indexed property
  • HNSW: roughly vectors × (dimensions × 4 bytes + per-vector link overhead proportional to m)
  • Working set: Keep hot data in RAM

Storage:

  • Graph data: nodes × 512 bytes + rels × 256 bytes
  • WAL: 10-20% of database size
  • Indexes: 30-100% of data size (varies by index type)

CPU:

  • 2-4 cores: Development and testing
  • 8-16 cores: Production workloads (benchmark dependent)
  • 32+ cores: High-throughput workloads (benchmark dependent)

Example calculation (1M nodes, 5M relationships):

  • Memory: ~0.9GB graph (1M × 256B + 5M × 128B) + index overhead ≈ 1.5-2GB minimum
  • Storage: ~1.8GB graph (1M × 512B + 5M × 256B) + indexes and WAL ≈ 3-4GB total
  • CPU: 8 cores for moderate workloads; benchmark for your workload
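The same arithmetic is easy to keep next to your deployment scripts (the constants are the rough per-record figures from this section, not guarantees):

```python
def estimate_resources(nodes, rels):
    """Rough graph-data sizing from the per-record estimates above (in GB)."""
    gb = 1_000_000_000
    memory_graph = (nodes * 256 + rels * 128) / gb   # in-memory representation
    storage_graph = (nodes * 512 + rels * 256) / gb  # on-disk representation
    return {
        # Indexes add 20-50% (memory) / 30-100% (storage); WAL adds 10-20%
        "memory_graph_gb": round(memory_graph, 2),
        "storage_graph_gb": round(storage_graph, 2),
    }

estimate = estimate_resources(1_000_000, 5_000_000)
# {'memory_graph_gb': 0.9, 'storage_graph_gb': 1.79}
```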

Troubleshooting Performance Issues

Slow Queries

Symptom: Queries taking seconds to complete

Diagnosis:

PROFILE [your slow query]

Common causes:

  1. Missing indexes → Add appropriate index
  2. Cartesian products → Restructure query with relationships
  3. Excessive intermediate results → Add LIMIT clauses
  4. Full table scans → Filter earlier in pipeline

High Memory Usage

Symptom: Server memory consumption growing over time

Diagnosis:

# Check Geode memory stats
curl http://localhost:9090/metrics | grep memory

Solutions:

  1. Tune cache size configuration
  2. Review index memory usage
  3. Close idle connections
  4. Implement result pagination
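For solution 4, keyset pagination keeps each page's working set bounded without the deep-offset cost of offset-based paging. A sketch against a connection shaped like the client examples above; the GQL and property names are illustrative:

```python
async def paginate_users(conn, page_size=1000):
    """Yield users in bounded pages, keyed on a unique, indexed property."""
    last_id = ""
    while True:
        result, _ = await conn.query("""
            MATCH (u:User)
            WHERE u.id > $last_id
            RETURN u.id AS id, u.name AS name
            ORDER BY u.id
            LIMIT $page_size
        """, {"last_id": last_id, "page_size": page_size})
        if not result.bindings:
            break
        for row in result.bindings:
            yield row
        last_id = result.bindings[-1]["id"]  # resume after the last row seen
```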

Connection Pool Exhaustion

Symptom: “Too many connections” errors

Solutions:

# Increase pool size
pool = ConnectionPool(
    max_connections=100,  # Increased from 50
    max_idle_time=600     # Keep connections longer
)

# Or bound how long acquire() waits for a free connection
async with pool.acquire(timeout=5.0) as client:
    result, _ = await client.query("MATCH (n:User) RETURN count(n)")

Further Reading

  • Geode Performance Guide: /docs/performance/overview/
  • Query Optimization Cookbook: /docs/performance/query-optimization/
  • Index Selection Guide: /docs/indexing/choosing-indexes/
  • Prometheus Integration: /docs/operations/monitoring-with-prometheus/
  • Capacity Planning: /docs/operations/capacity-planning/
