Gremlin Query Optimization Best Practices: Caching, Indexing, and Efficient Traversals
Unlock the full potential of the Gremlin Query Language by mastering proven optimization techniques such as caching, indexing, and traversal tuning. In data-intensive applications like fraud detection, recommendation systems, and social networks, query speed is mission-critical. Gremlin provides powerful features to streamline graph traversals and reduce execution overhead. Proper use of indexing significantly minimizes vertex and edge scans, improving response times. Smart caching strategies can eliminate redundant computation and enhance query reuse. By optimizing the order and structure of your traversals, you can dramatically boost performance and scalability. This guide dives deep into these core practices, with examples and tips to help you write efficient, production-ready Gremlin queries.
Table of contents
- Gremlin Query Optimization Best Practices: Caching, Indexing, and Efficient Traversals
- Introduction to Improving Graph Query Speed in the Gremlin Query Language
- Indexing Strategies in Gremlin for Faster Traversals
- Using Caching to Optimize Gremlin Queries
- Why Do We Need to Improve Graph Query Speed in the Gremlin Query Language?
- 1. Enhancing Real-Time Application Performance
- 2. Supporting Scalability for Large Graph Datasets
- 3. Reducing Infrastructure and Resource Costs
- 4. Improving Developer Productivity and Debugging
- 5. Enabling Complex Query Use Cases
- 6. Ensuring Reliability in Production Environments
- 7. Enhancing User Experience in Interactive Applications
- 8. Facilitating Efficient Integration with External Systems
- Example of Improving Graph Query Speed in the Gremlin Query Language
- Advantages of Optimizing Graph Query Performance in the Gremlin Query Language
- Disadvantages of Optimizing Graph Query Performance in the Gremlin Query Language
- Future Developments and Enhancements in Graph Query Performance for the Gremlin Query Language
- Conclusion
Introduction to Improving Graph Query Speed in the Gremlin Query Language
Performance is a key factor when working with complex graph databases. The Gremlin Query Language offers powerful traversal capabilities, but without optimization, queries can become slow and resource-intensive. In graph-driven applications like fraud detection, recommendation systems, and social analysis, query speed directly impacts user experience and scalability. Improving graph query speed in Gremlin requires smart techniques such as indexing, caching, and efficient traversal design. These methods help reduce traversal time, minimize memory consumption, and boost overall system throughput. Whether you’re using TinkerGraph, JanusGraph, or Amazon Neptune, these best practices apply. In this guide, you’ll explore how to optimize your Gremlin queries for maximum performance and reliability.
What is Graph Query Speed in the Gremlin Query Language?
Graph query speed in the Gremlin Query Language refers to how quickly a traversal executes over a graph dataset. It directly affects the responsiveness and scalability of graph-based applications. Factors like indexing, traversal depth, and filtering efficiency play a key role in query performance. Understanding and improving query speed is essential for building high-performance Gremlin solutions.
Indexing Strategies in Gremlin for Faster Traversals
Indexes can reduce the search space before the traversal begins. In Gremlin (especially with JanusGraph), use:
// Define a composite index on the 'name' property (best for exact-match lookups)
mgmt = graph.openManagement()
nameProp = mgmt.getPropertyKey('name')
mgmt.buildIndex('byName', Vertex.class).addKey(nameProp).buildCompositeIndex()
mgmt.commit()
// For range or full-text predicates, build a mixed index backed by a search engine instead:
// mgmt.buildIndex('byNameMixed', Vertex.class).addKey(nameProp).buildMixedIndex('search')
This allows faster lookups when executing:
g.V().has('name', 'Alice')
Proper indexing avoids full scans and drastically improves speed. Plan indexes based on common query patterns and retrieval needs.
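To see why this matters, here is a small, hypothetical Python sketch (plain dicts and lists, not JanusGraph internals) contrasting a full scan with an index lookup:

```python
# Hypothetical in-memory sketch of why an index helps: the data and the
# "index" here are plain Python structures, not JanusGraph internals.
vertices = [
    {"id": 1, "name": "Alice"},
    {"id": 2, "name": "Bob"},
    {"id": 3, "name": "Alice"},
]

# Full scan: every vertex is examined (what happens without an index).
def full_scan(vertices, name):
    return [v["id"] for v in vertices if v["name"] == name]

# Index lookup: a one-time pass builds a map, after which each query
# touches only the matching entries.
index = {}
for v in vertices:
    index.setdefault(v["name"], []).append(v["id"])

def indexed_lookup(index, name):
    return index.get(name, [])

print(full_scan(vertices, "Alice"))    # [1, 3]
print(indexed_lookup(index, "Alice"))  # [1, 3]
```

The one-time cost of building the map is paid back on every subsequent lookup, which is the same trade-off a graph index makes.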
Using Caching to Optimize Gremlin Queries
Caching minimizes repeated calculations by storing frequently accessed traversal results. While Gremlin doesn’t have built-in automatic caching like SQL, you can:
- Cache query results at the application layer using a tool like Redis.
- Memoize traversal steps in Gremlin server scripts.
- Avoid recomputation by restructuring traversals:
// Instead of executing the same traversal twice,
// run it once and cache the result in a variable:
def authors = g.V().has('type', 'author').toList()
authors.each { println it }
authors.each { doSomething(it) }
Caching is most effective when the underlying graph doesn’t change frequently.
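For the first option, a minimal application-layer cache might look like the sketch below. It is a stand-in for a real store such as Redis, and `run_query` is a hypothetical function that would submit the traversal to a Gremlin server:

```python
import time

# Minimal application-layer cache sketch. In production you would likely
# use Redis or a caching library; the dict-plus-TTL below only shows the
# pattern. `run_query` is a hypothetical stand-in for a real submission.
class QueryCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # query string -> (timestamp, result)

    def get_or_compute(self, query, run_query):
        now = time.time()
        hit = self._store.get(query)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]                # fresh cache hit
        result = run_query(query)        # miss: execute and store
        self._store[query] = (now, result)
        return result

calls = {"count": 0}
def run_query(q):  # stand-in for submitting a Gremlin traversal
    calls["count"] += 1
    return ["Alice", "Bob"]

cache = QueryCache(ttl_seconds=60)
first = cache.get_or_compute("g.V().has('type','author')", run_query)
second = cache.get_or_compute("g.V().has('type','author')", run_query)
print(calls["count"])  # 1: the second call was served from the cache
```

The TTL keeps cached results from outliving graph updates for too long; choose it based on how often your data changes.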
Efficient Traversal Design Patterns
Use traversal steps wisely to reduce unnecessary computation:
- Filter early: place has() and where() clauses as close to the start as possible.
- Avoid over-fetching: use limit() and range() strategically.
- Aggregate smartly: prefer groupCount() over collecting all results in memory.
g.V().has('type', 'user').out('follows').groupCount()
Avoid chaining many out() or in() steps unless truly necessary. Consider combining paths using union() or coalesce() when applicable.
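The effect of early filtering can be illustrated with a toy Python model. The adjacency structure and labels below are hypothetical; the point is how many vertices each strategy touches:

```python
# Toy illustration of why early filtering matters. This is a made-up
# adjacency structure, not a real Gremlin backend; we count how much
# work (edges traversed) each strategy performs.
labels = {1: "user", 2: "user", 3: "bot", 4: "user", 5: "bot", 6: "bot"}
follows = {1: [2, 3], 2: [4], 3: [5, 6], 4: [], 5: [], 6: []}

def filter_late():
    touched, results = 0, []
    for v in labels:                   # expand from every vertex...
        for nbr in follows[v]:
            touched += 1
            if labels[v] == "user":    # ...then filter afterwards
                results.append(nbr)
    return results, touched

def filter_early():
    touched, results = 0, []
    users = [v for v in labels if labels[v] == "user"]  # filter first
    for v in users:
        for nbr in follows[v]:
            touched += 1
            results.append(nbr)
    return results, touched

late_res, late_touched = filter_late()
early_res, early_touched = filter_early()
print(sorted(late_res) == sorted(early_res))  # True (same answer)
print(early_touched < late_touched)           # True (less work)
```

Both strategies return the same result set, but the early filter expands only the vertices that can possibly contribute, which is exactly what placing has() at the front of a Gremlin traversal achieves.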
Profiling Queries with .profile() Step
The .profile() step is a built-in way to inspect the execution breakdown of a traversal:
g.V().has('type','person').out('knows').profile()
This returns execution time per step, iteration count, and accessed indexes. Use it to:
- Identify the most time-consuming steps
- Verify if indexes are being used
- Track how deeply the traversal expands
Profiling is crucial for fine-tuning complex Gremlin queries.
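As a rough analogue, the sketch below wraps each stage of a plain Python pipeline and records per-step timing and item counts, which is the kind of breakdown .profile() reports (the step names and stages are only illustrative):

```python
import time

# Rough analogue of what .profile() reports: wrap each "step" of a
# pipeline and record how long it took and how many items it produced.
# The stages are hypothetical stand-ins for Gremlin traversal steps.
def profiled(name, fn, data, report):
    start = time.perf_counter()
    out = fn(data)
    report.append({
        "step": name,
        "count": len(out),
        "ms": (time.perf_counter() - start) * 1000,
    })
    return out

report = []
data = list(range(1000))
data = profiled("has('type','person')",
                lambda d: [x for x in d if x % 2 == 0], data, report)
data = profiled("out('knows')",
                lambda d: [x + 1 for x in d], data, report)

for row in report:
    print(row["step"], row["count"])
```

Reading the per-step counts this way makes it obvious where a traversal fans out: a step whose count balloons relative to its input is usually the one to filter or limit.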
Common Causes of Slow Gremlin Queries
- Lack of Indexes: Querying properties without indexes results in full graph scans.
- Redundant Traversals: Repeating the same traversal logic increases execution time.
- Unbounded Steps: outE(), in(), and similar steps without filtering can return huge datasets.
- Misused Filters: Applying filters late in the traversal chain leads to inefficient narrowing.
- Ordering Issues: Sorting before limiting results causes unnecessary memory usage.
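The last point can be demonstrated outside Gremlin: sorting an entire collection just to take the top k does strictly more work than a partial selection. This Python sketch (with synthetic data) mirrors the order-then-limit concern:

```python
import heapq
import random

# Sorting everything and slicing the top k holds a full sorted copy in
# memory; a partial selection tracks only the best k items. This mirrors
# the cost difference behind "sorting before limiting" in a traversal.
random.seed(42)
scores = [random.randint(0, 10_000) for _ in range(100_000)]
k = 5

full_sort = sorted(scores)[:k]        # sorts all 100,000 values
partial = heapq.nsmallest(k, scores)  # tracks only the smallest k

print(full_sort == partial)  # True: same answer, far less work
```

Graph engines that recognize an order().limit(k) pair can apply the same optimization internally, but writing the limit explicitly is what gives them the chance to.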
Best Practices for Writing Fast Gremlin Queries
- Use indexed fields in has() steps
- Filter early and limit result set size
- Profile regularly and log metrics
- Avoid overusing path() unless necessary
- Refactor long chains into modular, reusable traversals
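The last practice, refactoring long chains into modular pieces, can be sketched in plain Python. The stages below operate on iterables as stand-ins for traversal steps, and the helper names are hypothetical:

```python
# Sketch of "modular reusable traversals": rather than one long chain,
# factor shared stages into named helpers that can be composed. The
# stages operate on plain iterables as stand-ins for traversal steps;
# with gremlin-python the same idea applies to reusable sub-traversals.
def active_users(items):
    return (x for x in items if x["type"] == "user" and x["active"])

def cap(n):
    def step(items):
        for i, x in enumerate(items):
            if i >= n:
                break
            yield x
    return step

def run(source, *stages):
    for stage in stages:
        source = stage(source)
    return list(source)

data = [
    {"type": "user", "active": True,  "name": "Alice"},
    {"type": "bot",  "active": True,  "name": "Crawler"},
    {"type": "user", "active": False, "name": "Bob"},
    {"type": "user", "active": True,  "name": "Cara"},
]
result = run(data, active_users, cap(1))
print([x["name"] for x in result])  # ['Alice']
```

Named stages like these are easier to test, profile, and reuse across queries than one monolithic chain.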
Tools and Platforms Supporting Query Optimization
- JanusGraph: Strong support for composite and mixed indexes
- Amazon Neptune: Offers Gremlin explain plans and profiling tools
- TinkerGraph: Lightweight in-memory for testing performance strategies
- Gremlin Console: Ideal for real-time testing and .profile() analysis
Why Do We Need to Improve Graph Query Speed in the Gremlin Query Language?
Improving graph query speed in Gremlin is essential for delivering fast, scalable, and responsive applications. As graph datasets grow, unoptimized queries can lead to delays, resource exhaustion, and poor user experience.
1. Enhancing Real-Time Application Performance
In graph-driven applications like fraud detection, social analysis, and recommendation engines, speed is critical. Users expect near-instant results when exploring relationships or patterns. Slow queries degrade the experience and limit application interactivity. By improving query speed, systems can deliver real-time insights efficiently. This is especially important in high-throughput environments. A faster Gremlin query ensures smoother performance and quicker responses.
2. Supporting Scalability for Large Graph Datasets
As graphs grow into millions of vertices and edges, traversal time increases significantly. Unoptimized queries can become bottlenecks that strain CPU and memory resources. Enhancing query speed makes it possible to scale without sacrificing performance. This allows Gremlin to handle growing workloads effectively. Faster queries reduce the risk of timeouts in distributed systems. Ultimately, optimization ensures your graph system remains responsive as data increases.
3. Reducing Infrastructure and Resource Costs
Long-running queries consume more server time, memory, and compute power. This leads to higher cloud infrastructure costs and operational overhead. When queries are optimized, they execute faster and consume fewer resources. This translates to reduced hardware demand and lower hosting expenses. Improving query speed in Gremlin can therefore directly impact your budget. It also enhances the system’s sustainability and efficiency.
4. Improving Developer Productivity and Debugging
Slow queries make it harder to test and iterate during development. Developers spend more time waiting and less time building logic. By improving query speed, development cycles become more agile and test feedback is faster. It also simplifies debugging and performance profiling using tools like .profile(). Developers can isolate issues quickly and make informed optimization decisions. This improves overall productivity and collaboration.
5. Enabling Complex Query Use Cases
Gremlin is designed for expressive, multi-hop traversals across complex relationships. Without speed optimization, deeply nested queries can become impractical to execute. Improving query speed opens the door for advanced analytics, such as multi-step pattern matching or real-time ranking. These use cases are otherwise limited by performance concerns. A fast query engine empowers developers to fully leverage Gremlin’s expressive power.
6. Ensuring Reliability in Production Environments
In production, unpredictable query delays can disrupt services and frustrate users. Optimized queries run more predictably and avoid random spikes in execution time. This reliability is crucial for building trustworthy applications at scale. Better speed also means fewer timeouts and more consistent service-level agreements (SLAs). Improving query performance strengthens both reliability and user trust. It creates a stable foundation for mission-critical applications.
7. Enhancing User Experience in Interactive Applications
Applications that allow users to explore graphs interactively such as dashboards, analytics platforms, or visual graph browsers—rely on fast query execution. If traversals are slow, it leads to UI lags, page timeouts, or incomplete data visualization. Optimized Gremlin queries help maintain fluid interaction and seamless updates. This keeps users engaged and improves satisfaction. Fast feedback loops are essential for exploratory graph use cases. Enhancing query speed ensures intuitive and real-time interactivity.
8. Facilitating Efficient Integration with External Systems
Graph queries are often part of broader systems involving APIs, data pipelines, or streaming engines. When Gremlin queries are slow, they delay downstream processes and disrupt workflows. Improving query speed enables smooth integration with systems like Apache Kafka, REST APIs, or reporting engines. This helps maintain SLAs across services and supports event-driven architectures. High-performance queries make it easier to embed Gremlin into larger enterprise ecosystems. Faster response time ensures system-wide efficiency and data flow.
Example of Improving Graph Query Speed in the Gremlin Query Language
Optimizing graph queries in Gremlin can significantly reduce execution time and system load. In this example, we’ll demonstrate how to improve a slow traversal using indexing, filtering, and efficient step ordering.
1. Before Optimization – Basic Query (Slow)
g.V().has('userId', 'u123') // Find the user
.out('follows') // Get the people they follow
.out('created') // Get content they created
.has('type', 'post') // Filter for posts
.has('tag', 'AI') // Filter by tag
- No index on userId or tag fields, causing full graph scans.
- Traversal starts without restricting result size early.
- No deduplication, so results might be redundant.
- Does not leverage edge labels to reduce traversal cost.
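The deduplication gap noted above is what Gremlin's dedup() step addresses. Conceptually, it is an order-preserving first-occurrence filter, as in this small Python sketch (the data is hypothetical):

```python
# Conceptual picture of what Gremlin's dedup() does: keep the first
# occurrence of each element while preserving traversal order.
def dedup(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(dedup(["post1", "post2", "post1", "post3", "post2"]))
# ['post1', 'post2', 'post3']
```

In the traversal above, following several users who each follow overlapping accounts produces the same posts repeatedly; appending .dedup() removes those repeats before any further work is done on them.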
2. Filtering Too Late in the Traversal
g.V().out('purchased').has('category', 'electronics').has('price', lt(1000))
Optimized:
g.V().hasLabel('customer').as('c')
.out('purchased')
.has('category', 'electronics')
.has('price', lt(1000))
In the unoptimized version, we start with g.V(), which retrieves all vertices and then traverses out. By starting with hasLabel('customer'), we narrow the search space early. This reduces traversal volume and improves speed, especially on large graphs.
3. User Followed-Post Traversal Optimization
g.V().has('userId', 'u123')
.out('follows')
.out('posts')
.has('category', 'tech')
.values('title')
No indexing on userId or category, unfiltered fan-out, unnecessary chaining.
After Optimization:
// Indexes created beforehand on 'userId' and 'category'
g.V().has('userId', 'u123').as('user')
.out('follows').as('follower')
.out('posts').as('post')
.has('category', 'tech')
.values('title')
- Used indexed properties for early filtering.
- Added as() steps for clarity (and potential reuse with select()).
- Reduced fan-out by specifying traversal order.
4. Multi-Hop Customer Interaction Lookup
Before Optimization:
g.V().hasLabel('customer')
.out('purchased')
.in('also_purchased')
.out('reviewed')
.values('reviewScore')
- No property filtering; unnecessary multi-hop traversal.
After Optimization:
// Index assumed on 'reviewScore'
g.V().hasLabel('customer')
.out('purchased')
.in('also_purchased')
.out('reviewed')
.has('reviewScore', gt(4))
.limit(5)
.valueMap('productName', 'reviewScore')
- Introduced filtering (reviewScore > 4) before fetching values.
- Limited results to avoid unnecessary computation.
- Used valueMap() for structured, minimal output.
Advantages of Optimizing Graph Query Performance in the Gremlin Query Language
These are the Advantages of Improving Graph Query Performance in the Gremlin Query Language:
- Faster Query Execution Times: Improving query performance reduces the time required to traverse the graph and retrieve results. This is especially crucial in large-scale graphs with millions of nodes and edges. Optimized queries minimize unnecessary steps and data processing. As a result, applications respond more quickly to user actions or backend processes. Faster execution enhances user experience and lowers latency. This leads to smoother, more responsive graph-driven applications.
- Reduced Computational Resource Usage: Efficient Gremlin queries consume less CPU, memory, and I/O resources. When traversals are optimized, fewer operations are performed internally. This reduces the load on the graph engine and allows for better resource management. Lower resource usage means that your system can support more users or concurrent queries. It also helps in cost savings, especially on cloud-based graph databases like AWS Neptune. Resource efficiency translates into scalability and stability.
- Scalability for Larger Graphs: As your graph grows, unoptimized queries can become exponentially slower. Improving performance ensures that your application can handle large datasets without breaking down. Techniques like early filtering, indexing, and traversal profiling make queries scale gracefully. Profiling helps developers anticipate issues before deploying to production. This is key for future-proofing your solution as data and complexity increase. Scalability is crucial for real-time analytics, recommendations, and fraud detection systems.
- Better User Experience in Real-Time Applications: In interactive applications like social networks or dashboards, query performance directly affects user experience. Delays in response times can frustrate users and reduce engagement. Optimizing queries ensures results are returned quickly, even for complex relationships. This is especially important in recommendation engines, search functions, or fraud detection interfaces. A performant Gremlin traversal leads to seamless and intuitive user interactions. Fast, responsive graphs are a competitive advantage.
- Easier Debugging and Maintenance: Optimized queries are typically cleaner, more structured, and easier to understand. With tools like profile() and explain(), developers can quickly identify inefficient steps. This makes it easier to debug slow queries and maintain traversal logic over time. Better performance often comes with clearer logic and fewer edge cases. Teams spend less time troubleshooting and more time improving features. Clean, performant queries support long-term code quality and team productivity.
- Improved Query Reliability and Stability: Optimized queries reduce the likelihood of timeout errors or system crashes during execution. Poorly performing queries can overwhelm the graph engine, especially during peak loads. With well-structured and efficient traversals, operations become more predictable and reliable. This ensures that production systems maintain high availability. Stable queries are critical for applications with strict uptime requirements. Performance tuning helps ensure long-term system health and resilience.
- Lower Operational Costs in Cloud Environments: Many graph databases (e.g., AWS Neptune, Cosmos DB) operate on a pay-as-you-use pricing model. Optimizing query performance means fewer computational cycles, shorter execution times, and lower memory use. This translates directly into reduced cloud costs. Teams running thousands of queries per day benefit greatly from performance tuning. In high-scale systems, even minor improvements can result in significant savings. Optimization is not just technical—it’s also financially strategic.
- Enhanced Developer Productivity: When queries are efficient and well-structured, developers spend less time troubleshooting and tuning. Profiling tools like profile() allow quick identification of bottlenecks, making optimization easier. As performance improves, teams can focus on building new features rather than debugging. Streamlined traversals also improve readability and knowledge transfer across teams. In the long run, efficient queries lead to faster development cycles and cleaner codebases. This makes onboarding and maintenance smoother as well.
- Support for Real-Time Analytics and Insights: Graph queries are increasingly used in real-time analytics, such as fraud detection or dynamic recommendations. Optimized queries enable near-instantaneous results, which is critical in such use cases. With improved performance, organizations can process and analyze graph data on the fly. This allows decision-making systems to act in real time. Without optimized queries, real-time use cases would be impractical or unreliable. Traversal tuning turns complex graph logic into real-time intelligence.
- Better Alignment with Business SLAs and KPIs: Business-critical applications often have SLAs (Service Level Agreements) for performance and uptime. Optimizing Gremlin queries helps meet response time targets and system throughput goals. This ensures technical performance aligns with business expectations. Meeting these metrics builds trust with stakeholders and improves user satisfaction. Organizations can scale services confidently without fear of performance degradation. Query optimization is directly tied to both technical and business success.
Disadvantages of Optimizing Graph Query Performance in the Gremlin Query Language
These are the Disadvantages of Improving Graph Query Performance in the Gremlin Query Language:
- Increased Query Complexity: Efforts to optimize queries often involve advanced Gremlin constructs like fold(), unfold(), choose(), or repeat(). These can make traversals more complex and harder to understand for beginners. While the query may run faster, it could sacrifice readability and maintainability. New team members may struggle to modify or debug optimized queries. Code reviews also become more difficult with overly dense logic. There’s a balance between performance and simplicity.
- More Time Required for Development and Testing: Optimizing a query involves multiple cycles of profiling, tuning, and validation. Developers must test different traversal paths, measure metrics with profile(), and ensure accuracy. This process can take significant time, especially with large graphs or dynamic schemas. Time spent optimizing could delay feature development. In agile environments, performance tuning may conflict with tight deadlines. Efficiency gains must be weighed against development timelines.
- Risk of Premature Optimization: Focusing on performance too early can lead to unnecessary complexity before it’s even needed. Queries may run well under current data loads, and premature optimization may waste effort. Developers might optimize parts of the query that aren’t real bottlenecks. This can also lead to misleading assumptions and future technical debt. It’s better to first measure and identify real problems using profile() before making changes. Always optimize with data-backed insights.
- Potential Overhead from Index Management: Using indexes is a key performance technique, but managing them adds operational overhead. Creating and maintaining indexes requires planning and testing. If not managed properly, indexes may consume additional memory or degrade write performance. Incorrect or outdated indexes can even cause slower queries. As data evolves, index strategies must be adjusted. This adds another layer of complexity to system maintenance.
- Tight Coupling to Specific Graph Schema or Dataset: Optimizations are often tailored to a particular dataset structure or query pattern. If the schema changes (e.g., new edge types or property keys), optimized queries may no longer perform well or work at all. This tight coupling reduces flexibility and makes it harder to generalize query logic. Maintenance becomes harder when queries are tightly bound to a fixed graph model. Future-proofing such queries requires extra planning and testing.
- Potential Performance Loss in Small or In-Memory Graphs: In environments like TinkerGraph or small test datasets, optimization efforts might offer no noticeable performance gain. In-memory graphs typically execute traversals quickly without requiring complex tuning. Over-engineering queries for such setups wastes developer effort. Simple traversals might outperform “optimized” versions due to their minimal overhead. Always match your optimization effort to the actual system scale and deployment type.
- Debugging Becomes More Challenging: Optimized queries often use chaining, nesting, and multiple conditionals to reduce traversal time. While efficient, these patterns can obscure logic and make debugging harder. Developers may need to trace complex traversals step by step to locate issues. Errors hidden deep in a traversal chain can go unnoticed. Profiling may show performance gains, but correctness could suffer. Maintaining accuracy while optimizing requires deep knowledge of Gremlin semantics.
- Reduced Readability for Cross-Team Collaboration: Highly optimized queries may be efficient but difficult for others on the team to interpret. This affects collaboration, especially in large or multi-developer environments. Non-experts might avoid modifying performant but opaque queries due to fear of breaking something. Code reviews and documentation become more important when performance is prioritized. Readability is essential for scalability not just in code, but in developer understanding. Optimization should not come at the cost of shared clarity.
- Overfitting Queries to Specific Use Cases: Performance tuning often leads developers to design queries that perfectly fit a known data pattern or workflow. While this improves speed for that specific case, it can reduce flexibility for future changes. Queries may fail or become inefficient with different data volumes, structures, or filtering needs. This overfitting limits reusability and adaptability. Balanced optimization considers performance while maintaining query generality and robustness.
- Hidden Trade-offs in Traversal Strategy Selection: Different optimization strategies (like repeat(), barrier(), or early filtering) can yield different trade-offs. For example, batching may improve speed but increase memory use; early filtering may limit exploratory depth. Without careful analysis, you may improve one metric while unintentionally degrading another. The profile() step helps, but interpreting it correctly requires experience. Optimization decisions should always be made with a full understanding of the traversal’s broader impact.
Future Developments and Enhancements in Graph Query Performance for the Gremlin Query Language
Following are the Future Developments and Enhancements in Graph Query Performance for the Gremlin Query Language:
- Smarter Query Optimizers in Gremlin Engines: Future Gremlin-compatible engines like JanusGraph and Neptune are expected to integrate smarter query optimizers. These enhancements may automatically reorder steps or apply traversal rewrites for better performance. Instead of relying solely on manual tuning, the engine itself could predict the most efficient execution plan. This will benefit developers by reducing the need for deep profiling knowledge. Gremlin will become more accessible to new users. Smarter optimizers will also help ensure consistent performance across varying data sizes.
- Native Support for Parallel and Asynchronous Traversals: Upcoming enhancements in Gremlin implementations may include better parallel and asynchronous traversal support. This would allow multiple branches of a query to execute simultaneously, reducing total query time. Current engines process traversals mostly in a single-threaded or sequential fashion. Native concurrency would significantly boost speed for complex or multi-path queries. As graph data grows, this capability becomes critical. Future updates may unlock parallelism without complex developer intervention.
- Enhanced Profiling and Debugging Tools: Expect future versions of Gremlin consoles and visualization platforms to offer richer profiling tools. Enhancements may include graphical performance dashboards, real-time metrics, and visual path heatmaps. Developers will gain deeper visibility into how queries execute internally. This will simplify bottleneck detection and enable more intuitive tuning. Tooling improvements will bridge the gap between raw performance data and actionable optimization strategies. These enhancements will also benefit debugging in multi-team environments.
- Integration with AI-Powered Query Suggestion Systems: AI and ML are increasingly being integrated into database tooling. In the future, Gremlin IDEs or graph management systems could suggest optimized traversals using AI. Based on your dataset and usage patterns, these tools could recommend filters, index strategies, or traversal rewrites. This would lower the barrier for new users and improve performance with minimal effort. Such systems might also prevent common anti-patterns proactively. AI-assisted query building will be a powerful addition to Gremlin workflows.
- Dynamic Indexing and Adaptive Execution Strategies: Graph engines are likely to adopt dynamic indexing techniques, where indexes are created or adjusted in response to query patterns. Combined with adaptive execution strategies, Gremlin queries could adjust their logic based on data distribution or workload conditions. This means a query might execute differently depending on the state of the graph. These adaptive models can reduce the need for constant manual optimization. It ensures optimal performance as datasets evolve or scale dynamically.
- Better Support for Streaming and Real-Time Graph Updates: With the rise of real-time analytics, future Gremlin platforms may support performance tuning specifically for streaming and event-driven graphs. Enhancements may include lightweight traversal paths, incremental computation, and caching for repeated patterns. This would reduce latency for real-time use cases like fraud detection or live recommendations. Gremlin’s ability to handle high-velocity graph data will be essential for next-gen applications. Performance tuning will evolve to meet these streaming demands.
- Standardization of Benchmarking and Performance Metrics: To promote consistent performance evaluation, the Gremlin ecosystem is expected to adopt standard benchmarking tools and metrics. This will allow developers to compare engines, queries, and optimizations reliably. Standardized tools may also simulate workloads or offer reproducible test suites. These benchmarks will guide users in understanding trade-offs between performance, complexity, and cost. Such transparency will drive improvements across all Gremlin-compatible engines. Standardization will also help with vendor-neutral optimization.
- Optimization APIs and Graph Schema Hints: Future Gremlin environments might support optimization APIs that allow developers to provide execution hints. This could include preferred join order, caching rules, or traversal limits. Combined with schema metadata, engines could apply these hints for query tuning. Developers gain more control while still benefiting from automatic optimization. Such APIs would make performance tuning more declarative. Graph-aware hints would lead to more intelligent and efficient execution paths.
- Cross-Platform Optimization Guidance: As Gremlin becomes more integrated with multi-model systems (like SQL + Graph + Document), unified optimization practices will emerge. Developers could receive platform-specific suggestions depending on whether their queries run on JanusGraph, Neptune, or Cosmos DB. Optimization guidelines will evolve to span across execution engines and graph stores. This will make it easier to write portable, performant queries. Cross-platform guidance will improve migration, scalability, and hybrid deployments.
- Broader Community Contributions and Open-Source Advancements: With increasing adoption, the Gremlin community and open-source contributors will drive innovation in performance optimization. New traversal steps, better default configurations, and optimization extensions may emerge from the community. Documentation and best practices will improve alongside the tools. As graph processing grows in popularity, Gremlin’s performance roadmap will be shaped by its users. Community-led enhancements ensure that improvements reflect real-world use cases and challenges.
Conclusion
The Gremlin Query Language, whether running on Amazon Neptune, JanusGraph, or TinkerGraph, offers a robust and scalable way to work with graph data. From basic traversals to complex graph analytics, Gremlin gives developers the flexibility to navigate and query interconnected datasets efficiently. By indexing common lookup properties, caching repeated results, filtering early, and profiling regularly, you can unlock the full potential of your graph-driven applications.
Whether you’re just starting or scaling to enterprise-level use cases, mastering Gremlin in Neptune ensures you’re prepared for graph workloads in real-world environments. Keep exploring, keep querying, and make the most out of your graph database journey with Amazon Neptune and Gremlin.