Lettria Knowledge Studio

GraphRAG

We're building the next generation of retrieval-augmented generation, merging knowledge graphs with vector search to create something better than either alone. The goal is to help enterprises adopt RAG technology without hallucinating chatbots and the lack of trust they create.

Why merge Knowledge Graphs with Vector DBs?

Goodbye Hallucinations

A key limitation of current Large Language Models lies in the vector databases they rely on, which, despite their capabilities, often lead to data 'hallucinations'.

To address this gap and improve a base LLM's accuracy on specific use cases, RAG has proven very helpful, but it is currently limited by its reliance on vector databases alone.

Unlocking their full potential demands context, and knowledge graphs are built for exactly that.

The best of both worlds

At Lettria, we believe the future lies in the hybridization of these two worlds to obtain a faster, more accurate, and more contextually aware solution.

Vector embeddings provide fast, efficient pre-filtering that narrows down the search space. Then the knowledge graph steps in, offering rich context and the relationships between entities.
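
As a rough sketch of how that two-stage retrieval could work, the function below first ranks chunks by vector similarity and then walks the graph around the top hits. The `hybrid_retrieve` name, the dict-based graph, and the cosine scoring are illustrative assumptions, not Lettria's actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_retrieve(query_vec, chunk_vectors, graph, top_k=5, hops=1):
    """Vector pre-filter first, then graph expansion for context."""
    # 1. Vector search: rank chunk ids by similarity to the query, keep the top_k.
    scored = sorted(chunk_vectors.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    candidates = [chunk_id for chunk_id, _ in scored[:top_k]]

    # 2. Graph expansion: pull in nodes linked to the candidates for extra context.
    context = set(candidates)
    frontier = set(candidates)
    for _ in range(hops):
        frontier = {nbr for node in frontier for nbr in graph.get(node, [])}
        context |= frontier
    return context
```

The vector step keeps retrieval fast, while the graph step restores the relationships that pure similarity search loses.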

Enter Graph RAG

Lettria introduces a revolutionary solution: GraphRAG. By merging the contextual richness of knowledge graphs with the dynamic power of RAG, we provide the context that LLMs need to answer complex questions more accurately.

The result? Precise, relevant, and insightful answers that capture the true essence of your data.

Improved Accuracy, Scalability and Performance

With GraphRAG, the concept of "chat with your data" becomes a reality, transforming data from a static repository to an active, conversational partner.

Your unstructured data becomes usable and useful, ready to answer your business questions.

How we do it at Lettria

Here's how unstructured text is turned into a graph:

1. Document Import and Parsing

Each document will be carefully cleaned and preprocessed so we can extract text chunks and store metadata.
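
As a rough illustration of this cleaning-and-chunking step, the sketch below collapses whitespace and cuts the text into fixed-size chunks with simple metadata. The `Chunk` dataclass, the chunk size, and the metadata fields are assumptions for the example, not Lettria's actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    doc_id: str
    text: str
    metadata: dict = field(default_factory=dict)

def parse_document(doc_id: str, raw_text: str, chunk_size: int = 500) -> list[Chunk]:
    # Basic cleaning: collapse whitespace; real pipelines also strip markup, headers, etc.
    cleaned = " ".join(raw_text.split())
    # Fixed-size chunking for simplicity; sentence- or section-aware splitting is common.
    return [
        Chunk(doc_id=doc_id,
              text=cleaned[i:i + chunk_size],
              metadata={"source": doc_id, "offset": i})
        for i in range(0, len(cleaned), chunk_size)
    ]
```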

2. Entity Recognition and Linking

The chunks will be processed through our natural language structuration API to identify entities and relationships between them, and produce a knowledge graph.
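
To give a sense of what this step produces, the sketch below turns a hypothetical entity/relation response into (subject, predicate, object) triples that become edges in the knowledge graph. The response schema and the `chunk_to_triples` helper are assumptions, not the real NLS API format.

```python
# Hypothetical response shape; the real Lettria NLS API schema may differ.
def chunk_to_triples(nls_response: dict) -> list[tuple[str, str, str]]:
    """Convert extracted relations into (subject, predicate, object) graph edges."""
    return [
        (rel["subject"], rel["predicate"], rel["object"])
        for rel in nls_response.get("relations", [])
    ]

# Example: one chunk about an acquisition becomes two edges in the knowledge graph.
response = {
    "relations": [
        {"subject": "Acme Corp", "predicate": "acquired", "object": "Beta Ltd"},
        {"subject": "Beta Ltd", "predicate": "based_in", "object": "Paris"},
    ]
}
print(chunk_to_triples(response))
```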

3. Embeddings and Vector Management

The chunks will then be vectorised in parallel.
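
A minimal sketch of parallel embedding, assuming a placeholder `embed` function standing in for whatever embedding model or API is actually used:

```python
from concurrent.futures import ThreadPoolExecutor

def embed(text: str) -> list[float]:
    # Placeholder: call your embedding model or provider API here.
    return [0.0] * 768

def embed_chunks(texts: list[str], max_workers: int = 8) -> list[list[float]]:
    # Embed chunks in parallel; a thread pool suits I/O-bound embedding API calls.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(embed, texts))
```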

4. Database Merging and Reconciliation

Both the structured output from our NLS API and the embeddings will be stored in a single database, ready to power all your RAG applications.
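
To show how the two outputs can live side by side, the sketch below bundles each chunk's text, embedding, and graph triples into one record, reusing the hypothetical names from the earlier sketches. The record layout and field names are assumptions; the actual storage schema may differ.

```python
def build_records(chunks, vectors, triples_per_chunk):
    """Bundle each chunk's text, embedding, and graph triples into one record."""
    return [
        {
            "chunk_id": f"{chunk.doc_id}:{chunk.metadata['offset']}",
            "text": chunk.text,
            "embedding": vector,   # powers the vector pre-filtering
            "triples": triples,    # powers graph traversal and context
            "metadata": chunk.metadata,
        }
        for chunk, vector, triples in zip(chunks, vectors, triples_per_chunk)
    ]
# The resulting records can be loaded into any store that supports both
# vector indexes and graph/relationship queries.
```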

The key to any successful data science project is the data collection phase.

Patrick Duvaut

Head of Innovation

Get started with
Lettria Knowledge Studio
Request Demo ->