
Lettria Update: Improved Document Parsing and a More Configurable Assistant Experience

Boost document automation with Lettria’s new features: smarter parsing, modular Assistants, and enhanced chat for faster, scalable, reliable workflows.


At Lettria, we’re continuing to evolve how businesses interact with complex text data. This release introduces a series of behind-the-scenes improvements designed to make document workflows faster to configure, easier to scale, and more reliable when it comes to extracting structured insights from unstructured sources.

Whether you’re working with messy PDFs, trying to automate information retrieval from multi-format documents, or building a domain-specific chatbot, these new capabilities should reduce setup time and increase control over every part of the process.

More Accurate Document Parsing, Especially for Tables and Complex Layouts

One of the most common friction points we hear from users is related to document formats that aren’t cleanly structured: tables embedded in free text, multi-column pages, inconsistent headings, and so on. We’ve spent the last few weeks addressing this by improving our document parsing pipeline in three key areas:

  • Table recognition and extraction: The platform now does a much better job of identifying tables in PDFs and other formats, even when borders are missing or column alignments are irregular. Extracted data is cleaner and more consistent, making downstream processing (e.g., ingestion or retrieval) more accurate.
  • Support for complex document layouts: We’ve increased our model’s ability to understand pages with multiple columns, embedded sections, or mixed media. This helps maintain logical groupings when text is chunked, which improves relevance during search and question-answering tasks.
  • Smarter chunking based on structure: Instead of relying on fixed-size windows or simple rules, the updated chunking engine now takes into account the actual semantic and structural layout of a document. This means information is split in a way that better reflects how the document is meant to be read.

These updates result in more reliable extraction, fewer manual corrections, and more usable outputs—especially for enterprise teams dealing with documentation at scale.
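To make the "smarter chunking" idea concrete, here is a minimal sketch of structure-aware chunking: instead of cutting at fixed character offsets, split at heading boundaries first and only fall back to paragraph breaks when a section is too long. This is an illustrative simplification, not Lettria's actual chunking engine, and the function and parameter names are our own.

```python
import re

def chunk_by_structure(text, max_chars=500):
    """Split text at heading boundaries first, then fall back to
    paragraph breaks, so chunks follow the document's layout."""
    # Treat Markdown-style headings as hard section boundaries.
    sections = re.split(r"(?m)^(?=#{1,6} )", text)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        if len(section) <= max_chars:
            chunks.append(section)
            continue
        # Oversized section: split on blank lines (paragraphs) and
        # greedily pack paragraphs into chunks under the limit.
        current = ""
        for para in re.split(r"\n\s*\n", section):
            if current and len(current) + len(para) + 2 > max_chars:
                chunks.append(current)
                current = para
            else:
                current = f"{current}\n\n{para}" if current else para
        if current:
            chunks.append(current)
    return chunks
```

Because boundaries follow the document's own structure, each chunk stays a self-contained unit of meaning, which is what improves relevance at retrieval time.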


Assistants: A New Abstraction Layer to Manage and Customize Workflows

We’re introducing the concept of Assistants—a new way to structure, configure, and manage how text data is processed across different folders and use cases within Lettria.

An Assistant is a reusable, modular setup that can be linked to one or more document folders. Each Assistant has its own configuration and can be tailored to handle specific formats, domains, or tasks. Here’s what you can now control per Assistant:

  • Parsing logic: Choose how documents are broken down, and apply layout-aware parsing strategies based on the file type or domain.
  • Ingestion settings: Define how and where data is stored, and which parts of the document are indexed for search or chat.
  • Retrieval behavior: Customize the parameters that govern how context is selected for each query, including custom retrieval models, filters, and vector strategies.
  • Prompt and code injection: Each Assistant can run with its own prompt templates, and for advanced users, custom code can be injected to fine-tune behavior or post-process results.

This modular approach is especially helpful for teams working across different departments or clients, where each use case requires a slightly different configuration—but where you want to standardize and scale the setup.
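The per-Assistant settings above can be pictured as a small, reusable configuration object. The sketch below is purely illustrative: the field names (`parsing_strategy`, `retrieval_top_k`, and so on) are hypothetical stand-ins, not Lettria's actual configuration schema.

```python
from dataclasses import dataclass, field

@dataclass
class AssistantConfig:
    """Illustrative per-Assistant configuration: parsing, ingestion,
    retrieval, and prompting bundled into one reusable unit."""
    name: str
    parsing_strategy: str = "layout_aware"     # how documents are broken down
    index_fields: list = field(default_factory=lambda: ["body", "tables"])
    retrieval_top_k: int = 5                   # context passages pulled per query
    prompt_template: str = "Answer using only the context:\n{context}\n\nQ: {question}"

    def build_prompt(self, context: str, question: str) -> str:
        return self.prompt_template.format(context=context, question=question)

# Two Assistants sharing one structure but tuned per use case.
legal = AssistantConfig(name="legal-contracts", retrieval_top_k=8)
support = AssistantConfig(name="support-manuals", parsing_strategy="table_first")
```

The value of the abstraction is exactly this: the shape of the setup is standardized, while each instance is tuned to its own documents and users.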

Enhanced Chat Mode: Compare Results, Continue Conversations

We’ve also improved the Chat interface to support more dynamic exploration of your data. There are two main additions:

  • Dual Assistant querying: You can now select two Assistants and ask them the same question. Their answers are displayed side by side, making it easy to compare retrieval methods, prompts, or other differences in setup.
  • Follow-up awareness: You can now ask follow-up questions that take into account the previous exchange. This turns the chat into more of an ongoing conversation rather than a series of disconnected queries.

These features are especially useful when fine-tuning Assistants or experimenting with different knowledge extraction strategies. Instead of guessing which configuration works best, you can test and compare them in real time.
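The dual-querying workflow boils down to fanning one question out to two configurations and pairing the answers. Here is a minimal sketch under stated assumptions: `ask_fn` is a placeholder for whatever client call actually performs the query, and `fake_ask` is a hypothetical stand-in used only so the example runs.

```python
def compare_assistants(question, assistant_names, ask_fn):
    """Send the same question to each named assistant and collect
    the answers keyed by assistant name, for side-by-side review."""
    return {name: ask_fn(name, question) for name in assistant_names}

def fake_ask(assistant, question):
    # Stand-in for a real API call.
    return f"[{assistant}] answer to: {question}"

results = compare_assistants("What is the notice period?",
                             ["legal-v1", "legal-v2"], fake_ask)
for name, answer in results.items():
    print(f"{name}: {answer}")
```

Swapping prompts, retrieval parameters, or chunking strategies between the two assistants then turns every question into a small A/B test.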

Why This Matters

As more organizations rely on documents—contracts, technical specs, product manuals, internal reports—as primary knowledge sources, the ability to extract, structure, and interact with that data becomes a competitive advantage. This release helps close the gap between messy input and usable output, with tools that are built to adapt to real-world complexity.

If you're already using Lettria, these updates are now live and available in your workspace. If you're exploring solutions for document understanding or knowledge automation, this is a good moment to start a conversation.

Get in touch to schedule a live demo →
