
Releases: HKUDS/LightRAG

v1.4.8

14 Sep 21:27

Important Notes

  1. Introduced a raw data query API, /query/data, enabling developers to retrieve the complete raw data recalled by LightRAG for fine-grained processing (a usage sketch follows these notes).
  2. Optimized the system to efficiently handle hundreds of documents and hundreds of thousands of entities and relationships in one batch job, resolving UI lag and enhancing overall system stability.
  3. Dropped entities with short numeric names that degrade performance and query results: names containing only two digits, and names shorter than six characters that mix digits and dots (e.g., 1.1, 12.3, 1.2.3).
  4. Significantly improved the quantity and quality of entity and relation extraction for smaller-parameter models, yielding a substantial improvement in query performance.
  5. Optimized prompt engineering for the Qwen3-30B-A3B-Instruct and gpt-oss-120b models, incorporating targeted fault tolerance for their outputs.
  6. Implemented a max tokens configuration to prevent excessively long or endless output loops in Large Language Model (LLM) responses during the entity-relationship extraction phase:
# For vLLM/SGLang deployed models, or most OpenAI-compatible API providers
OPENAI_LLM_MAX_TOKENS=9000

# For Ollama deployed models
OLLAMA_LLM_NUM_PREDICT=9000

# For OpenAI o1-mini or newer models
OPENAI_LLM_MAX_COMPLETION_TOKENS=9000

The purpose of the max tokens setting is to truncate LLM output before timeouts occur, thereby preventing document extraction failures. It addresses cases where certain text blocks (e.g., tables or citations) containing numerous entities and relationships lead to overly long or even endless-loop outputs from the LLM. This setting is particularly important for locally deployed, smaller-parameter models. A suitable max tokens value can be calculated with this formula:
max_tokens = LLM_TIMEOUT * llm_output_tokens_per_second (e.g., 9000 = 180 s * 50 tokens/s)
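
A usage sketch for the new /query/data endpoint (assumptions: a LightRAG server on the default port 9621, and a request payload shaped like the existing /query API; the response keys are illustrative, not authoritative):

import requests

# Query the raw-data endpoint instead of /query to retrieve the raw
# recalled data rather than a synthesized answer.
resp = requests.post(
    "http://localhost:9621/query/data",
    json={"query": "What is LightRAG?", "mode": "hybrid"},
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

# Inspect the structure; the exact keys depend on the server version.
print(list(data.keys()))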

What's New

  • refact: Enhance KG Extraction with Improved Prompts and Parser Robustness by @danielaskdd in #2032
  • feat: Limit Pipeline Status History Messages to Latest 1000 Entries by @danielaskdd in #2064
  • feature: Enhance document status display with metadata tooltips and better icons by @danielaskdd in #2070
  • refactor: Optimize Entity Extraction for Small Parameter LLMs with Enhanced Prompt Caching by @danielaskdd in #2076 #2072
  • Feature: Add LLM COT Rendering support for WebUI by @danielaskdd in #2077
  • feat: Add Deepseek Style CoT Support for OpenAI Compatible LLM Provider by @danielaskdd in #2086
  • Add query_data function and /query/data API endpoint to LightRAG for retrieving structured responses by @tongda in #2036 #2100

What's Fixed

  • Fix: Eliminate Lambda Closure Bug in Embedding Function Creation by @avchauzov in #2028 (illustrated after this list)
  • refac: Eliminate Conditional Imports and Simplify Initialization by @danielaskdd in #2029
  • Fix: Preserve Leading Spaces in Graph Label Selection by @danielaskdd in #2030
  • Fix ENTITY_TYPES Environment Variable Handling by @danielaskdd in #2034
  • refac: Enhanced Entity Relation Extraction Text Sanitization and Normalization by @danielaskdd in #2031
  • Fix LLM output instability for <|> tuple delimiter by @danielaskdd in #2035
  • Enhance KG Extraction for LLM with Small Parameters by @danielaskdd in #2051
  • Add VDB error handling with retries for data consistency by @danielaskdd in #2055
  • Fix incorrect variable name in NetworkXStorage file path by @danielaskdd in #2060
  • refact: Smart Configuration Caching and Conditional Logging by @danielaskdd in #2068
  • refactor: Improved Exception Handling with Context-Aware Error Messages by @danielaskdd in #2069
  • fix env file example by @k-shlomi in #2075
  • Increase default Gunicorn worker timeout from 210 to 300 seconds by @danielaskdd in #2078
  • Fix assistant message display with content fallback by @danielaskdd in #2079
  • Prompt Optimization: remove angle brackets from entity and relationship output formats by @danielaskdd in #2082
  • Refactor PostgreSQL Graph Query by Native SQL and Standardized Parameter Passing by @Matt23-star in #2027
  • Update env.example by @rolotumazi in #2091
  • refactor: Optimize Prompt and Fault Tolerance for LLM with Smaller Param LLM by @danielaskdd in #2093
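
For context on the lambda closure fix in #2028: the classic Python pitfall (shown generically below, not as the exact code changed in the PR) is that a lambda created in a loop captures the loop variable by reference, so every resulting function sees the value from the last iteration. Binding the value through a default argument is the standard fix.

models = ["small", "medium", "large"]

# Buggy: each lambda closes over the loop *variable*, so every entry
# uses the value from the final iteration ("large").
buggy = [lambda text: f"{model}:{text}" for model in models]
print([f("hi") for f in buggy])   # ['large:hi', 'large:hi', 'large:hi']

# Fixed: a default argument snapshots the current value per iteration.
fixed = [lambda text, model=model: f"{model}:{text}" for model in models]
print([f("hi") for f in fixed])   # ['small:hi', 'medium:hi', 'large:hi']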

Full Changelog: v1.4.7...v1.4.8

v1.4.8rc9

09 Sep 19:06
9c9d55b

What's Changed

  • Prompt Optimization: remove angle brackets from entity and relationship output formats by @danielaskdd in #2082
  • Refactor PostgreSQL Graph Query by Native SQL and Standardized Parameter Passing by @Matt23-star in #2027
  • feat: Add Deepseek Style CoT Support for OpenAI Compatible LLM Provider by @danielaskdd in #2086

Full Changelog: v1.4.8rc8...v1.4.8rc9

v1.4.8rc8

08 Sep 15:51
569ed94

Full Changelog: v1.4.8rc4...v1.4.8rc8

v1.4.8rc4

04 Sep 18:46

Important Notes

Refactored the prompt template and enhanced robust handling of malformed output for Knowledge Graph (KG) extraction with small-parameter Large Language Models (LLMs). This change invalidates all cached LLM outputs.

What's Changed

  • Fix: Eliminate Lambda Closure Bug in Embedding Function Creation by @avchauzov in #2028
  • refac: Eliminate Conditional Imports and Simplify Initialization by @danielaskdd in #2029
  • Fix: Preserve Leading Spaces in Graph Label Selection by @danielaskdd in #2030
  • Fix ENTITY_TYPES Environment Variable Handling by @danielaskdd in #2034
  • refac: Enhanced Entity Relation Extraction Text Sanitization and Normalization by @danielaskdd in #2031
  • refact: Enhance KG Extraction with Improved Prompts and Parser Robustness by @danielaskdd in #2032
  • Fix LLM output instability for <|> tuple delimiter by @danielaskdd in #2035
  • Enhance KG Extraction for LLM with Small Parameters by @danielaskdd in #2051
  • Add VDB error handling with retries for data consistency by @danielaskdd in #2055
  • Fix incorrect variable name in NetworkXStorage file path by @danielaskdd in #2060

Full Changelog: v1.4.7...v1.4.8rc4

v1.4.7

29 Aug 16:35
0c41be6

Important Notes

  • The doc-id based chunk filtering feature has been removed from PostgreSQL vector storage.
  • The prompt template has been updated, invalidating all LLM caches.
  • The default value of the FORCE_LLM_SUMMARY_ON_MERGE environment variable has been changed from 4 to 8. This adjustment significantly reduces the number of LLM calls during the document indexing phase, thereby shortening overall document processing time.
  • Added support for multiple rerank providers (Cohere AI, Jina AI, Aliyun Dashscope). If rerank was previously enabled, a new env var must be set to enable it again:
RERANK_BINDING=cohere
  • Introduced a new environment variable, LLM_TIMEOUT, to specifically control the Large Language Model (LLM) timeout. The existing TIMEOUT variable now exclusively manages the Gunicorn worker timeout. The default LLM timeout is set to 180 seconds. If you previously relied on the TIMEOUT variable for LLM timeout configuration, please update your settings to use LLM_TIMEOUT instead:
LLM_TIMEOUT=180
  • Added comprehensive environment variable settings for the OpenAI and Ollama Large Language Model (LLM) bindings.
    The generic TEMPERATURE environment variable for LLM temperature control has been deprecated. LLM temperature is now configured using binding-specific environment variables:
# Temperature setting for OpenAI binding
OPENAI_LLM_TEMPERATURE=0.8

# Temperature setting for Ollama binding
OLLAMA_LLM_TEMPERATURE=1.0

To mitigate endless output loops and prevent greedy decoding for Qwen3, set the temperature parameter to a value between 0.8 and 1.0. To disable the model's "Thinking" mode, please refer to the following configuration:

### Qwen3-specific parameters when deployed via vLLM
# OPENAI_LLM_EXTRA_BODY='{"chat_template_kwargs": {"enable_thinking": false}}'

### OpenRouter Specific Parameters
# OPENAI_LLM_EXTRA_BODY='{"reasoning": {"enabled": false}}'

For a full list of supported options, use the following commands:

lightrag-server --llm-binding openai --help
lightrag-server --llm-binding ollama --help
lightrag-server --embedding-binding ollama --help
  • A full list of new env vars and changed default values:
# Timeout for LLM requests (seconds)
LLM_TIMEOUT=180

# Timeout for embedding requests (seconds)
EMBEDDING_TIMEOUT=30

### Number of summary segments or tokens to trigger LLM summary on entity/relation merge (at least 3 is recommended)
FORCE_LLM_SUMMARY_ON_MERGE=8

### Max description token size to trigger LLM summary
SUMMARY_MAX_TOKENS=1200

### Recommended LLM summary output length in tokens
SUMMARY_LENGTH_RECOMMENDED=600

### Maximum context size sent to LLM for description summary
SUMMARY_CONTEXT_SIZE=12000

### RERANK_BINDING type:  null, cohere, jina, aliyun
RERANK_BINDING=null

### Enable rerank by default in query params when RERANK_BINDING is not null
# RERANK_BY_DEFAULT=True

### chunk selection strategies
###     VECTOR: pick KG chunks by vector similarity, delivering chunks to the LLM that align more closely with naive retrieval
###     WEIGHT: pick KG chunks by entity and chunk weight, delivering chunks more strongly tied to the KG
###     If reranking is enabled, the impact of the chunk selection strategy is diminished.
KG_CHUNK_PICK_METHOD=VECTOR

### Entity types that the LLM will attempt to recognize
ENTITY_TYPES=["person", "organization", "location", "event", "concept"]

What's Fixed

  • Fix ollama stop option handling and enhance temperature configuration by @danielaskdd in #1909
  • Feat: Change embedding formats from float to base64 for efficiency by @danielaskdd in #1913 (decoding sketch after this list)
  • Refact: Optimized LLM Cache Hash Key Generation by Including All Query Parameters by @danielaskdd in #1915
  • Fix: Unify document chunks context format in only_need_context query by @danielaskdd in #1923
  • Fix: Update OpenAI embedding handling for both list and base64 embeddings by @danielaskdd in #1928
  • Fix: Initialize first_stage_tasks and entity_relation_task to prevent empty-task cancel errors by @danielaskdd in #1931
  • Fix: Resolve workspace isolation issues across multiple storage implementations by @danielaskdd in #1941
  • Fix: remove query params from cache key generation for keyword extraction by @danielaskdd in #1949
  • Refac: uniformly protected with the get_data_init_lock for all storage initializations by @danielaskdd in #1951
  • Fixes crash when processing files with UTF-8 encoding error by @danielaskdd in #1952
  • Fix Document Selection Issues After Pagination Implementation by @danielaskdd in #1966
  • Change the status from PROCESSING/FAILED to PENDING at the beginning of document processing pipeline by @danielaskdd in #1971
  • Refac: Increase file_path field length to 32768 and add schema migration for Milvus DB by @danielaskdd in #1975
  • Optimize keyword extraction prompt, and remove conversation history from keyword extraction by @danielaskdd in #1977
  • Fix(UI): Implement XLSX format upload support for web UI by @danielaskdd in #1982
  • Fix: resolved UTF-8 encoding error during document processing by @danielaskdd in #1983
  • Fix: Preserve Document List Pagination During Pipeline Status Changes by @danielaskdd in #1992
  • Update README-zh.md by @OnesoftQwQ in #1989
  • Fix: Added import of OpenAILLMOptions when using azure_openai by @thiborose in #1999
  • fix(webui): resolve document status grouping issue in DocumentManager by @danielaskdd in #2013
  • fix mismatch of 'error' and 'error_msg' in MongoDB by @LinkinPony in #2009
  • Fix UTF-8 Encoding Issues Causing Document Processing Failures by @danielaskdd in #2017
  • docs(config): fix typo in .env comments by @SandmeyerX in #2021
  • fix: adjust the EMBEDDING_BINDING_HOST for openai in the env.example by @pedrofs in #2026
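
For context on the base64 embedding change in #1913: the OpenAI embeddings API can return vectors base64-encoded instead of as JSON float lists, which shrinks response payloads. A minimal decoding sketch, assuming the payload is a flat array of little-endian float32 values as OpenAI returns (the vector below is fabricated for demonstration only):

import base64
import numpy as np

# Fabricate a base64 payload from a known float32 vector.
vec = np.array([0.1, -0.2, 0.3], dtype=np.float32)
b64 = base64.b64encode(vec.tobytes()).decode()

# Decode it back: base64 string -> raw bytes -> float32 array.
decoded = np.frombuffer(base64.b64decode(b64), dtype=np.float32)
print(decoded)  # [ 0.1 -0.2  0.3]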


v1.4.6

03 Aug 18:11

What's New

  • feat(performance): Optimize Document Deletion Performance with Entity/Relation Indexing by @danielaskdd in #1904
  • refactor: improve JSON parsing reliability with json-repair library by @danielaskdd in #1897

Full Changelog: v1.4.5...v1.4.6

v1.4.5

31 Jul 10:20
364ae23

What's New

  • Feat(webui): add document list pagination for webui by @danielaskdd in #1886
  • Feat: add Document Processing Track ID Support for Frontend by @danielaskdd in #1882
  • Better prompt for entity description extraction to avoid hallucinations by @AkosLukacs in #1845
  • Refine entity continuation prompt to avoid duplicates and reduce document processing time by @danielaskdd in #1868
  • feat: Add rerank score filtering with configurable threshold by @danielaskdd in #1871
  • Feat(webui): add query param reset buttons to webui by @danielaskdd in #1889
  • Feat(webui): enhance status card with new settings from health endpoint by @danielaskdd in #1873

Full Changelog: v1.4.4...v1.4.5

v1.4.4

24 Jul 09:12

Full Changelog: v1.4.3...v1.4.4

v1.4.3

19 Jul 04:33
0171e0c

Fixed

  • Fix: resolved PostgreSQL AGE agtype parsing error and simplify error logging by @danielaskdd in #1802
  • Fix: implemented entity-keyed locks for edge merging operations to ensure robust race condition protection by @danielaskdd in #1811
  • Fix: add retry mechanism for Memgraph transient errors by @danielaskdd in #1810
  • Fix file path handling in graph operations by @danielaskdd in #1796
  • Enhance Redis connection handling with retries and timeouts by @danielaskdd in #1809

Full Changelog: v1.4.2...v1.4.3

v1.4.2

17 Jul 08:47

Hotfix: Resolve entity_type and weight problem for Milvus DB

  • Fix Milvus DataNotMatchException by @okxuewei in #1792
  • fix: change default edge weight from 0.0 to 1.0 in entity extraction and graph storage by @danielaskdd in #1794

Full Changelog: v1.4.1...v1.4.2