LangChain Integrations
QvikChat uses LangChain primarily for building chat endpoints that support retrieval-augmented generation (RAG), i.e., chat endpoints that generate responses by first retrieving relevant data from a data source and then answering based on the retrieved data. This lets you build chat services that give accurate, relevant answers grounded in the data available in your data source.
LangChain supports a wide range of integrations that can be used with QvikChat to extend the functionality of chat endpoints. The key ones are embedding models, vector stores, and data loaders.
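To set the stage, here is a minimal sketch of what a RAG-enabled endpoint might look like. The defineChatEndpoint function, the enableRAG flag, and the import path are assumptions for illustration; only retrieverConfig is referenced elsewhere on this page.

```typescript
// Minimal sketch of a RAG-enabled chat endpoint. defineChatEndpoint,
// enableRAG, and the import path are illustrative assumptions; adjust
// them to your QvikChat version.
import { defineChatEndpoint } from "@oconva/qvikchat";

defineChatEndpoint({
  endpoint: "rag-chat", // served at /rag-chat
  enableRAG: true, // enable retrieval-augmented generation
  retrieverConfig: {
    dataType: "csv", // one of the built-in loaders
    filePath: "data/products.csv", // hypothetical data file
    generateEmbeddings: true, // embed the data when the endpoint starts
  },
});
```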
Embedding Models
QvikChat uses the text-embedding-004 model through the Gemini API as the default embedding model for generating embeddings when using RAG. You can use a custom embedding model by providing an instance of any LangChain-supported embedding model as the embeddingModel when configuring an endpoint's retrieverConfig, or in the configuration parameters of the getDataRetriever method. To learn more, see Embedding Models.
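As an example, the sketch below supplies a LangChain OpenAIEmbeddings instance to getDataRetriever. OpenAIEmbeddings is a real LangChain class; the getDataRetriever import path and the fields other than embeddingModel are assumptions.

```typescript
// Sketch: swapping the default text-embedding-004 model for a LangChain
// embedding model. Import paths and fields other than embeddingModel are
// illustrative assumptions.
import { OpenAIEmbeddings } from "@langchain/openai";
import { getDataRetriever } from "@oconva/qvikchat/data-retrievers"; // path assumed

const retriever = await getDataRetriever({
  dataType: "text",
  filePath: "data/faq.txt", // hypothetical data file
  generateEmbeddings: true,
  // any LangChain-supported embeddings instance can be provided here
  embeddingModel: new OpenAIEmbeddings({ model: "text-embedding-3-small" }),
});
```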
Vector Stores
By default, QvikChat uses an in-memory vector store, but you can provide an instance of any hosted vector store supported by LangChain (JS). For more information, see the Vector Store documentation.
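For instance, you might create a hosted vector store instance with LangChain (JS) and hand it to the retriever, as in the sketch below. PineconeStore and its fromExistingIndex method are real LangChain APIs; the vectorStore field on the retriever configuration is an assumption here, so confirm the exact property name in the Vector Store documentation.

```typescript
// Sketch: using a hosted Pinecone vector store instead of the default
// in-memory store. The vectorStore field name is an assumption.
import { Pinecone } from "@pinecone-database/pinecone";
import { PineconeStore } from "@langchain/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { getDataRetriever } from "@oconva/qvikchat/data-retrievers"; // path assumed

const pinecone = new Pinecone(); // reads PINECONE_API_KEY from the environment
const pineconeIndex = pinecone.Index("qvikchat-demo"); // hypothetical index name

const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

const retriever = await getDataRetriever({
  dataType: "pdf",
  filePath: "data/manual.pdf", // hypothetical data file
  generateEmbeddings: true,
  vectorStore, // assumed field; see the Vector Store docs for the exact name
});
```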
Data Loaders
QvikChat provides built-in support for loading data from text, PDF, JSON, CSV, and code files in supported programming languages. To load a file type that QvikChat does not support by default, you can use any LangChain-supported data loader to load the data and provide the resulting documents as the docs property when configuring the retriever. Check the Loading Custom Data section on the data loaders page for more information.
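For example, the sketch below loads a DOCX file, a format QvikChat does not handle natively, using LangChain's DocxLoader, and passes the loaded documents through the docs property. DocxLoader is a real LangChain community loader; the getDataRetriever import path is an assumption.

```typescript
// Sketch: loading an unsupported file type with a LangChain data loader and
// passing the documents to the retriever via the docs property.
import { DocxLoader } from "@langchain/community/document_loaders/fs/docx";
import { getDataRetriever } from "@oconva/qvikchat/data-retrievers"; // path assumed

const loader = new DocxLoader("data/handbook.docx"); // hypothetical file
const docs = await loader.load();

const retriever = await getDataRetriever({
  docs, // pre-loaded LangChain documents; no filePath needed
  generateEmbeddings: true,
});
```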