### Arguments of Doc_QA class ###

**embeddings (str, optional)**: the embedding model used for queries and vector storage. Defaults to "text-embedding-ada-002".

**chunk_size (int, optional)**: chunk size of texts split from documents. Defaults to 1000.

**model (str, optional)**: llm model to use. Defaults to "gpt-3.5-turbo".

**verbose (bool, optional)**: whether to show log texts. Defaults to False.

**language (str, optional)**: the language of the documents and prompt; used to ensure documents do not exceed the max token size of the llm input.

**search_type (str, optional)**: search type used to find similar documents in the db. Includes 'merge', 'mmr', 'svm', and 'tfidf'. You can also supply a custom search_type function, as long as its input is (query_embeds: np.array, docs_embeds: list[np.array], k: int, relevancy_threshold: float, log: dict) and its output is a list of indices of the selected documents. Defaults to 'merge'.

**record_exp (str, optional)**: if not empty, use aiido to save run parameters and metrics to the remote mlflow, with record_exp as the experiment name. Defaults to "".

**system_prompt (str, optional)**: the system prompt that gives special instructions to the llm model; it is not used when searching for relevant documents. Defaults to "".

**max_input_tokens (int, optional)**: max token length of the llm input. Defaults to 3000.

**temperature (float, optional)**: temperature of the llm model, from 0.0 to 1.0. Defaults to 0.0.
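As noted above, `search_type` also accepts a custom function matching the signature (query_embeds, docs_embeds, k, relevancy_threshold, log). A minimal sketch of such a function, using cosine similarity as the ranking metric (the function name and the scoring choice are illustrative, not part of the library):

```python
import numpy as np

def cosine_search(query_embeds, docs_embeds, k, relevancy_threshold, log):
    """Custom search_type function: rank documents by cosine similarity,
    drop those below relevancy_threshold, and return the top-k indices."""
    # Normalize the query vector (epsilon guards against division by zero).
    q = query_embeds / (np.linalg.norm(query_embeds) + 1e-10)

    # Cosine similarity between the query and each document embedding.
    sims = []
    for d in docs_embeds:
        dn = d / (np.linalg.norm(d) + 1e-10)
        sims.append(float(np.dot(q, dn)))

    # Sort indices by similarity, descending; filter by threshold; keep top k.
    order = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)
    return [i for i in order if sims[i] >= relevancy_threshold][:k]
```

This function could then be passed directly as the `search_type` argument instead of one of the built-in string options.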