Write AZURE_API_KEY=your azure key, AZURE_API_BASE=your Language API base url, AZURE_API_TYPE=azure, and AZURE_API_VERSION=2023-05-15 into a .env file, and place the file in the directory where you run akasha.
```
## .env file
AZURE_API_KEY={your azure key}
AZURE_API_BASE={your Language API base url}
AZURE_API_TYPE=azure
AZURE_API_VERSION=2023-05-15
```
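To check that the .env file is in the right place and readable, you can parse it yourself with the standard library. This is only a minimal sketch: `load_env` is a hypothetical helper written for illustration, not akasha's API (akasha reads the .env file on its own), and all key values below are placeholders.

```python
import os

# Hypothetical helper: load KEY=VALUE pairs from a .env file into os.environ.
# akasha loads the .env file itself; this only verifies the file's contents.
def load_env(path=".env"):
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")  # split on the first '='
                os.environ.setdefault(key.strip(), value.strip())

# Write a sample .env with the Azure settings from above (placeholder values).
with open(".env", "w") as f:
    f.write(
        "AZURE_API_KEY=your-azure-key\n"
        "AZURE_API_BASE=https://your-resource.openai.azure.com/\n"
        "AZURE_API_TYPE=azure\n"
        "AZURE_API_VERSION=2023-05-15\n"
    )

load_env()
print(os.environ["AZURE_API_TYPE"])
```

If the printed value matches what you wrote into the file, the .env file is being found and parsed from the current working directory.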
The account on Hugging Face and the email you use to request access to Meta-Llama must be the same, so that you can download models from Hugging Face once your account is approved.
Once approved, the model page should show the gated-model notice: "You have been granted access to this model".
Similarly, after obtaining your Hugging Face key, you can choose one of the following methods to import it:
1. Put the key in the .env file as HF_TOKEN=your api key:
```
HF_TOKEN={your api key}
```
2. Set it as an environment variable (variable name: HF_TOKEN).
3. Set the environment variable with export in the terminal:
```
export HF_TOKEN={your api key}
```
4. Set it in Python with os.environ['HF_TOKEN'] = your api key:
```python
# Python 3.9
import os
os.environ['HF_TOKEN'] = "your api key"

import akasha

ak = akasha.Doc_QA()
response = ak.get_response(dir_path, prompt, model="hf:meta-llama/Llama-2-7b-chat-hf")
```
Setting multiple API keys
If you need to set a different API key for each model, the Doc_QA, Model_Eval, and Summary classes all provide an env_file parameter, so you can pass a different .env file to export the API keys. If the parameter is missing or empty, the default (.env) is used.
```python
import akasha

ak = akasha.Doc_QA("openai:gpt-4o", env_file=".env", verbose=True)
ak.ask_self("日本本州最大的城市是哪裡?")
```
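The fallback rule described above (a missing or empty env_file falls back to the default .env) can be sketched with the standard library. This is an illustration of the documented behavior, not akasha's actual implementation: `resolve_env_file` and the file name `model_a.env` are hypothetical.

```python
import os

# Hypothetical helper sketching the documented env_file fallback:
# use the given file if it exists and is non-empty, else default to ".env".
def resolve_env_file(env_file=""):
    if env_file and os.path.isfile(env_file) and os.path.getsize(env_file) > 0:
        return env_file
    return ".env"

# Create a per-model env file with its own key (placeholder value).
with open("model_a.env", "w") as f:
    f.write("OPENAI_API_KEY=key-for-model-a\n")

print(resolve_env_file("model_a.env"))  # the per-model file is used
print(resolve_env_file(""))             # missing/empty falls back to .env
```

In practice this means you can keep one .env file per provider (for example, one with Azure keys and one with OpenAI keys) and pass the matching file name via env_file when constructing each Doc_QA, Model_Eval, or Summary instance.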