[LangChain] Custom chain


Overview

To implement a custom Chain, subclass Chain and implement its abstract members, as follows:

Content

from __future__ import annotations

from typing import Any, Dict, List, Optional

from pydantic import Extra

from langchain.base_language import BaseLanguageModel
from langchain.callbacks.manager import (
    AsyncCallbackManagerForChainRun,
    CallbackManagerForChainRun,
)
from langchain.chains.base import Chain
from langchain.prompts.base import BasePromptTemplate


# Subclass Chain
class MyCustomChain(Chain):
    """An example of a custom chain."""

    prompt: BasePromptTemplate
    """Prompt object to use."""
    llm: BaseLanguageModel
    output_key: str = "text"  #: :meta private:

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    # From the Chain abstract class; must be overridden
    @property
    def input_keys(self) -> List[str]:
        """Will be whatever keys the prompt expects.

        :meta private:
        """
        return self.prompt.input_variables

    # From the Chain abstract class; must be overridden
    @property
    def output_keys(self) -> List[str]:
        """Will always return text key.

        :meta private:
        """
        return [self.output_key]

    # From the Chain abstract class; must be overridden
    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        # Your custom chain logic goes here
        # This is just an example that mimics LLMChain
        prompt_value = self.prompt.format_prompt(**inputs)

        # Whenever you call a language model, or another chain, you should pass
        # a callback manager to it. This allows the inner run to be tracked by
        # any callbacks that are registered on the outer run.
        # You can always obtain a callback manager for this by calling
        # `run_manager.get_child()` as shown below.
        response = self.llm.generate_prompt(
            [prompt_value], callbacks=run_manager.get_child() if run_manager else None
        )

        # If you want to log something about this run, you can do so by calling
        # methods on the `run_manager`, as shown below. This will trigger any
        # callbacks that are registered for that event.
        if run_manager:
            run_manager.on_text("Log something about this run")

        return {self.output_key: response.generations[0][0].text}

    async def _acall(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        # Your custom chain logic goes here
        # This is just an example that mimics LLMChain
        prompt_value = self.prompt.format_prompt(**inputs)

        # Whenever you call a language model, or another chain, you should pass
        # a callback manager to it. This allows the inner run to be tracked by
        # any callbacks that are registered on the outer run.
        # You can always obtain a callback manager for this by calling
        # `run_manager.get_child()` as shown below.
        response = await self.llm.agenerate_prompt(
            [prompt_value], callbacks=run_manager.get_child() if run_manager else None
        )

        # If you want to log something about this run, you can do so by calling
        # methods on the `run_manager`, as shown below. This will trigger any
        # callbacks that are registered for that event.
        if run_manager:
            await run_manager.on_text("Log something about this run")

        return {self.output_key: response.generations[0][0].text}

    @property
    def _chain_type(self) -> str:
        return "my_custom_chain"


from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate

chain = MyCustomChain(
    prompt=PromptTemplate.from_template("tell us a joke about {topic}"),
    llm=ChatOpenAI(),
)

chain.run({"topic": "callbacks"}, callbacks=[StdOutCallbackHandler()])
"""
> Entering new MyCustomChain chain...
Log something about this run
> Finished chain.
'Why did the callback function feel lonely? Because it was always waiting for someone to call it back!'
"""
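The comments above stress passing `run_manager.get_child()` into inner calls so that callbacks registered on the outer run also observe the inner run. The sketch below illustrates that parent/child propagation idea in plain Python, with no langchain dependency; `CallbackManager`, `get_child`, and `on_text` here are simplified stand-ins for the real API, not its implementation:

```python
from typing import Callable, List


class CallbackManager:
    """Illustrative-only sketch of parent/child callback propagation
    (NOT the real langchain CallbackManagerForChainRun)."""

    def __init__(self, handlers: List[Callable[[str], None]]):
        self.handlers = handlers

    def get_child(self) -> "CallbackManager":
        # The child shares the parent's handlers, so events fired by an
        # inner run still reach callbacks registered on the outer run.
        return CallbackManager(self.handlers)

    def on_text(self, text: str) -> None:
        for handler in self.handlers:
            handler(text)


events: List[str] = []
outer = CallbackManager([events.append])


def inner_run(run_manager: CallbackManager) -> None:
    # An "inner chain": it only sees the child manager it was handed.
    run_manager.on_text("inner event")


inner_run(outer.get_child())
outer.on_text("outer event")
print(events)  # -> ['inner event', 'outer event']
```

This is why the example guards with `if run_manager:` and forwards `get_child()` rather than the outer manager itself: each nested run gets its own manager while still reporting to the same registered handlers.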

Summary:

  1. Define a class (e.g. MyCustomChain) that inherits from Chain, as in: class MyCustomChain(Chain):
  2. Since Chain is an abstract class, override its two properties and one method: input_keys, output_keys, and _call() (optionally also _acall() for async support).
  3. Create a chain instance via MyCustomChain, then execute it with the run method.
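To make the three steps above concrete without needing an LLM or an API key, here is a toy sketch of the same subclassing contract. `MiniChain`, `EchoChain`, and their members are hypothetical names invented for this illustration; only the pattern (abstract `input_keys`/`output_keys`/`_call`, concrete `run`) mirrors Chain:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class MiniChain(ABC):
    """Stripped-down stand-in for langchain's Chain, showing only the
    subclassing contract from the summary above."""

    @property
    @abstractmethod
    def input_keys(self) -> List[str]:
        """Keys the chain expects in its inputs dict."""

    @property
    @abstractmethod
    def output_keys(self) -> List[str]:
        """Keys the chain promises to return."""

    @abstractmethod
    def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """The custom chain logic."""

    def run(self, inputs: Dict[str, Any]) -> str:
        # Validate inputs against input_keys, then delegate to _call,
        # roughly mirroring how Chain dispatches to the subclass.
        missing = set(self.input_keys) - set(inputs)
        if missing:
            raise ValueError(f"Missing input keys: {missing}")
        outputs = self._call(inputs)
        return outputs[self.output_keys[0]]


class EchoChain(MiniChain):
    """Toy subclass: formats a template instead of calling an LLM."""

    @property
    def input_keys(self) -> List[str]:
        return ["topic"]

    @property
    def output_keys(self) -> List[str]:
        return ["text"]

    def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return {"text": f"tell us a joke about {inputs['topic']}"}


print(EchoChain().run({"topic": "callbacks"}))  # -> tell us a joke about callbacks
```

The real MyCustomChain follows exactly this shape; the only extra machinery is the pydantic config and the run_manager/callback plumbing shown in the full listing.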

Reference:

Custom chain
