
I've been looking for something like this! Does it optimize the prompt template for LangChain only or is there a way I can get it to generate a raw system prompt that I can pass to the OpenAI API directly?


Hello, I'm glad you find it useful. I aimed to build something that serves a real purpose. If you can share details about the use case you're trying to solve, I may add a feature to llmdantic to support it. Right now:
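(The snippets below assume SummarizeInput and SummarizeOutput pydantic models. These are illustrative definitions for completeness, not necessarily the exact fields you'd use:)

```python
# Illustrative pydantic schemas assumed by the snippets below;
# your own field names and validators may differ.
from pydantic import BaseModel


class SummarizeInput(BaseModel):
    text: str  # the text to be summarized


class SummarizeOutput(BaseModel):
    summary: str  # the generated summary
```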

After initializing LLMdantic, you can get the prompt with the following code:

""" from llmdantic import LLMdantic, LLMdanticConfig

from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

config: LLMdanticConfig = LLMdanticConfig( objective="Summarize the text", inp_schema=SummarizeInput, out_schema=SummarizeOutput, retries=3, )

llmdantic = LLMdantic(llm=llm, config=config)

input_data: SummarizeInput = SummarizeInput( text="The quick brown fox jumps over the lazy dog." )

prompt: str = llmdantic.prompt(input_data) """

But this requires a LangChain LLM model. If you don't want to use one, you can use the following code instead:

""" from llmdantic.prompts.prompt_builder import LLMPromptBuilder

from llmdantic.output_parsers.output_parser import LLMOutputParser

output_parser: LLMOutputParser = LLMOutputParser(pydantic_object=SummarizeOutput)

prompt_builder = LLMPromptBuilder( objective="Summarize the text", inp_model=SummarizeInput, out_model=SummarizeOutput, parser=output_parser, )

data: SummarizeInput = SummarizeInput(text="Some text to summarize")

prompt = prompt_builder.build_template()

print(prompt.format(input=data.model_dump())) """
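Once you have that formatted string, you can hand it to the OpenAI API yourself with no LangChain in the request path. A minimal sketch (the prompt text and model name here are placeholders; the live call is commented out since it needs an API key):

```python
# Sketch: send a prompt string produced by the builder straight to the
# OpenAI chat completions API. `prompt_text` stands in for the string
# you get from prompt.format(...) above.
def build_messages(prompt_text: str) -> list[dict]:
    # The whole built prompt goes in as the system message; the user
    # turn can stay empty or carry extra instructions.
    return [{"role": "system", "content": prompt_text}]

messages = build_messages("Summarize the text: Some text to summarize")

# With the official `openai` package (commented out so the sketch runs
# without credentials):
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# response = client.chat.completions.create(
#     model="gpt-4o-mini",  # example model name
#     messages=messages,
# )
# print(response.choices[0].message.content)
```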

But even here we still use LangChain for the prompt building. If you have any questions, feel free to ask; I'll be happy to help.



