Hacker News | new | past | comments | ask | show | jobs | submit | zainhoda's comments | login

It would be amazing if the showcase was itself vibe coded. If so, you should showcase it!


Actually I did! Using Bolt and Supabase.

I didn't add it, but I should -- thanks for the idea :)


Oh this disclaimer would be nice!


will add it for sure - thanks!


Mobile app that lets you continue coding while you’re away from your computer.

The goal is to be a full mobile IDE that lets you use Claude Code, Gemini CLI, and other agentic code editors.

Has mobile-native file browsing and git integration.

https://remote-code.com


Oof… coincidentally my task for today is to implement Sign in with Apple.

If you were starting over what would you do differently?


Definitely save the email in your database — even if it’s a private relay address. Also, send a welcome email right after signup, so users can see which email was used and ideally encourage them to update it to a regular one or add an alternative login method (like passwordless email sign-in or OAuth).

If we were starting over, we’d make that update flow more prominent from day one. Apple’s “Hide My Email” sounds harmless until it silently breaks everything later.
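To make the "save the email" advice concrete, here is a minimal stdlib-only sketch (the function names are hypothetical, and it deliberately skips signature verification for brevity -- in production you must validate the token against Apple's JWKS before trusting any claim). It pulls the `email` claim out of the identity token's payload and flags Apple's private-relay addresses, which end in `@privaterelay.appleid.com`:

```python
import base64
import json

PRIVATE_RELAY_DOMAIN = "privaterelay.appleid.com"


def _b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url-encoded with the padding stripped
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def extract_apple_email(identity_token: str):
    """Return (email, is_private_relay) from an Apple identity token.

    WARNING: does NOT verify the signature -- illustration only.
    """
    try:
        payload = identity_token.split(".")[1]
        claims = json.loads(_b64url_decode(payload))
    except (IndexError, ValueError):
        return None, False
    email = claims.get("email")
    is_relay = bool(email) and email.endswith("@" + PRIVATE_RELAY_DOMAIN)
    return email, is_relay
```

Storing the result (relay address or not) at signup is what lets you send that welcome email later.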


Not OP, but customer identity is a component of my work. Ask users for a recovery email and/or phone number so you can bootstrap their identity if Sign in with Apple goes sideways.


What's The State Of Elm?

I love Elm, but I think it's pretty clear that Evan has effectively declared it abandonware because he couldn't figure out a business model to sustain himself.

What’s Evan Working On?

Sounds like he's talking to a handful of companies that he knows use Elm and doing some tinkering without any defined objectives.


I’m waiting for the same signal. There are essentially 2 vastly different states of the world depending on whether GPT-5 is an incremental change vs a step change compared to GPT-4.


Would you be interested in merging with Vanna in some way?

You’re ahead of us in terms of interface but we’re ahead of you in terms of adoption (because of specific choices we’ve made and partnerships we’ve done).


Nice job getting something released! How does this compare to the other similar open source solutions like Vanna AI and DataHerald?


Thank you! We haven't done that comparison yet, but we will check those two out to learn more. We calculated accuracy with a test data set that's part of the repo; we'll see how we can compare this with the others.
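For context, the simplest version of such an accuracy number is exact match over a labeled test set. A hedged sketch (not their actual harness -- the function name and data shapes are made up for illustration):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match the expected answer.

    Whitespace is stripped so trivial formatting differences don't count
    as misses; real text-to-SQL evaluations usually compare executed
    query results instead of raw strings.
    """
    assert len(predictions) == len(references), "mismatched test set sizes"
    if not predictions:
        return 0.0
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(predictions)
```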


Looks really interesting! I saw something in the code about streaming. Could you explain that a bit more?


Yep! There's a streaming API. It's a little more technically involved, but you can stream back at the point at which an application "halts", meaning you pause and give control back to the user. It only updates state after the stream completes.

Technical details follow:

You can define an action like this:

    @streaming_action(reads=["prompt"], writes=["response"])
    def streaming_chat_call(state: State, **run_kwargs) -> Generator[dict, None, Tuple[dict, State]]:
        client = openai.Client()
        response = client.chat.completions.create(
            ...,
            stream=True,
        )
        buffer = []
        for chunk in response:
            delta = chunk.choices[0].delta.content
            if delta is None:  # the final chunk carries no content
                continue
            buffer.append(delta)
            yield {'response': delta}  # intermediate result, streamed to the caller
        full_response = ''.join(buffer)
        # final result + updated state, produced once the stream completes
        return {'response': full_response}, state.append(response=full_response)
Then you would call the `application.stream_result()` function, which gives you back a container object that you can stream to the user:

    streaming_result_container = application.stream_result(...)
    action_we_just_ran = streaming_result_container.get()
    print(f"getting streaming results for action={action_we_just_ran.name}")

    for result_component in streaming_result_container:
        print(result_component['response']) # this assumes you have a response key in your result

    # get the final result
    final_state, final_result = streaming_result_container.get()
It's nice in a web server or a Streamlit app, where you can use streaming responses to connect to the frontend. Here's how we use it in a Streamlit app -- we plan for a streaming web server soon: https://github.com/DAGWorks-Inc/burr/blob/main/examples/stre....
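The action above leans on a plain-Python mechanic that's easy to miss: a generator can both `yield` chunks and `return` a final value, and the return value surfaces as `StopIteration.value` when the generator is exhausted. A toy illustration of that pattern, with no Burr or OpenAI involved (all names here are made up):

```python
def streaming_chunks(chunks):
    """Yield deltas one by one; return the joined full response at the end."""
    buffer = []
    for chunk in chunks:
        buffer.append(chunk)
        yield chunk  # streamed to the caller as it arrives
    return "".join(buffer)  # becomes StopIteration.value


def consume(gen):
    """Collect streamed deltas and capture the generator's return value."""
    streamed = []
    while True:
        try:
            streamed.append(next(gen))
        except StopIteration as stop:
            return streamed, stop.value


streamed, final = consume(streaming_chunks(["Hel", "lo"]))
# streamed == ["Hel", "lo"], final == "Hello"
```

A framework's streaming container can wrap exactly this: iterate to stream, then hand back the final result and state once the generator finishes.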


I've been looking for something like this! Does it optimize the prompt template for LangChain only or is there a way I can get it to generate a raw system prompt that I can pass to the OpenAI API directly?


Hello, I'm glad you find it useful. I aimed to create something that would serve a purpose. If you can provide details about the use case you're trying to solve, I may add a feature to llmdantic to support it. Right now:

After initializing llmdantic, you can get the prompt with the following code:

    from llmdantic import LLMdantic, LLMdanticConfig
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI()

    config: LLMdanticConfig = LLMdanticConfig(
        objective="Summarize the text",
        inp_schema=SummarizeInput,
        out_schema=SummarizeOutput,
        retries=3,
    )

    llmdantic = LLMdantic(llm=llm, config=config)

    input_data: SummarizeInput = SummarizeInput(
        text="The quick brown fox jumps over the lazy dog."
    )

    prompt: str = llmdantic.prompt(input_data)

But here you need to provide a LangChain LLM model. If you don't want to use one, you can use the following code:

    from llmdantic.prompts.prompt_builder import LLMPromptBuilder
    from llmdantic.output_parsers.output_parser import LLMOutputParser

    output_parser: LLMOutputParser = LLMOutputParser(pydantic_object=SummarizeOutput)

    prompt_builder = LLMPromptBuilder(
        objective="Summarize the text",
        inp_model=SummarizeInput,
        out_model=SummarizeOutput,
        parser=output_parser,
    )

    data: SummarizeInput = SummarizeInput(text="Some text to summarize")

    prompt = prompt_builder.build_template()

    print(prompt.format(input=data.model_dump()))

But here we still use LangChain for the prompt building. If you have any questions, feel free to ask; I'll be happy to help.


I think of all the options for “carrots” and “sticks” that companies are offering for RTO, I like this one the best.

