Hacker News | mhamann's comments

I do really appreciate you taking the time to drop by and leave a comment. But I'm curious: why do you think building agents is so important versus building more of the "AI infrastructure" (which is really what LlamaFarm is trying to do) that will enable the devs who are building integrated AI systems, including agents?


both are important, but one will make more money and have higher chances of survival. incidentally i have found that agent companies have better infra than the horizontal ai-infra-as-a-service companies anyway, and you can intuit why that is. so: you will struggle to get the good customers, because the good customers are increasingly finding it affordable to build rather than buy.


Appreciate the pep talk, but let’s not pretend infra and developer experience plays can’t scale. GitLab, HashiCorp, and Vercel all "just built better DX for open source" and somehow ended up billion-dollar companies.

Agents will come and go (and probably run into the same orchestration headaches), but someone still has to build the reliable, open foundation they’ll stand on.


There are multiple areas of degradation. Typically, you don't ship a dataset to prod and then never change it; you want the system to keep learning and improving as new data becomes available. That can create performance issues as the dataset grows. But your model's output quality can also degrade over time if you're not constantly evaluating its responses. This can happen because of new info within RAG, a model swap/upgrade, or changes to prompts. Juggling all of those knives is tricky. We're hoping we can solve a bunch of pain points around this so that reliable AI systems are accessible to anyone.
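To make the "constantly evaluating" part concrete, here's a minimal sketch of a regression-eval harness (hypothetical names and a stubbed model call, not LlamaFarm's actual API): pin a golden set of prompts with expected keywords, re-run it after every RAG update, model swap, or prompt change, and flag the change if the pass rate drops.

```python
# Hypothetical golden set: prompts paired with keywords the answer must contain.
GOLDEN_SET = [
    {"prompt": "What is our refund window?", "must_contain": ["30 days"]},
    {"prompt": "Which plan includes SSO?", "must_contain": ["Enterprise"]},
]

def fake_model(prompt: str) -> str:
    # Stand-in for a real model call, so the sketch runs self-contained.
    answers = {
        "What is our refund window?": "Refunds are accepted within 30 days.",
        "Which plan includes SSO?": "SSO is available on the Enterprise plan.",
    }
    return answers.get(prompt, "")

def pass_rate(model, golden_set) -> float:
    # Fraction of golden cases whose answer contains all required keywords.
    passed = 0
    for case in golden_set:
        answer = model(case["prompt"]).lower()
        if all(kw.lower() in answer for kw in case["must_contain"]):
            passed += 1
    return passed / len(golden_set)

rate = pass_rate(fake_model, GOLDEN_SET)
assert rate >= 0.9, f"quality regression: pass rate {rate:.0%}"
```

Keyword matching is the crudest possible scorer; the same loop works with an LLM-as-judge or embedding-similarity scorer swapped in.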


Yes, our goal is to provide a stable, open source platform on top of the cutting-edge AI tools. We can systematically update dependencies as needed and ensure that outputs meet quality requirements.

We also have plans for eval features in the product so that users can measure the quality of changes over time, whether to their own project configs or actual LlamaFarm updates.

Yes, all that's a bit hand-wavy, I know. :-) But we do recognize the problem and have real ideas for solutions. Execution is everything, though. ;-)


`lf deploy` here we come!


Oh! Muna looks cool as well! I've just barely glanced at your docs page so far, but I'm definitely going to explore further. One of the biggest issues in the back of our minds is getting models running on a variety of hardware and platforms. Right now, we're just using Ollama with support for Lemonade coming soon. But both of these will likely require some manual setup before deploying LlamaFarm.


We should collab! We prefer to be the underlying infrastructure behind the scenes, and have a pretty holistic approach towards hardware coverage and performance optimization.

Read more:

- https://blog.codingconfessions.com/p/compiling-python-to-run...

- https://docs.muna.ai/predictors/ai#inference-backends


This looks awesome. Are you kind of like Lemonade? Let's chat: robert@llamafarm.dev


Great point. I can see how you'd land there. Also a great idea! xD

Maybe a better descriptor is "self-sovereign AI"? Or "self-hosted AI"?


I think that enterprises and small businesses alike need stuff like this, regardless of whether they're software companies or some other vertical like healthcare or legal. I worked at IBM for over a decade and it was always preferable to start with an open source framework if it fit your problem space, especially for internal stuff. We shipped products with components built on Elastic, Drupal, Express, etc.

You could make the same argument for Kubernetes. If you have the cash and the team, why not build it yourself? Most don't have the expertise or the time to find/train the people who do.

People want AI that works out of the box on day one. Not day 100.


Right...there are lots of ways you could do that. Most of the ways we've seen enabling that sort of thing tend to be programmatic in nature. That's great for some people, but you have to deal with shifting dependencies, sorting out bugs, making sure everything connects properly, etc. Some people will want that for sure, because you do get control over every little piece.

LlamaFarm provides an abstraction over most (eventually all) of those pieces. Something that should work out of the box wherever you deploy it but with various knobs to customize as needed (we're working on an agent to help you with this as well).

In your example (alarm monitoring), I think right now you'd still need to write the agent, but you could use LlamaFarm to deploy an LLM that relied on increasingly accurate examples in RAG and very easily adjust your system prompt.
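The "increasingly accurate examples in RAG" pattern can be sketched roughly like this (hypothetical helper names and a toy similarity function, not LlamaFarm's API): retrieve the most similar past alarm decisions and prepend them as few-shot examples, so the prompt gets better as the example store grows.

```python
# Hypothetical store of past alarms and the decisions that were made on them.
EXAMPLE_STORE = [
    {"alarm": "disk usage 95% on db-1", "label": "page on-call"},
    {"alarm": "single 500 from canary", "label": "ignore"},
]

def retrieve(alarm: str, k: int = 2):
    # Toy similarity: shared-word count. A real system would use embeddings.
    def score(ex):
        return len(set(alarm.split()) & set(ex["alarm"].split()))
    return sorted(EXAMPLE_STORE, key=score, reverse=True)[:k]

def build_prompt(alarm: str, system_prompt: str) -> str:
    # Prepend retrieved cases as few-shot examples ahead of the new alarm.
    shots = "\n".join(
        f"Alarm: {ex['alarm']}\nDecision: {ex['label']}"
        for ex in retrieve(alarm)
    )
    return f"{system_prompt}\n\n{shots}\n\nAlarm: {alarm}\nDecision:"

prompt = build_prompt("disk usage 97% on db-2", "Classify each alarm.")
```

Adjusting the system prompt then becomes a one-string change, while accuracy improves simply by appending corrected cases to the store.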


Cool idea. Thanks for sharing. I was really annoyed by the way Google nerfed the Maps Timeline stuff last year. Obviously this project is way more ambitious than that, but it just goes to show how little Google cares about the longevity of your data.

