Hacker News | cl42's comments

I hate all the portfolio tracking tools out there + don't understand why tools like FactSet or CapitalIQ cost so much.

... so I'm building an open source version.

Track all your trades in Excel and get Sharpe ratios, Sortino ratios, or even pass the data to an LLM to have it recommend trades based on news feeds.

Planning to open source it in the next week or two, once I add the proper tests and docs! :)
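For a sense of the math involved, here's a minimal sketch of the Sharpe and Sortino calculations over a plain list of periodic returns (the tool itself reads trades from Excel; these function names and defaults are just illustrative):

```python
import math

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from a list of periodic (e.g. daily) returns."""
    excess = [r - risk_free_rate / periods_per_year for r in returns]
    mean = sum(excess) / len(excess)
    # Sample standard deviation of excess returns.
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    std = math.sqrt(var)
    return (mean / std) * math.sqrt(periods_per_year)

def sortino_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Like Sharpe, but penalizes only downside deviation."""
    excess = [r - risk_free_rate / periods_per_year for r in returns]
    mean = sum(excess) / len(excess)
    # Downside deviation: root-mean-square of the negative excess returns.
    downside = [min(r, 0.0) ** 2 for r in excess]
    dd = math.sqrt(sum(downside) / len(excess))
    return (mean / dd) * math.sqrt(periods_per_year)
```

Since the Sortino ratio ignores upside volatility, it typically comes out higher than the Sharpe ratio for a strategy with positive mean returns.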


I'm a huge Malick fan. If you are curious about his distinctive style, this 20-minute video outlines why his cinematography is so unique and so powerful: https://www.youtube.com/watch?v=waA3RXy13aA


That's a fascinating point, thanks for sharing. I wonder if prediction markets will 'asymptotically converge' into similar sports betting strategies.


The article talks about how prediction markets' sports books are significantly more profitable. This has less to do with financial structures and more to do with who wants to make bets and where.

According to the article, prediction markets make orders of magnitude more money on potentially illegal (by today's standards in the US, anyway) sports betting than on true event contracts.


That’s a fair point. I agree the current profitability is largely driven by demand patterns and regulatory arbitrage rather than pure market design. My comment was more about why the underlying event-contract model struggles to scale sustainably, even when interest exists.


Have you considered launching your own weather prediction market instead?

Parametric insurance, energy traders, etc., could be good markets.


No, I haven’t, but I think the chicken-and-egg problem of liquidity is a huge barrier to entry in these markets specifically. They are small right now, but there are climate derivatives on the Chicago Mercantile Exchange, so this isn’t a new concept, I think.

Could you tell me more? https://discord.gg/HPpN42SKQ


Seems like this is the _President_ of the division, so sounds like there's a nontrivially-sized team to manage.


> Background Tasks.

Amazing. If this means no more management of Celery workers, then I am so happy! So nice to have this directly built _into_ Django, especially for very simple task scheduling.


You will have to keep Celery for the foreseeable future. The current implementation is just a stub which provides a unified interface for some future backends.


The production backend for background tasks is, unfortunately, the battery that is not included.

Meanwhile, Huey works just fine: https://huey.readthedocs.io/en/latest/django.html


I remember huey! Glad to see leifer is still maintaining it. I liked it way back when it first came out; it was a breath of fresh air compared to Celery.


Try RQ.


Do you ever pair trade or hedge your shorts by buying indices? For example, short the quantum stocks but buy NASDAQ index (or call options) in case everything keeps going up?


Hard to say, because most of what I own is indexes. I do explicitly do the inverse of this: I counter the index exposure of certain stocks I don't want to own by shorting them in small amounts. So these shorts are a hedge, versus a stock I think is worthless/a fraud like the QC ones.
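The arithmetic of a hedged short is simple to sketch (hypothetical numbers and function, not anyone's actual positions):

```python
def hedged_pnl(short_entry, short_exit, short_shares,
               index_entry, index_exit, index_units):
    """P&L of a short stock position hedged with a long index position."""
    # A short profits when the price falls (entry > exit).
    short_pnl = (short_entry - short_exit) * short_shares
    # The long index leg profits when the market rises.
    index_pnl = (index_exit - index_entry) * index_units
    return short_pnl + index_pnl

# Hypothetical: short 100 shares at $50, stock rallies to $60 (-$1,000),
# but a long index leg bought at 400, now 440, offsets part of the loss (+$800).
pnl = hedged_pnl(50, 60, 100, 400, 440, 20)  # -1000 + 800 = -200
```

The hedge caps the damage when "everything keeps going up," at the cost of giving back some profit when the short works.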


That’s awesome, thanks!


This Yann LeCun lecture is a nice summary of the conceptual model behind JEPA (+ why he isn't a fan of autoregressive LLMs): https://www.youtube.com/watch?v=yUmDRxV0krg


Is there a summary? Every time I try to understand more about what LeCun is saying, all I see are straw-man versions of LLMs (like claims that LLMs cannot learn a world model, or that next-token prediction is insufficient for long-range planning). There are lots of tweaks you can do to LLMs without fundamentally changing the architecture, e.g. looped latents, or adding additional models as preprocessors for input embeddings (the way image tokens are formed).

I can buy that a pure next-token-prediction inductive bias for training might turn out to be inefficient (e.g. there's clearly lots of information in the residual stream that's being thrown away), but it's not at all obvious a priori, to me as a layman at least, that the transformer architecture is a "dead end".


That's the issue I have with criticism of LLMs.

A lot of people say "LLMs are fundamentally flawed, a dead end, and can never become AGI", but on deeper examination? The arguments are weak at best, and completely bogus at worst. And then the suggested alternatives fail to outperform the baseline.

I think by now it's clear that pure next-token prediction as a training objective is insufficient in practice (it might be sufficient in the limit?), which is why we see things like RLHF, RLAIF and RLVR in post-training instead of just SFT. But that says little about the limitations of next-token prediction as an inference-time architecture.

Next token prediction as a training objective still allows an LLM to learn an awful lot of useful features and representations in an unsupervised fashion, so it's not going away any time soon. But I do expect to see modified pre-training, with other objectives alongside it, to start steering the models towards features that are useful for inference early on.


You don’t sound like a layman knowing the looped latents and others :)


The criticisms are not straw men; they are actually well grounded in math. For instance, his promotion of energy-based models.

In a probability-distribution model, the model is always forced to output a probability over a set of tokens, even if all the states are nonsense. In an energy-based model, the model can infer that a state makes no sense at all and can backtrack by itself.

Notice that diffusion models, DINO, and other successful models are energy-based models, or end up being good proxies for the data density (density is a proxy for entropy, i.e. information).

Finally, all probability models can be thought of as energy-based, but not all EBMs output probability distributions.

So, his argument is not against transformers or the architectures themselves, but more about the learned geometry.
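The normalization point can be made concrete with a toy sketch: a softmax head must spread probability mass over its candidates even when every candidate is implausible, while an energy-based scorer can report that no state is acceptable (the threshold and function names here are illustrative, not any real model's API):

```python
import math

def softmax(logits):
    # A probabilistic model must normalize: outputs always sum to 1,
    # even when every candidate is nonsense.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Three equally implausible continuations still get forced probabilities
# of 1/3 each -- the model cannot express "none of these make sense".
probs = softmax([-10.0, -10.0, -10.0])

def energy_score(energies, threshold=5.0):
    # An energy-based model assigns an unnormalized scalar per state.
    # If every candidate's energy exceeds a threshold, it can signal
    # "no good state" (None) and trigger backtracking instead of sampling.
    best = min(energies)
    return None if best > threshold else energies.index(best)

choice = energy_score([10.0, 9.5, 12.0])  # all high-energy -> None
```

Because the energies are unnormalized, "all states are bad" is a representable outcome, which is exactly what a normalized distribution cannot say.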


I'm really fucking math dumb. Can you explain what the "well grounded" part is, for the mathematically challenged?

Because all I've seen from the "energy based" approach in practice is a lot of hype and not a lot of results. If it isn't applicable to LLMs, then what is it applicable to? Where does it give an advantage? Why would you want it?

I really, genuinely don't get that.


> There is another thing called world models that involves predicting the state of something after some action. But this is a very very limited area of research. My understanding of this is that there just isn't much data of action->reaction.

Folks interested in this can look up Yann LeCun's work on world models and JEPA, which his team at Meta created. This lecture is a nice summary of his thinking on this space and also why he isn't a fan of autoregressive LLMs: https://www.youtube.com/watch?v=yUmDRxV0krg

