I find Python's async to be lacking in fine-grained control. It may be fine for 95% of simple use cases, but it lacks advanced features such as sequential constraining, task queue memory management, task pre-emption, etc. The async keyword also tends to bubble up through codebases in awful ways, making it almost impossible to write reasonably decoupled code.
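As a toy illustration of that bubbling-up (all function names here are made up): once the lowest-level helper is async, everything above it has to become async too, or has to deal with the event loop explicitly.

```python
import asyncio


async def fetch_price(symbol: str) -> float:
    # pretend this does non-blocking I/O
    await asyncio.sleep(0.01)
    return 42.0


async def compute_signal(symbol: str) -> float:
    # forced to be async because fetch_price is async
    price = await fetch_price(symbol)
    return price * 0.5


async def run_strategy() -> None:
    # forced to be async because compute_signal is async
    print(await compute_signal("SPY"))


# a plain synchronous caller can't just call run_strategy(); it has to
# start an event loop, which is exactly the coupling I'm complaining about
asyncio.run(run_strategy())
```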
I've been out of the loop on stats for a while, but is there a viable approach for estimating ex ante the number of clusters when fitting a GMM? I can think of constructing ex post metrics, i.e. using a grid and goodness-of-fit measurements, but that feels more like brute-forcing it.
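For reference, the ex post brute-force version I mean is roughly this (toy data, scikit-learn's GaussianMixture, BIC as the goodness-of-fit score):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# toy data: three well-separated 2D blobs
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(200, 2)) for m in (0, 3, 6)])

# fit a GMM for each candidate K and keep the one with the lowest BIC
bics = {}
for k in range(1, 10):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    bics[k] = gmm.bic(X)

best_k = min(bics, key=bics.get)
print(best_k, bics[best_k])
```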
There are Bayesian nonparametric methods that do this by putting a Dirichlet process prior on the parameters of the mixture components. Both the prior specification and the computation (MCMC) are tricky, though.
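If a full MCMC setup is too heavy, a variational approximation of a (truncated) Dirichlet process mixture is available in scikit-learn's BayesianGaussianMixture; this is a sketch of the idea rather than the full Bayesian treatment, and the toy data and the 0.01 weight threshold below are made up:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(200, 2)) for m in (0, 3, 6)])

dpgmm = BayesianGaussianMixture(
    n_components=15,                                   # truncation level, not the answer
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,                    # smaller -> fewer effective clusters
    random_state=0,
).fit(X)

# components with non-negligible weight are the "effective" clusters
print((dpgmm.weights_ > 0.01).sum())
```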
Is there any didactic implementation of the Disruptor / multicast ring available somewhere? I've been curious about working through a practical example to understand the algorithm better.
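I haven't found a canonical didactic one, but here is my own toy single-producer/single-consumer sketch of the core idea (preallocated ring, monotonically increasing sequence numbers, producer waiting on the slowest consumer before overwriting a slot). The real Disruptor's cache-line padding, memory barriers, wait strategies, and multi-producer CAS are deliberately left out:

```python
import threading


class ToyRing:
    def __init__(self, size: int):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.size = size
        self.mask = size - 1
        self.buffer = [None] * size
        self.cursor = -1          # last published sequence
        self.consumer_seq = -1    # last consumed sequence

    def publish(self, value) -> None:
        next_seq = self.cursor + 1
        # wait until the slot we are about to reuse has been consumed (wrap check)
        while next_seq - self.size > self.consumer_seq:
            pass  # the real thing uses a wait strategy, not a busy spin
        self.buffer[next_seq & self.mask] = value
        self.cursor = next_seq    # "publish": make the slot visible to the consumer

    def consume(self):
        next_seq = self.consumer_seq + 1
        while next_seq > self.cursor:
            pass                  # wait for the producer to publish
        value = self.buffer[next_seq & self.mask]
        self.consumer_seq = next_seq
        return value


ring = ToyRing(8)


def consumer():
    for _ in range(100):
        ring.consume()


t = threading.Thread(target=consumer)
t.start()
for i in range(100):
    ring.publish(i)
t.join()
```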
About me:
Professional Machine Learning / Quant development background (Python, Torch). Experience with low-latency software engineering in C++. Expertise in time-series-related engineering topics in both ML and software engineering.
Happy to relocate, no need to stay in my field (curious to see new stuff too!)
This is a known problem in generative workflows for AI videos, but it's solvable. Midjourney recently introduced a feature that does this for stills, and ControlNets available in the ComfyUI ecosystem can also partially solve it, albeit with some hassle. I'm pretty sure that if not OpenAI themselves, others will follow with their foundation models.
Coming from finance, I always wonder whether and how these large pre-trained models are usable on any financial time series. I see the appeal of pre-trained models in areas where there is clearly a stationary pattern, even if it's very hidden (e.g., industrial or biological metrics). But given the inherently low signal-to-noise ratio and how extremely non-stationary or chaotic financial data processes tend to be, I struggle to see the use of pre-trained foundation models.
I played around with the TimeGPT beta, predicting the S&P 500 index's next-day performance (not as a multivariate time series, as I couldn't figure out how to set that up), and trying to use the confidence intervals it generated to buy options was useless at best.
I can see Chronos working a bit better, as it tries to convert trends and pieces of time series into tokens, like GPT does for phrases.
E.g., a stock goes down terribly, then dead-cat bounces. This is common.
A stock goes up, hits resistance from existing sell orders, and comes back down.
A stock is on a stable upward trend and continues that trend.
If I can verbalize these common patterns, it's likely Chronos can also pick up on them.
Once again, quality of data trumps all for LLMs, so performance might vary. If you read the paper, they point out a few situations where the LLM is unable to learn a trend, e.g. when the prompting time series isn't long enough.
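For what it's worth, my rough (possibly inaccurate) mental model of the Chronos-style tokenization is something like the following: mean-scale a context window, then quantize it into a fixed vocabulary of bins so each observation becomes a discrete token a language model can work with. The bin count, clipping range, and toy series here are all made up and differ from the paper's exact scheme:

```python
import numpy as np


def tokenize(series: np.ndarray, n_bins: int = 512, clip: float = 5.0) -> np.ndarray:
    # scale by the mean absolute value of the context window
    scale = np.mean(np.abs(series)) or 1.0
    scaled = np.clip(series / scale, -clip, clip)
    # map [-clip, clip] onto integer token ids in [0, n_bins - 1]
    edges = np.linspace(-clip, clip, n_bins + 1)
    return np.clip(np.digitize(scaled, edges) - 1, 0, n_bins - 1)


prices = np.cumsum(np.random.default_rng(0).normal(size=200))  # toy "price" path
tokens = tokenize(prices)
print(tokens[:10])
```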
I'm not a 3D artist, but why are we still, for lack of a better word, "stuck" with having / wanting to use simple meshes? I appreciate the simplicity, but isn't this an unnecessary limitation on mesh generation? It feels like an approach that imitates the constraints of having both limited hardware and limited artist resources. Shouldn't AI models help us break these boundaries?
My understanding is that it's quite hard to make convex objects with radiance fields, right? For example, the furniture in the OP would be quite problematic.
We can create radiance fields with photogrammetry, but IMO we need much better algorithms for transforming them into high-quality triangle meshes that are usable in lower-triangle-budget media like games.
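To make concrete what I mean by the current pipeline, a minimal sketch would be: extract an isosurface with marching cubes, then decimate it toward a game-friendly budget. The blobby density volume standing in for a radiance field and the 5k-triangle target below are arbitrary:

```python
import numpy as np
from skimage import measure
import open3d as o3d

# toy density volume: a blobby sphere instead of a real radiance field
grid = np.linspace(-1, 1, 96)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
density = 1.0 - np.sqrt(x**2 + y**2 + z**2)

# isosurface extraction (this is where most of the quality is lost)
verts, faces, _, _ = measure.marching_cubes(density, level=0.2)

mesh = o3d.geometry.TriangleMesh(
    o3d.utility.Vector3dVector(verts),
    o3d.utility.Vector3iVector(faces.astype(np.int32)),
)
# quadric decimation toward an arbitrary triangle budget
low_poly = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
print(len(mesh.triangles), "->", len(low_poly.triangles))
```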
"Lower triangle budget media" is what I wonder if its still a valid problem. Modern game engines coupled with modern hardware can already render insane number of triangles. It feels like the problem is rather in engines not handling LOD correctly (see city skylines 2), although stuff like UE5 nanite seems to have taken the right path here.
I suppose though there is a case for AI models for example doing what nanite does entirely algorithmically and research like this paper may come in handy there.
I was referring to being stuck with having to create simple / low-tri polygonal meshes, as opposed to using complex poly meshes such as photogrammetry would provide. The paper specifically addresses clean low-poly meshes as opposed to what they call complex isosurfaces created by photogrammetry and other methods.
Lots of polys is bad for performance. For a flat object like a table, you want that to be low-poly. Parallax mapping can also help give a 3D look without increasing the poly count.
What matters more to me in a time-series context is usually how forecasting tools deal with non-stationarity of the underlying data process. I'm not an expert on LLMs, but I assume they won't be the ideal tool in these contexts, because even fine-tuning will be rather expensive...?
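Concretely, the kind of classical pre-processing I have in mind regardless of the forecaster is a unit-root check plus differencing; the toy random-walk data below is made up:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# toy geometric random walk as a stand-in for a price series
prices = 100.0 * np.exp(np.cumsum(rng.normal(scale=0.01, size=500)))
returns = np.diff(np.log(prices))  # differenced (log returns)

for name, series in [("prices", prices), ("log returns", returns)]:
    pvalue = adfuller(series)[1]   # ADF test: H0 = unit root (non-stationary)
    print(f"{name}: ADF p-value = {pvalue:.3f}")
```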