No hooks on the FE side. We use a global lock via a promise; our API clients are not tied to React in any way.
For every API call, if the lock is not set, the client checks whether the JWT is still valid. If it is not, the lock is set by assigning a new promise to it and storing that promise's resolve function in an external variable to be called after the refresh completes (resolving it releases the promise that the other calls are holding, letting them pick up the latest token).
All calls await the lock; each one either waits for the refresh to complete or just moves on and performs validation with the currently set token.
Looks like this:
- Await the lock; if it has already been resolved, execution just continues.
- Check JWT validity via an exp check (the API server itself is responsible for verifying the signature and other validity factors); if the token is not valid, replace the lock with a new promise and hold onto its resolver. Perform the refresh. Release the lock by resolving the promise.
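A minimal sketch of that lock pattern in TypeScript. The storage helpers, the /auth/refresh endpoint, and the bare exp check are all assumptions for illustration, not our real client, and error handling plus the no-token-yet case are omitted:

```typescript
// Hypothetical storage/refresh helpers -- stand-ins for the real client code.
let storedToken = "";
const getToken = (): string => storedToken;
const saveToken = (token: string): void => {
  storedToken = token;
};
// Assumed refresh endpoint; the real one is whatever the auth server exposes.
const refreshToken = async (): Promise<string> => {
  const res = await fetch("/auth/refresh", { method: "POST" });
  const body = await res.json();
  return body.token as string;
};

// Global lock: a promise that is already resolved when no refresh is running.
let lock: Promise<void> = Promise.resolve();
let releaseLock: (() => void) | undefined;

// Simplified exp check -- the API server still verifies the signature, etc.
function isExpired(jwt: string): boolean {
  const { exp } = JSON.parse(atob(jwt.split(".")[1]));
  return Date.now() >= exp * 1000;
}

async function ensureFreshToken(): Promise<string> {
  // Every call awaits the lock; if it has already resolved, this is a no-op.
  await lock;

  let token = getToken();
  if (isExpired(token)) {
    // Take the lock: new promise, resolver held in an external variable.
    lock = new Promise((resolve) => {
      releaseLock = resolve;
    });
    try {
      token = await refreshToken();
      saveToken(token);
    } finally {
      // Resolving the promise releases every call awaiting the lock; they
      // continue with the newly stored token.
      releaseLock?.();
      releaseLock = undefined;
    }
  }
  return token;
}
```

Each API client just calls ensureFreshToken() before attaching the Authorization header, which is what keeps the whole thing framework-agnostic.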
Looking around on Twitter and repos in the OSS community, it appears that Zod is now almost always favored over Yup, despite an almost identical API. Curious to hear what people think if they've worked with both. We went with Yup early on at my company, and now that's what we use for consistency across our codebase. I haven't personally found it to be lacking, but some of the logic around nulls and undefined always leads me back to the docs.
My company used Yup initially but we moved to Zod to be able to infer types from schemas. For example, API response payloads are Zod schemas. OpenAPI components are also generated from Zod schemas.
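A rough sketch of that pattern; the user endpoint and schema fields are made up, the point is just the z.object / z.infer / parse combination:

```typescript
import { z } from "zod";

// Hypothetical response schema -- the single source of truth for both
// runtime validation and the static type.
const UserResponse = z.object({
  id: z.string(),
  name: z.string(),
  // Zod is explicit about null vs. undefined:
  // nullable() allows null, optional() allows the key to be missing.
  avatarUrl: z.string().nullable(),
  lastLoginAt: z.string().optional(),
});

// Static type inferred from the schema -- no hand-written interface to keep in sync.
type UserResponse = z.infer<typeof UserResponse>;

async function fetchUser(id: string): Promise<UserResponse> {
  const res = await fetch(`/api/users/${id}`);
  // parse() throws if the payload drifts from the schema.
  return UserResponse.parse(await res.json());
}
```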
There are some performance issues, though, and WebStorm struggles with it, which is what forced me over to VS Code.
That is an interesting angle to look at it from. If they're going to keep pushing this, they end up with a strong incentive to make the iPhone even more energy efficient, since users have come to expect good and always-improving battery life.
At the end of the day, though, AI workloads in the cloud will always be a lot more compute efficient, which means a lower combined footprint. However, in the server-based model there is more incentive to pre-compute things (wasting inference) to make them appear snappy on device. An analogy would be all the energy spent encoding YouTube videos that never get watched, although for budgeting purposes that counts as "idle" resources.
I've been getting a lot of ads for a product with a similar premise ("AI-first Code Reviewer"): CodeRabbit.ai. Can you help me understand how this product compares?
I see that the handful of Ellipsis "buddies" here are upvoting your post. :)
CodeRabbit employees wouldn't usually be commenting here to spoil your "moment," but this reply is entirely wrong on so many levels. The fact is that CR is much further along in both traction (several hundred paying customers and thousands of GitHub app installs) and product quality. Most of the CR clones are just copying the CR UX (and its OSS prompts) at this point, including Ellipsis. The chat feature at CR is also pretty advanced - it even comes with a sandbox environment for executing AI-generated shell commands that help it dig deep into the codebase.
Again, I am sorry that we had to push back on this reply; we usually don't respond to competitors, but this statement was plainly wrong, so we had to flag it.
I feel like Dice.fm already solved this. You can buy an untransferable ticket, and if you can't go, you simply return it to a wait-list of people who have signed up. You get your money back, and they pay the same price. Maybe there are some transaction fees involved, but overall this eliminates the ability for someone to buy just to resell, doesn't it?
Question related to the Chinchilla paper[0], which says that the optimal amounts of training data for ~500B, 1T, and 10T param models are 11T, 21.2T, and 216.2T tokens, respectively. The PaLM paper[1] says it made use of 700B tokens.
How many tokens of training data have humans produced across the entire internet, all our written works, etc? Is there such a thing as a 216 trillion token set?
Humans produce an astonishing amount of text if you consider all the source code, research data, social media websites, emails, etc., and project out a decade or two; there are also multimodal and RL sources of 'tokens' to consider, like visual tokens, which offer ~infinite data. Text is great, but there is no reason to train only on text. It's just a good starting point.
But the real question you should be asking is: where would you get the compute to train a model that needs 216T tokens?
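For a sense of scale, here's a rough back-of-envelope using the common C ≈ 6·N·D approximation for training FLOPs (N = parameters, D = tokens); the 1 exaFLOP/s cluster is just an illustrative assumption:

```typescript
// Rough training-compute estimate via C ≈ 6 * N * D.
const params = 10e12;     // 10T-parameter model
const tokens = 216.2e12;  // Chinchilla-optimal token count from the paper
const flops = 6 * params * tokens;

console.log(flops.toExponential(2)); // ~1.30e+28 FLOPs

// Assume a cluster sustaining 1 exaFLOP/s (1e18 FLOP/s):
const seconds = flops / 1e18;
console.log((seconds / (3600 * 24 * 365)).toFixed(0)); // ~411 years
```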
No, because I think it would be rejected under the rule that the user has to own the host machine and it has to be on the LAN...
See App Store guidelines section 4.2.7:
"The app must only connect to a user-owned host device that is a personal computer or dedicated game console owned by the user, and both the host device and client must be connected on a local and LAN-based network."
So, they can transfer "ownership" to me whenever I use my personal computer that happens to be located in Google's cloud, which, through a series of tubes, is in fact connected to my LAN-based network.
With Plex, I can watch movies I didn't buy from Apple. With Kindle, I can read books I didn't buy through Apple. Hell, again, with Steam Link I can play games I didn't buy through Apple.
This is just pure monopolistic behavior, nothing less.
I hope the excellent open source Moonlight doesn't get banned. It reverse engineered the Nvidia Shield (now a TV box a la Roku) features and allows really high quality, low latency gaming from your devices if you have an Nvidia video card. What's even better is that it works without any outside internet required, just your LAN, which is a rare thing these days.
Nope. This is about preventing apps from being distributed through what amounts to an alternative App Store, where Apple cannot review the content against its non-technical guidelines. Remote screen control apps and things like Steam Link are different because the content comes from the user's own hardware and data, not from the creator of the app.
And I really don't see how there's any sort of difference whether the game console that's streaming the game is owned by the person or sits in some datacenter somewhere. It's practically the same experience from the user's perspective.
Disclaimer: MSFT employee, not in Xbox, all views are my own, etc.
Steam Link wasn't different; they spent a year being rejected from the App Store and eventually removed all store functionality when streaming your desktop.
I found this article very fascinating. I'm sad to admit that I know very little about the large number of thinkers from that time, though. If anyone could point me in the direction of some literature that would introduce me to all this, that would be greatly appreciated!
I'd recommend a couple of classics, both accessible to non-specialists:
The Great Chain of Being is a nice overview of some of the main themes of ancient metaphysics and their later influence.[1] It ranges far beyond the ancient world, but it does as good a job as anything can of showing how ancient theories that might easily seem conceptually alien could in fact have been rational.
Shame and Necessity is about what you might call the ethical mindset of the ancient world.[2] The general aim is to explore how the ways Greeks and Romans engaged with moral questions systematically differed from what ethics would become after the advent of Christianity. (I can't praise this book enough. Williams was insanely erudite and analytically sharp.)
There's a nice podcast[1] by King's College London about philosophy. It doesn't go deep on any particular philosopher, bar Plato, but it will give you an overview of ancient and modern philosophy. Then you can decide where you'd like to dig deeper.
My layman's suggestion is that, rather than trying to read the ancients, you start with how ideas from them and from more recent philosophers figure in today's debates. This is akin to diving into using a programming language with little understanding rather than reading books about it; it's more fun!