Before entering the field of ML, I pictured an MLOps engineer as a superhero with special abilities to handle and deploy ML models. In reality, MLOps seems to be more or less a typical engineer who picked up the skills to manage and deploy data infrastructure for ML purposes, largely through exposure to data engineering.
Frankly, there's a kernel of truth here. We've all worked with that one engineer who seems to have a knack for creating chaos and confusion. But let's not forget, it's also important to acknowledge the value of a good -1x engineer - they're the ones who make us appreciate the +1x engineers even more!
Is it worth hosting this on an EC2 instance, which might cost ~$1.50 per hour (on demand), rather than calling the GPT-3.5 API for this purpose?
What is the breakeven number of queries (~2,000 tokens/query) that would justify hosting such a model?
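Back-of-the-envelope math, assuming GPT-3.5 API pricing of roughly $0.002 per 1K tokens (an assumption - verify current rates) against the ~$1.50/hour on-demand EC2 figure above:

```python
ec2_per_hour = 1.50          # on-demand EC2 cost, $/hour (figure from the comment)
api_per_1k_tokens = 0.002    # assumed GPT-3.5 API price, $/1K tokens -- check current pricing
tokens_per_query = 2000

# Cost of one query via the API
api_cost_per_query = tokens_per_query / 1000 * api_per_1k_tokens  # $0.004/query

# Queries per hour needed for the API bill to match the EC2 bill
breakeven_queries_per_hour = ec2_per_hour / api_cost_per_query

print(breakeven_queries_per_hour)  # 375.0
```

So under those assumed prices you'd need to sustain roughly 375 queries/hour before self-hosting breaks even on compute alone, ignoring ops effort and idle time.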
You should check out how bacteria navigate mazes. They can move through complex environments via chemotaxis and swarming: bacteria sense chemical gradients and adjust their movement accordingly to find the optimal path through a maze. Their ability to communicate and coordinate their movements to solve even more complex mazes is hella impressive.
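The gradient-following part can be sketched with a toy "run and tumble" model (my own illustrative simulation, not taken from any paper): keep moving while the chemical concentration improves, and tumble to a random heading when it drops.

```python
import math
import random

def concentration(x, y, source=(10.0, 10.0)):
    """Toy chemical field: higher (less negative) closer to the source."""
    return -math.hypot(x - source[0], y - source[1])

def run_and_tumble(steps=2000, step_len=0.1, seed=0):
    """Simulate one bacterium: run while improving, tumble when worsening."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    heading = rng.uniform(0.0, 2.0 * math.pi)
    last = concentration(x, y)
    for _ in range(steps):
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        now = concentration(x, y)
        if now < last:  # gradient got worse -> tumble to a random direction
            heading = rng.uniform(0.0, 2.0 * math.pi)
        last = now
    return x, y
```

Even this memory-of-one-step rule drifts the walker toward the source; real bacteria add adaptation and cell-to-cell signaling on top.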
I've been a Mullvad user for a while now, and I have to say, their commitment to open source is truly impressive. They're living that philosophy by making their VPN client open source. Tor Browser combined with the security of a trusted VPN would be a great alternative.
It's an alternative implementation that uses interpolated table lookup and requires only 66 bytes of constant memory, with an error of less than 0.05% compared to the floating-point sine.
The (presumably relative) error of 0.05% implies only ~11 significant bits (5×10⁻⁴ ≈ 2⁻¹¹), while 32-bit and 64-bit floating-point formats have 24 and 53 significant bits respectively. If your sine has only 11 correct bits and you're okay with that, why carry much more than 11 bits of input precision in the first place?
EDIT: Someone (sorry, the comment was removed before I could cite one) mentioned that it is actually for Q1.15 fixed point. So it was not even an "alternative" implementation...
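For intuition, here's a minimal float sketch of the technique being discussed - linear interpolation over a 33-entry quarter-wave table, which would be 66 bytes if each entry were stored as a 16-bit fixed-point value. This is my reconstruction for illustration, not the linked implementation:

```python
import math

# Quarter-wave table: 33 samples of sin over [0, pi/2].
# At 16 bits per entry this would occupy 33 * 2 = 66 bytes.
N = 33
TABLE = [math.sin(i * (math.pi / 2) / (N - 1)) for i in range(N)]

def sin_lut(x):
    """Sine via linear interpolation on a quarter-wave lookup table."""
    x = x % (2.0 * math.pi)        # reduce to one period
    sign = 1.0
    if x >= math.pi:               # second half-period: sine is negative
        sign, x = -1.0, x - math.pi
    if x > math.pi / 2:            # fold [pi/2, pi] back onto [0, pi/2]
        x = math.pi - x
    pos = x / (math.pi / 2) * (N - 1)
    i = int(pos)
    if i >= N - 1:                 # exactly at the top of the quarter wave
        return sign * TABLE[N - 1]
    frac = pos - i
    return sign * (TABLE[i] + frac * (TABLE[i + 1] - TABLE[i]))
```

With 32 intervals the worst-case linear-interpolation error is about h²/8 ≈ 3×10⁻⁴ (h ≈ 0.049), consistent with the "<0.05%" claim - and with the ~11-bit criticism above.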
This is actually impressive. But I feel it's very subjective to come up with a product design that's coherent with the rest of the website. I can see this being useful if I'm building some sort of MVP.
Also, just curious: is there a way to import our existing wireframes, maybe from Sketch or Adobe XD?
Love this. A few things we could add:
- Search Feature
- Way to import/export chats
- Star/Favourite replies by ChatGPT
- For GPT-4, provide 8k/32k model variants
- Prompt Dictionary