Kindles have behaved like that from the very beginning.
The very newest generation of Kindle changed the storage protocol from traditional mass storage (which was compatible with everything) to MTP, which is mildly annoying for Mac users, but it is still intended to just show up as a flash drive.
That does appear to equal exactly 5... would you care to show how it doesn't?
$ cat check_math.c
#include <stdio.h>

int main() {
    // Define the values as float (32-bit floating point)
    float one_third = 1.0f / 3.0f;
    float five = 5.0f;

    // Compute the equation
    float result = one_third + five - one_third;

    // Check for exact equality
    if (result == five) {
        printf("The equation evaluates EXACTLY to 5.0 (True)\n");
    } else {
        // Print the actual result and the difference
        printf("The equation does NOT evaluate exactly to 5.0 (False)\n");
        printf("Computed result: %.10f\n", result);
        printf("Difference: %.10f\n", result - five);
    }
    return 0;
}
$ gcc -O0 check_math.c -o check_math; ./check_math
The equation evaluates EXACTLY to 5.0 (True)
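For what it's worth, the intermediate values do get rounded here; the equality holds because the final subtraction rounds the result back to exactly 5.0, cancelling the addition's rounding error. A minimal sketch of the same check in Go, printing the intermediates (any IEEE-754 float32 implementation behaves identically):

package main

import "fmt"

func main() {
	oneThird := float32(1.0) / 3.0 // 1/3 is not representable; rounds to 0.3333333433
	sum := oneThird + 5.0          // the exact sum is not representable either; rounds up to 5.3333334923
	result := sum - oneThird       // exact difference is 5.0000001490..., within half an ulp of 5.0, so it rounds back to exactly 5.0

	fmt.Printf("one_third = %.10f\n", oneThird) // 0.3333333433
	fmt.Printf("sum       = %.10f\n", sum)      // 5.3333334923
	fmt.Printf("result    = %.10f\n", result)   // 5.0000000000
	fmt.Println(result == 5.0)                  // true
}

The cancellation depends on the relative magnitudes of the operands, so exact identities like this shouldn't be relied on in general.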
The link at the end is both shortened (for tracking purposes?) and unclickable… so that’s unfortunate. Here is the real link to the paper, in a clickable format: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037
Thanks for pointing that out. It should be fixed now. The shortening was done by the editor I was using to draft the tweets ("Buffer"); I wasn't intending to track anyone, but it probably does provide some means of seeing how many people clicked the link.
"ChatGPT" can mean many things... If you meant the free 4o-mini model, then yes, this outcome is not surprising, since it is bad at basically anything related to code.
If you meant the more powerful o1 or o3-mini models (great naming, openai...), then that outcome would be surprising.
If anyone is interested, the release of Zeta inspired me to write up a blog post this afternoon about LLM tab completions past and future: https://news.ycombinator.com/item?id=43053094
The release of Zed’s Zeta earlier today made me think more deeply about what I think LLM tab completion systems could be. It has been a long time since I’ve written a proper blog post. Hopefully these thoughts are interesting to someone!
Out of the 55,000 people who have starred the repo (and countless others who have downloaded Zed without starring the repo), only 184 people have upvoted that issue. In any project, issues have to be triaged. If someone contributed a fix, the Zed team would likely be interested in merging that... the current attempt does not seem to have fixed it to the satisfaction of the commenters. To put priorities into perspective, issue 7992 appears to be in about 20th place on the list of most-upvoted open issues on the tracker.
I think the takeaway here is not that everyone involved with Zed thinks AI should be prioritized over essential features, but that most developers either don't care that much about font rendering or (more likely) have high-DPI monitors these days, so this particular bug is a non-issue for them... otherwise more developers would have upvoted the issue.
I have one low-DPI monitor at home, so I am curious to see this issue for myself. If it looks bad when I get back from vacation in a little over a week, maybe I'll add a thumbs-up to that issue, but low-DPI font rendering isn't the reason I haven't switched to Zed. I haven't switched to Zed because of the reasons mentioned here: https://news.ycombinator.com/item?id=42818890
If those issues were resolved, I would probably just use Zed on high DPI monitors.
So, yes, for me, certain missing "AI"-related features are currently blocking me from switching to Zed. On the other hand, the community is upvoting plenty of non-AI things more than this particular font rendering bug. Unsurprisingly, different people have different priorities.
1. If I make a change and then undo it, so that the change was never actually made, it still seems to be in the edit history passed to the model, so the model keeps trying to predict that change again. This felt too aggressive... maybe the very last edit should be forgotten if it is immediately undone. Or maybe only edits that still show up in the git diff should be kept... but perhaps that is too limiting.
2. It doesn't seem like the model is getting enough context. Ideally the editor would supply the model with type hints for the variables in the current scope and, based on those hints, pull some type definitions into the context as well. (I was testing this on a Go project.) As it is, the model was clearly doing the best it could with the information available, but it needed to be given more. Relatedly, I wonder if the prediction could be performed in a loop: when the model suggests some code, the editor could "apply" that change so that the language server can see it, and if the language server finds an error in the prediction, the model could be given the error and asked to predict again.
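To make that loop idea concrete, here is a rough sketch of the shape it could take, written in Go since that's what I was testing on. Every type here (Model, Buffer, LanguageServer) is a hypothetical stand-in, not anything from Zed's actual codebase:

package editor

import "errors"

// Hypothetical stand-ins for the editor's model client, buffer, and
// language server connection; none of these are real Zed APIs.
type Edit struct{ Patch string }
type Diagnostic struct{ Message string }

type Model interface {
	// Predict takes the editing context plus diagnostics from any
	// previous failed attempt, and proposes an edit.
	Predict(context string, feedback []Diagnostic) (Edit, error)
}

type Buffer interface {
	Context() string      // edit history, surrounding code, type hints, ...
	WithEdit(Edit) Buffer // scratch copy with the edit applied
}

type LanguageServer interface {
	Diagnostics(Buffer) []Diagnostic
}

func predictWithRepair(model Model, buf Buffer, lsp LanguageServer) (Edit, error) {
	var feedback []Diagnostic
	for attempt := 0; attempt < 3; attempt++ {
		edit, err := model.Predict(buf.Context(), feedback)
		if err != nil {
			return Edit{}, err
		}
		// "Apply" the prediction to a scratch copy so the language
		// server can check it without touching the user's buffer.
		if diags := lsp.Diagnostics(buf.WithEdit(edit)); len(diags) > 0 {
			feedback = diags // hand the errors back and ask again
			continue
		}
		return edit, nil // clean prediction; surface it to the user
	}
	return Edit{}, errors.New("no clean prediction within the retry budget")
}

The obvious cost is latency: each retry is another model round trip plus a language-server check, so in practice the loop would need a tight attempt budget like the one above.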
Based on the blog post, this appears to be hosted remotely on Baseten. The model just happens to be released openly, so you can also download it, but the blog post doesn't mention any intention to help you run it locally within the editor. (I agree that would be cool; I'm just commenting on what I see in the article.)
On the other hand, network latency itself isn't really that big of a deal... a more powerful GPU server in the cloud can typically run so much faster that it can make up for the added network latency and then some. Running locally is really about privacy and offline use cases, not performance, in my opinion.
If you want to try local tab completions, the Continue plugin for VSCode is a good way to try that, but the Zeta model is the first open model that I'm aware of that is more advanced than just FIM.
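For anyone unfamiliar, "just FIM" (fill-in-the-middle) means the model sees a single prompt with the text before and after the cursor wrapped in sentinel tokens, and nothing else. A sketch in Go, using CodeGemma's token spelling (other FIM models, like StarCoder or DeepSeek, use different sentinels):

package main

import "fmt"

// buildFIMPrompt assembles a plain fill-in-the-middle prompt: the text
// before and after the cursor wrapped in the model's sentinel tokens.
func buildFIMPrompt(before, after string) string {
	return "<|fim_prefix|>" + before + "<|fim_suffix|>" + after + "<|fim_middle|>"
}

func main() {
	prompt := buildFIMPrompt("func add(a, b int) int {\n\treturn ", "\n}")
	fmt.Println(prompt)
	// The completion is whatever the model generates after the
	// <|fim_middle|> token, e.g. "a + b". What makes Zeta more than
	// plain FIM is that it is also shown recent edit history.
}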
I'm stuck using somewhat unreliable Starlink to a datacenter ~90ms away, but I can run 7b models fine locally. I agree though: cloud completions aren't unusably slow or unreliable for me; it's mostly about privacy and it being really fun.
I tried Continue a few times but could never get consistent results; the models were just too dumb. That's why I'm excited about this model: it seems like a better approach to inline completion and might be the first okay enough™ model for me. Either way, I don't think I can replace Copilot until a model can automatically fine-tune itself in the background on the code I've written.
> Either way, I don't think I can replace Copilot until a model can automatically fine-tune itself in the background on the code I've written
I don't think Copilot does this... it's really just a matter of the editor plug-in being smart enough to grab all of the relevant context and provide that to the model making the completions; a form of RAG. I believe organizations can pay to fine-tune Copilot, but it sounds more involved than something that happens automatically.
Depending on when you tried Continue last, one would hope that their RAG pipeline has improved over time. I tried it a few months ago and I thought codegemma-2b (base) acting as a code completion model was fine... certainly not as good as what I've experienced with Cursor. I haven't tried GitHub Copilot in over a year... I really should try it again and see how it is these days.
> I'm aware of the article's context but that just raises further questions. Why invest much effort, as a developer, or as a vendor, in a version of WASM that doesn't even let you run client side? It's carving an ever smaller niche.
Because of the value it can deliver server-side, and that's where most of the value tends to be.
Server-side compute is the core of most companies' revenue streams, yet it really is bloating out of control. Think about how much money is wasted on build pipelines, artifact storage, giant image distribution, multi-tenant workload isolation, and supply chain risk mitigation; think about how expensive cloud infrastructure is, and what a substantial share of it goes to all of the above. With the way WASM was designed, it has the potential to completely upend all of it: tiny binaries, sandboxed runtimes, tight multi-tenancy, instant scaling, clearly defined contracts, language-agnostic microservices. It's a completely different world.
The potential for WASM in enterprise compute is immense - especially with the recent developments in the component model and WASI. We're talking about orders of magnitude improvements here.
Do those "more mature" options support architecture-independent executables written in Rust or Go, so that a company doesn't need to rewrite their existing code in Java?
Containers are a much heavier way to implement a plugin system, since you also need to define some kind of RPC interface, and they're not architecture-independent unless you require users to build every container for every architecture. Containers generally aren't a security boundary either, though you can wrap them in something like Firecracker to help with that. (I believe plugins were an essential part of the context, based on the post we're commenting on, so it's important to evaluate these options against that.)
LLVM IR as it is actually generated is also not architecture independent, and not a security boundary, making it a poor way to do a plugin system. Definitely not more mature for this type of stuff.
.NET MSIL is probably a better fit than the other two options you provided, but not a good one... I don't think Go or Rust compile to MSIL, and MSIL probably isn't a very good security boundary anyways.
I know from past discussions that you don't like WASM. I think you're overly dismissive of it. WASM has been around long enough now that it is a fairly mature system. I haven't personally needed it, but that's simply a comment on my own work experience, not the usefulness of the technology for specific use cases... and I can easily see why people are passionate about WASM. It's not NIH syndrome.
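To make the plugin use case concrete, here is roughly what embedding a WASM plugin looks like from a Go host using wazero (a pure-Go runtime; wasmtime and wasmer offer similar embeddings). The plugin.wasm file and its exported add function are placeholders for whatever the guest, compiled from Rust, Go, C, or anything else that targets WASM, actually provides:

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/tetratelabs/wazero"
)

func main() {
	ctx := context.Background()

	// plugin.wasm is a placeholder: any module exporting an "add"
	// function works, and the same binary runs on any host CPU.
	wasmBytes, err := os.ReadFile("plugin.wasm")
	if err != nil {
		panic(err)
	}

	// The runtime is the sandbox: the guest gets no filesystem or
	// network access unless the host explicitly wires it in.
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	mod, err := r.Instantiate(ctx, wasmBytes)
	if err != nil {
		panic(err)
	}

	// Parameters and results cross the boundary as uint64-encoded
	// WASM values.
	results, err := mod.ExportedFunction("add").Call(ctx, 2, 3)
	if err != nil {
		panic(err)
	}
	fmt.Println("2 + 3 =", results[0])
}

That sandboxed-by-default behavior is exactly the security boundary the container comparison above is missing.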
If you want a plug-in system, maybe don't pick a language with static linking and a half-baked plugin implementation in the first place.
WebAssembly outside of the browser is a solution looking for a problem that has been solved multiple times ever since the idea of bytecode-based execution came about, in 1958 to be precise.
Even in the browser its use, beyond bringing back the old plugins, is debatable; for number crunching, GPU compute is much better, with much better tooling.
Yes… I’m aware of the history. I’m also aware that WASM solves problems that those previous ones didn’t. Otherwise you would have provided an option that actually met the requirements, if there are so many to choose from.
> If you want a plug-in system, maybe don't pick a language with static linking
Or… a plugin system could “just work”, without placing unnecessary restrictions on what I do.
The right language for a plugin system is the one that attracts the most plugin writers. Supporting many languages is a huge boon for wasm in this department. It's one of the things that makes the JVM and .NET so appealing in the first place, but WASM is better than both combined when it comes to language selection.
It is very popular in the gamedev space, where C# is used as a high-performance scripting language.
You are right that this is low-level, though low-level scenarios are not as niche for the .NET platform as they are for other languages in this category.