Hacker News | coder543's comments

Kindles have literally always behaved like that, from the beginning.

The very newest generation of Kindle changed the storage protocol from traditional mass storage (which was compatible with everything) to MTP, which is mildly annoying for Mac users, but it is still intended to just show up as a flash drive.


That does appear to equal exactly 5... would you care to show how it doesn't?

    $ cat check_math.c
    #include <stdio.h>

    int main() {
        // Define the values as float (32-bit floating point)
        float one_third = 1.0f / 3.0f;
        float five = 5.0f;

        // Compute the equation
        float result = one_third + five - one_third;

        // Check for exact equality
        if (result == five) {
            printf("The equation evaluates EXACTLY to 5.0 (True)\n");
        } else {
            // Print the actual result and the difference
            printf("The equation does NOT evaluate exactly to 5.0 (False)\n");
            printf("Computed result: %.10f\n", result);
            printf("Difference: %.10f\n", result - five);
        }

        return 0;
    }

    $ gcc -O0 check_math.c -o check_math; ./check_math
    The equation evaluates EXACTLY to 5.0 (True)


Okay... that was just an example (and a false one, apparently).

It's easy enough to find an example where typical FP operations don't work out:

https://godbolt.org/z/Mr4Ez8xz1


The link at the end is both shortened (for tracking purposes?) and unclickable… so that’s unfortunate. Here is the real link to the paper, in a clickable format: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037


Thanks for pointing that out. It should be fixed now. The shortening was done by the editor I was using ("Buffer") to draft the tweets in. I wasn't intending to track anyone, but it probably does provide some means of seeing how many people clicked the link.


"ChatGPT" can mean many things... If you meant the free 4o-mini model, then yes, this outcome is not surprising, since it is bad at basically anything related to code.

If you meant the more powerful o1 or o3-mini models (great naming, openai...), then that outcome would be surprising.


O1 Reasoning. You know, the feature you used to have to pay $200/mo for.


o1 never cost $200/mo. That was o1-pro, which still isn’t available to Plus or Free users.


Whatever it is, it's the latest model, and the hype is just not real.


If anyone is interested, the release of Zeta inspired me to write up a blog post this afternoon about LLM tab completions past and future: https://news.ycombinator.com/item?id=43053094


The release of Zed’s Zeta earlier today made me think more deeply about what I think LLM tab completion systems could be. It has been a long time since I’ve written a proper blog post. Hopefully these thoughts are interesting to someone!


It doesn't seem like the issue has been entirely ignored: https://github.com/zed-industries/zed/issues/7992#issuecomme...

Out of the 55,000 people who have starred the repo (and countless others who have downloaded Zed without starring the repo), only 184 people have upvoted that issue. In any project, issues have to be triaged. If someone contributed a fix, the Zed team would likely be interested in merging that... the current attempt does not seem to have fixed it to the satisfaction of the commenters. To put priorities into perspective, issue 7992 appears to be in about 20th place on the list of most-upvoted open issues on the tracker.


If font rendering in a text editor is not a priority, I wonder what is. It seems to be AI.


And hiding the mouse cursor so you can actually see what you're editing.


¯\_(ツ)_/¯ You can also sort the issues and see for yourself what the community thinks should be a priority: https://github.com/zed-industries/zed/issues?q=is%3Aissue%20...

I think the takeaway here is not that everyone related to Zed thinks AI should be prioritized over essential features, but that either most developers don't care that much about font rendering or (more likely) most developers have high DPI monitors these days, so this particular bug is just a non-issue for most developers... or else more developers would have upvoted this issue.

I have one low-DPI monitor at home, so I am curious to see this issue for myself. If it looks bad when I get back from vacation in a little over a week, maybe I'll add a thumbs-up to that issue, but low-DPI font rendering isn't the reason I haven't switched to Zed. I haven't switched to Zed because of the reasons mentioned here: https://news.ycombinator.com/item?id=42818890

If those issues were resolved, I would probably just use Zed on high DPI monitors.

So, yes, for me, certain missing "AI"-related features are currently blocking me from switching to Zed. On the other hand, the community is upvoting plenty of non-AI things more than this particular font rendering bug. Unsurprisingly, different people have different priorities.


Two immediate issues that I noticed:

1. If I make a change, then undo, so that change was never made, it still seems to be in the edit history passed to the model, so the model is interested in predicting that change again. This felt too aggressive... maybe the very last edit should be forgotten if it is immediately undone. Maybe only edits that exist against the git diff should be kept... but perhaps that is too limiting.

2. It doesn't seem like the model is getting enough context. The editor would ideally be supplying the model with type hints for the variables in the current context, and based on those type hints being put into the context, it would also pull in some type definitions. (I was testing this on a Go project.) As it is, the model was clearly doing the best it could with the information available, but it needed to be given more information. Related, I wonder if the prediction could be performed in a loop. When the model suggests some code, the editor could "apply" that change so that the language server can see it, and if the language server finds an error in the prediction, the model could be given the error and asked to make another prediction.


Based on the blogpost, this appears to be hosted remotely on baseten. The model just happens to be released openly, so you can also download it, but the blogpost doesn't talk about any intention to help you run it locally within the editor. (I agree that would be cool, I'm just commenting on what I see in the article.)

On the other hand, network latency itself isn't really that big of a deal... a more powerful GPU server in the cloud can typically run so much faster that it can make up for the added network latency and then some. Running locally is really about privacy and offline use cases, not performance, in my opinion.

If you want to try local tab completions, the Continue plugin for VSCode is a good way to try that, but the Zeta model is the first open model I'm aware of that goes beyond plain FIM (fill-in-the-middle) completion.


I'm stuck using somewhat unreliable Starlink to a datacenter ~90ms away, but I can run 7b models fine locally. I agree, though: cloud completions aren't unusably slow/unreliable for me; it's mostly about privacy and it being really fun.

I tried Continue a few times, but I could never get consistent results; the models were just too dumb. That's why I'm excited about this model: it seems like a better approach to inline completion and might be the first okay enough™ model for me. Either way, I don't think I can replace Copilot until a model can automatically fine-tune itself in the background on the code I've written.


> Either way, I don't think I can replace Copilot until a model can automatically fine-tune itself in the background on the code I've written

I don't think Copilot does this... it's really just a matter of the editor plug-in being smart enough to grab all of the relevant context and provide that to the model making the completions; a form of RAG. I believe organizations can pay to fine-tune Copilot, but it sounds more involved than something that happens automatically.

Depending on when you tried Continue last, one would hope that their RAG pipeline has improved over time. I tried it a few months ago and I thought codegemma-2b (base) acting as a code completion model was fine... certainly not as good as what I've experienced with Cursor. I haven't tried GitHub Copilot in over a year... I really should try it again and see how it is these days.


> Which will always be the best DX for writing code in the browser.

The article we're all commenting on is not about running WASM in the browser.

WASI in particular may never be supported by browsers.


I'm aware of the article's context but am asking the broader question.


> I'm aware of the article's context but that just raises further questions. Why invest much effort, as a developer, or as a vendor, in a version of WASM that doesn't even let you run client side? It's carving an ever smaller niche.

Because of the value it can deliver server-side, and that's where most of the value tends to be.

Server-side compute is the core of most companies' revenue streams, yet it really is bloating out of control. Think about how much money is wasted on build pipelines, artifact storage, giant image distribution, multi-tenant workload isolation, and supply chain risk mitigation; how expensive cloud infrastructure is; and what a substantial share of it is spent on all of those. With the way WASM was designed, it has the potential to completely upend all of it: tiny binaries, sandboxed runtimes, tight multi-tenancy, instant scaling, clearly defined contracts, language-agnostic microservices. It's a completely different world.

The potential for WASM in enterprise compute is immense - especially with the recent developments in the component model and WASI. We're talking about orders of magnitude improvements here.


Just wait until they figure out WASM Application Servers, using serialised WebAssembly for server to server messages, now that would be an idea.


They could call it wRPC or something


Funnily enough, that's what [wRPC](https://github.com/bytecodealliance/wrpc) is designed to do, using the small and efficient [Component Model Value Encoding](https://github.com/WebAssembly/component-model/blob/main/des...), largely based on the [Core Wasm spec](https://webassembly.github.io/spec/core/).

For example, here's a Web App using the [`wasi:keyvalue` interface](https://github.com/WebAssembly/wasi-keyvalue/) via WebTransport using wRPC: https://github.com/bytecodealliance/wrpc/tree/8e9de3b446ac05...


And in that regard, there are more mature options out there than reinventing the wheel with WebAssembly.


Do those "more mature" options support architecture-independent executables written in Rust or Go, so that a company doesn't need to rewrite their existing code in Java?


Well, it depends on whether they target LLVM IR or .NET MSIL.

https://www.graalvm.org/latest/reference-manual/llvm/

https://github.com/FractalFir/rustc_codegen_clr

Also, ever heard of containers?

They have this magic feature: you don't need to rewrite anything to run on servers.


Containers are a much heavier way to implement a plug-in system, since you will also need to define an RPC of some kind, and they're not architecture-independent unless you require the users to build every container for every architecture. Containers generally aren't a security boundary, but you can wrap them in something like firecracker to help with that. (I believe plugins were an essential part of the context, based on the post we're commenting on, so it is important to evaluate these options against that.)

LLVM IR as it is actually generated is also not architecture independent, and not a security boundary, making it a poor way to do a plugin system. Definitely not more mature for this type of stuff.

.NET MSIL is probably a better fit than the other two options you provided, but not a good one... I don't think Go or Rust compile to MSIL, and MSIL probably isn't a very good security boundary anyways.

I know from past discussions that you don't like WASM. I think you're overly dismissive of it. WASM has been around long enough now that it is a fairly mature system. I haven't personally needed it, but that's simply a comment on my own work experience, not the usefulness of the technology for specific use cases... and I can easily see why people are passionate about WASM. It's not NIH syndrome.


If you want a plug-in system, maybe don't pick a language with static linking and a half-baked plugin implementation in the first place.

WebAssembly outside of the browser is a solution looking for a problem that has been sorted out multiple times since the idea of bytecode-based execution first appeared, in 1958 to be more precise.

Even in the browser its use is debatable, beyond bringing back the old plugins; for number crunching, GPU compute is much better, with much better tooling.


How do I prevent my statically linked plugins from having any filesystem access, or a bug messing with memory it doesn’t control once it’s loaded?


OS IPC and security configuration, no need to add a WebAssembly runtime and compiler toolchain.


So: slow, hard, complex, and highly OS-specific vs. simple and secure.

Right.


Because WASM tooling is so much better, thank goodness for emscripten.


Don’t argue in bad faith, especially if you’ve got nothing substantial to add to the discussion or any real points to make.


Yes… I’m aware of the history. I’m also aware that WASM solves problems that those previous ones didn’t. Otherwise you would have provided an option that actually met the requirements, if there are so many to choose from.

> If you want a plugin-in system, maybe don't pick a language with static linking

Or… a plugin system could “just work”, without placing unnecessary restrictions on what I do.


It starts by choosing the right language for the job.


The right language for a plugin system is the one that attracts the most plugin writers. Supporting many languages is a huge boon for WASM in this department. It's one of the things that makes the JVM and .NET so appealing in the first place, but WASM is better than both combined when it comes to language selection.


That surely isn't any WebAssembly then, if we are counting adoption growth throughout computing history.


I'm not aware of any tech other than wasm that I could be using to implement decent-auth: https://github.com/lastlogin-net/decent-auth

Things like the JVM and .NET are great, but they're not designed to be embedded in other languages.


.NET does support embedding:

- As full runtime i.e. hostfxr https://learn.microsoft.com/en-us/dotnet/core/tutorials/netc... (supplementary: https://github.com/StudioCherno/Coral)

- As a dynamic library via DNNE with full runtime: https://github.com/AaronRobinsonMSFT/DNNE

- As either dynamic (easy) or static (hard) native library: https://github.com/dotnet/samples/tree/main/core/nativeaot/N...

There are a few community projects which build on top of these to provide a more seamless integration with other languages.


That's actually really cool, thanks for sharing. Does seem pretty low level and niche though.


It is very popular in the gamedev space when using C# as a high-performance scripting language.

You are right that this is low-level, though, but low-level scenarios are not as niche for the .NET platform as they are for other languages in this category.


You can embed JVM with JNI_CreateJavaVM(): https://docs.oracle.com/javase/7/docs/technotes/guides/jni/s...

It is used by projects like Postgres PL/Java, LibreOffice, and various native Java launchers/wrappers.


Dynamic libraries come to mind.


You have to recompile for every architecture and OS, and you get no security features.

