Hacker News | killcoder's comments

I work on a product for building user interfaces for hardware devices. All the state management is done via incrementally updated, differential dataflow systems. The interface is defined in code instead of graphically, but I think that's a feature: the code can be version controlled.

I think there has been evolution on the underlying data computation side of things, but there are still unsolved questions about the 'visibility' of graphical node-based approaches. A node-based editor is easy to write with, but hard to read with.


We've been using (the old) Shiki Twoslash for a few years now for our docs pages:

https://electricui.com/docs/components/LineChart

https://electricui.com/docs/operators/aggregations

Our product, Electric UI, is a series of tools for building user interfaces for hardware devices on desktop. It has a DataFlow streaming computation engine for data processing which leans heavily on TypeScript's generics. It's pretty awesome to be able to have examples in our docs that correctly show the types as they flow through the system. I certainly learn tools faster when they have good autocomplete in the IDE. Twoslash helps bring part of that experience earlier in the development process, right to when you're looking at documentation.
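As a small illustration of the kind of generic inference Twoslash surfaces on hover, here is a minimal pipeline sketch; the names (`Processor`, `pipeline`, `parse`, `scale`) are invented for this example and are not part of the Electric UI API:

```typescript
// A processor transforms one value type into another.
type Processor<In, Out> = (input: In) => Out;

// Composing two processors: the intermediate type B is inferred and
// threads through, so the result type is known without annotation.
function pipeline<A, B, C>(
  first: Processor<A, B>,
  second: Processor<B, C>,
): Processor<A, C> {
  return (input) => second(first(input));
}

const parse: Processor<string, number> = (s) => Number.parseFloat(s);
const scale: Processor<number, number> = (n) => n * 10;

// In a Twoslash-rendered snippet, hovering `parseAndScale` would show
// `Processor<string, number>` — the types flow through the generics.
const parseAndScale = pipeline(parse, scale);

console.log(parseAndScale("4.5")); // 45
```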

Our site is built with GatsbyJS, the docs are a series of MDX files rendered statically, then served via Cloudflare Pages. We use the remark plugins to statically render the syntax highlighting and hover tag information, then some client-side React to display the right tooltips on hover.

We build a Twoslash environment from a tagged commit of our built TypeScript definitions, from the perspective of our default template. The Twoslash snippets as a result have all the required context built in, given they are actual compiled pieces of code. The imports we display in the docs are the actual imports used when compiling from the perspective of a user. It bothers me when docs only give you some snippet of a deeply nested structure, and you don't know where to put it. Even worse when it's untyped JS! Using Twoslash lets us avoid that kind of thing systematically.

The CI system throws errors when our docs snippets "don't compile", which is very helpful in keeping our docs up to date with our product. Nothing worse than incorrect docs!

We use React components extensively, and I'm not really happy with our prop reference tables, which use Palantir's Documentalist. Our components are increasingly using complex TypeScript generics to represent behaviour. The benefits in the IDE are huge, but the relatively dumb display of type information in the prop table leaves something to be desired. I'm most likely going to replace the data for those tables with compile-time generated Twoslash queries.

My only complaints have been around absolute speed of compilation, but I haven't dug deep into improving that. I just set up a per snippet caching layer, and once files are cached, individual changes are refreshed quickly. After all, it's invoking the full TypeScript compiler, and that's its biggest feature.

Overall I've been very happy with Twoslash, and I'm looking forward to refactoring to use this successor and Shikiji (the ESM successor to Shiki), which will hopefully improve our performance. The new Twoslash docs look great, a huge improvement on when we started using it.


WebAssembly in something like wasmtime is probably the closest we've got to this. Though I wish WebAssembly supported more (wider) SIMD instructions.


There was a recent post by Voultapher from the sort-research-rs project on Branchless Lomuto Partitioning

https://github.com/Voultapher/sort-research-rs/blob/main/wri...

Discussion here:

https://news.ycombinator.com/item?id=38528452

This post by orlp (creator of Pattern-defeating Quicksort and Glidesort) was linked to in the above post, and I found both to be interesting.


We've been running TS + PnP + VSCode on macOS throughout the entire lifetime of Yarn, through versions 1 to 4.

The plugins system has been extremely valuable to us, and the hoisting / peer dependency behavior has been consistently correct, where other package managers have caused bugs.


Richard Sutton is now working with John Carmack at Keen Technologies.


Teddy is a SIMD accelerated multiple substring matching algorithm. There's a nice description of Teddy here: https://github.com/BurntSushi/aho-corasick/tree/f9d633f970bb...

It's used in the aho-corasick and regex crates. It now supports SIMD acceleration on aarch64 (including Apple's M1 and M2). There are some nice benchmarks included in the PR demonstrating 2-10x speedups for some searches!


https://archives.sonophase.com/music/1484849167

I'm searching for songs like Anotherclock by Parcels.

The first result is an ambient electronic song and the next is some reggae, neither of which feels very similar to Anotherclock to me.

How does it determine acoustic similarity?


It fetches the most acoustically similar tracks in the collection; similarity could be in terms of instrumentation, mood, texture, FX, harmony, or rhythm.

We don’t know the exact causes, since it uses continuous similarity in terms of multidimensional deep embeddings.

With Anotherclock, it sounds like it’s finding tracks that have similar acoustic percussion samples, rhythms and plucky guitar strums.
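The exact embedding model and distance metric are not public, but the "continuous similarity over multidimensional embeddings" idea can be illustrated with a toy nearest-neighbour search using cosine similarity (this sketch is not the production system):

```typescript
// Cosine similarity between two embedding vectors of equal length:
// 1.0 means identical direction, 0.0 means orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank every track in the library by similarity to the query embedding.
function mostSimilar(
  query: number[],
  library: Map<string, number[]>,
): string[] {
  return [...library.entries()]
    .sort(
      ([, a], [, b]) =>
        cosineSimilarity(query, b) - cosineSimilarity(query, a),
    )
    .map(([name]) => name);
}
```

A real system would use high-dimensional embeddings from a trained audio model and an approximate nearest-neighbour index rather than a full sort, but the ranking principle is the same.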


The discussion of the previous post is here:

https://news.ycombinator.com/item?id=37545040


We built a Power over Data Line (PoDL) compliant device and power supply as part of a one-month 'sink or swim' approach to designing and testing new hardware, and to get a look at the maturity of the 10Base-T1 ecosystem. The board was enclosed in a submersible sensor node and field tested at a popular dive reef, SCUBA diving down and mounting it to the jetty.

It was also a nice excuse to get some macro shots of the PCB assembly process, including some nice footage of solder paste melting and the interesting surface tension interactions.

(I can't seem to get the videos to render in a format that iOS Safari will play, if anyone knows the ffmpeg incantation, please let me know, nothing I've tried has worked on my iPhone...)


The amount of expertise that went into this 1 month project is crazy and it's all really cool and well put together.

I don't comprehend how you made no mistakes on the journey after drafting the PCBs and writing drivers. From my POV as a software developer, C has so many pitfalls that it is incomprehensible to me that things will Just Work, especially in the context of something that is meant to run for a very long time and not be "restarted."

Why do sensor things at all? What is the ROI for the person who needs that stuff? I mean this in no derogatory sense, I really admire this work.

But the academics who need something something hardware are either so rich they use something commercial / the paid core or so poor they'll use someone else's refuse or a grad student to do it 10x worse & 10x slower for free. Lab equipment, sensors, whatever.

If it's for an industrial purpose, the ultimate consumer for hardware 2 guys can make is the government, as far as the eye can see. Like the people who have a business stake in e.g. the ocean ecosystem are fishermen, oil people, shippers, whatever, and they're only doing this because of a government regulation or threat thereof or whatever. I view government needs as worthwhile, they are a worthy customer, it's that the ROI is essentially imaginary, it's whatever the payer values government compliance and that can be infinitely large or small.

My background in this is very limited, I didn't take "How to Make," I don't know how to use anything in a fablab, but in an intellectually honest way, the audience for "polished, well working gizmo with bug-free firmware" is 1,000,000x larger when it's a coffee machine than any academic or industrial purpose. Why not make "the perfect espresso machine" or "the perfect bike" or whatever? There are $3m Kickstarters for coffee machines whose #1 actual obstacle to successful execution is writing firmware. There are e-bikes that are 10x more expensive or 10x crappier because ultimately it's too challenging to make a single firmware and controller that makes disparate commodity parts work together cohesively.

I am not at all raining on this parade, because this little blog post was so mind numbingly impressive; and I'm not saying there aren't 10,000 people toiling on dead-on-arrival consumer hardware, be it Oculus peripherals or connected emotive robots or whole divisions at Google. My question is: why? Why not, with your skills, make a thing and fucking sell it?


> I don't comprehend how you made no mistakes on the journey after drafting the PCBs and writing drivers. From my POV as a software developer, C has so many pitfalls that it is incomprehensible to me that things will Just Work, especially in the context of something that is meant to run for a very long time and not be "restarted."

Process, design and architecture play a larger role in the bug count than language choice.

I wrote munitions control software in C; many of the systems where failure would cause loss of human life have been written in C for decades.

The recent meme that "if it's written in C it must be unreliable" is inaccurate - all the most reliable systems, for decades, were written in C.


C is so difficult that you aren’t going to get something that passes a cursory inspection without good process, design and architecture. I strongly suspect that’s why it’s common for C software to be quite reliable.


Not OP but

> I don't comprehend how you made no mistakes on the journey after drafting the PCBs and writing drivers. From my POV as a software developer, C has so many pitfalls that it is incomprehensible to me that things will Just Work, especially in the context of something that is meant to run for a very long time and not be "restarted."

You aren't meant to make no mistakes, just only make recoverable mistakes. In a lot of cases you can rely on your hardware for this. Watchdog Timers are specifically intended for this. You set up a watchdog when you deploy the device and your software has to periodically "pet" the watchdog or the system triggers some action. In practice this is used to verify that the software never gets stuck or else it triggers a recovery/restart sequence and maybe sends out an alert. The end goal shouldn't be bug free but "even with bugs it eventually recovers and keeps working unless the hardware physically dies".

> Why do sensor things at all? What is the ROI for the person who needs that stuff? I mean this in no derogatory sense, I really admire this work.

Once again not the OP but I could see this being useful. They are recording wave patterns on or around a reef. That could be used for modelling how reefs can buffer water conditions (ex: for the purpose of constructing man made analogues) or as part of a greater sensor suite for documenting how "weather" impacts reef ecosystems.

And you would want a system you can deploy and leave unattended for long periods of time since every trip out costs money and depending on what you are specifically researching, simply returning to the site could interfere with/disrupt the experiment.


> I really admire this work.

Thank you for your kind words!

> I don't comprehend how you made no mistakes on the journey after drafting the PCBs and writing drivers.

As jacoblambda said below, it's about making mistakes recoverable, and failure modes graceful. Scott put a hell of a lot of effort into planning and design, making sure that in the end we could get what we needed out of the hardware. During the process, there's a constant stream of problems that are fixed or worked around in pursuit of the end goal. One nice thing about this kind of project is that the scope is fixed and known, and scope creep won't happen. We can build safeguards for attaining the known scope instead of predicting future scope creep. Originally we wanted a turbidity sensor in there as well, but we just didn't have the time to get it working.

On one of the boards we reflowed, something happened with the solder paste (it was probably a bit old) and it started exploding (in a small way), sending components flying across the board. Scott had to jump in with tweezers and put stuff back in real time on the hot plate. Unfortunately I had the macro shot set up for the other side of the board so I didn't get to capture it.

Shooting the top down macro shots, it would take literally 6+ minutes for the rig to stop wobbling visually in the footage, so every top down shot is 6+ minutes of swaying footage before the action. It took so long we only did a couple of these shots.

We thought it would be very dark underwater, and brought a camera that could do 12800 ISO base, but it turned out it was actually surprisingly bright on the day, and we ended up way down at ISO 640 for the shoot.

We couldn't get the camera to focus underwater when zoomed to a focal length of 70mm, so there's a whole setup of footage that just didn't work. But since everything had already been shot at 35mm, it wasn't a big deal in the scheme of things.

The camera rig was incredibly buoyant underwater, I had to steal weights from our other divers just so I could get neutrally buoyant.

> Why not make "the perfect espresso machine" or "the perfect bike" or whatever? Why not, with your skills, make a thing and fucking sell it?

I think I can sum up my personal drive as "I like making things", essentially no matter what the thing is, be it hardware or software. I saw that "inventor" character in the movies, and that's who I wanted to be growing up. In most engineering jobs you'd probably be happy to spend 20% of your time making stuff, with the other 80% spent in paperwork. We're trying our best to flip those percentages for ourselves.

We've worked on several projects with hardware together over the years. From multicopters, to camera gimbals, to an entire space ship set for a short film. Time and time again, we found ourselves building user interfaces from scratch, and that was amongst the hardest parts. So we figured maybe there's a business here, and we started Electric UI - tooling for engineers to build user interfaces for their products, both during development and for production. Selling software must be easier than hardware, right?

It turns out it's unfathomably difficult to market products effectively to the right audience, especially if you don't have millions of dollars to blow on targeted advertising. We're constantly seeing the success stories of good marketing and we intrinsically don't see the thousands of shots that didn't work out. There's just a lot of luck involved, hitting the right people at the right times, even assuming you have perfect product-market fit.

We figured our best chance is to go back to our roots and just build stuff that we find interesting, and document the process. Hopefully people will come along for the ride, and other engineers out there will see our user interface software and the next time they're building something that could use a UI, they'll think of us.


Might I suggest using an ffmpeg frontend like HandBrake? It has a bunch of presets, the Apple ones will surely work for this.


Interesting write-up with some very nice pictures!

The videos worked for me on my iPhone. Always nice to see a bit of solder reflow :-)


Turns out it might just be my phone, what a weird bug.


My shell history has this in it, but it might have been for Android Firefox ~ `-c:v vp8 -b:v 2000k -pix_fmt yuv420p`

