
You nerd-sniped me a little. I'll admit I'm not 100% sure what a reduction is, but I've understood it to be a measurement of work for scheduling purposes.

A bit of googling indicates that you can actually use performance monitoring counters to generate an interrupt every n instructions. https://community.intel.com/t5/Software-Tuning-Performance/H...

Which is part of the solution. Presumably the remainder of the solution is then deciding what to schedule next in a way that matches Erlang.

Disclaimer: this is based on some googling suggesting that hardware support for the desired feature exists, not on any actual working code.


Oh, that's a really neat find. I'm not sure how 'instructions' map to 'reductions', though: if you stop when a reduction is completed, the system is in a fairly well-defined state, so you can switch context quickly, but if you stop mid-reduction you may have to save a lot more state. The neat thing about the BEAM is that it is effectively a perfect match for Erlang, and any tricks like that will almost certainly come with some kind of price tag attached. An interrupt is super expensive compared to a BEAM context switch to another thread of execution: with the BEAM you don't see the kernel at all. It is the perfect balance between cooperative and preemptive multitasking; you can pretend it is the second, but under the hood it is the first, and the end result is lightning-fast context switches.

But: great find, I wasn't aware of this at all and it is definitely an intriguing possibility.


The unreasonable effectiveness of profiling and digging deep strikes again.


The biggest tool in the performance toolbox is stubbornness. Without it all the mechanical sympathy in the world will go unexploited.

There’s about a factor of 3 improvement that can be made to most code after the profiler has given up. That probably means there are better profilers that could be written, but in 20 years of having them I’ve only seen 2 that tried. Sadly I think flame graphs made profiling more accessible to the unmotivated but didn’t actually improve overall results.


I think the biggest tool is higher expectations. Most programmers really haven't come to grips with the idea that computers are fast.

If you see a database query that takes 1 hour to run, and only touches a few GB of data, you should be thinking "Well, NVMe bandwidth is multiple gigabytes per second, why can't it run in 1 second or less?"

The idea that anyone would accept a request to a website taking longer than 30ms (the time it takes for a game to render its entire world, including both the CPU and GPU parts, at 60fps) is insane, and nobody should really accept it, but we commonly do.
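
To put numbers on the database example, a quick back-of-the-envelope sketch in Python (the 4GB of data and the 3 GB/s NVMe figure are just assumed round numbers):

    # Assumed round numbers, purely for illustration.
    data_gb = 4                      # data the query touches
    nvme_gbps = 3                    # sequential read bandwidth of a mid-range NVMe drive
    scan_seconds = data_gb / nvme_gbps
    print(f"dumb full scan: ~{scan_seconds:.1f}s")      # ~1.3s to read everything once
    print(f"1-hour query is {3600 / scan_seconds:.0f}x slower than that scan")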


Pedantic nit: At 60 fps the per-frame time is 16.66... ms, not 30 ms. Having said that, a lot of games run at 30 fps, or run different parts of their logic at different frequencies, or do other tricks that mean there isn't exactly one FPS rate that the thing is running at.


The CPU part happens on one frame, the GPU part happens on the next frame. If you want to talk about the total time for a game to render a frame, it needs to count two frames.


If latency of input->visible effect is what you're talking about, then yes, that's a great point!


Computers are fast. Why do you accept a frame of lag? The average game for a PC from the 1980s ran with less lag than that. Super Mario Bros had less than a frame between controller input and character movement on the screen. (Technically, it could be more than a frame, but only if there were enough objects in play that the processor couldn't handle all the physics updates in time and missed the v-blank interval.)


If Vsync is on (which was my assumption from my previous comment), then if your computer is fast enough you might be able to run the CPU and GPU work entirely in a single frame by using Reflex to delay when simulation starts and lower latency. Regardless, you still have a total time budget of 1/30th of a second to do all your combined CPU and GPU work to get to 60fps.


30 ms for a website is a tough bar to clear considering the speed of light (or rather, signals in copper / light in fiber).

https://en.wikipedia.org/wiki/Speed_of_light

Just as an example, round-trip delay from where I rent to the local backbone is about 14 ms alone, and the average to the webserver is 53 ms. Just as a simple echo reply. (I picked it because I'd hoped that was in Redmond or some nearby datacenter, but it looks more likely to be in a cheaper labor area.)

However, it's only the bloated ECMAScript (JavaScript) trash web of today that makes a website take longer than ~1 second to load on a modern PC. Plain old HTML, images on a reasonable diet, and some script elements only for interactive things can scream.

    mtr -bzw microsoft.com
    6. AS7922        be-36131-cs03.seattle.wa.ibone.comcast.net (2001:558:3:942::1)         0.0%    10   12.9  13.9  11.5  18.7   2.6
    7. AS7922        be-2311-pe11.seattle.wa.ibone.comcast.net (2001:558:3:3a::2)           0.0%    10   11.8  13.3  10.6  17.2   2.4
    8. AS7922        2001:559:0:80::101e                                                    0.0%    10   15.2  20.7  10.7  60.0  17.3
    9. AS8075        ae25-0.icr02.mwh01.ntwk.msn.net (2a01:111:2000:2:8000::b9a)            0.0%    10   41.1  23.7  14.8  41.9  10.4
    10. AS8075        be140.ibr03.mwh01.ntwk.msn.net (2603:1060:0:12::f18e)                  0.0%    10   53.1  53.1  50.2  57.4   2.1
    11. AS8075        2603:1060:0:10::f536                                                   0.0%    10   82.1  55.7  50.5  82.1   9.7
    12. AS8075        2603:1060:0:10::f3b1                                                   0.0%    10   54.4  96.6  50.4 147.4  32.5
    13. AS8075        2603:1060:0:10::f51a                                                   0.0%    10   49.7  55.3  49.7  78.4   8.3
    14. AS8075        2a01:111:201:f200::d9d                                                 0.0%    10   52.7  53.2  50.2  58.1   2.7
    15. AS8075        2a01:111:2000:6::4a51                                                  0.0%    10   49.4  51.6  49.4  54.1   1.7
    20. AS8075        2603:1030:b:3::152                                                     0.0%    10   50.7  53.4  49.2  60.7   4.2


In the cloud era this gets a bit better, but at my last job I removed a single service that was adding 30ms to response time and replaced it with a Consul lookup with a watch on it. It wasn’t even a big service. Same DC, very simple graph query with a very small response. You can burn through 30 ms without half trying.


It's also about cost. My gaming computer has 8 cores + 1 expensive GPU + 32GB RAM for me alone. We don't have that per customer.


This is again a problem understanding that computers are fast. A toaster can run an old 3D game like Quake at hundreds of FPS. A website primarily displaying text should be way faster. The reasons websites often aren’t have nothing to do with the user’s computer.


That's a dedicated toaster serving only one client. Websites usually aren't backed by bare metal per visitor.


Right. I’m replying to someone talking about their personal computer.


If your websites take less than 16ms to serve, you can serve 60 customers per second with that. So you sorta do have it per customer?


That’s per core, assuming the 16ms is CPU-bound activity (so 100 cores could serve 100 customers at a time). If it’s I/O, you can overlap a lot of customers, since a single core can easily keep track of thousands of in-flight requests.
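
A minimal sketch of that overlap, assuming the 16ms is time spent waiting on downstream I/O rather than burning CPU (asyncio.sleep stands in for the downstream call):

    import asyncio, time

    async def handle_request(i):
        await asyncio.sleep(0.016)   # stand-in for 16ms of downstream I/O
        return i

    async def main():
        start = time.perf_counter()
        # One core, one event loop, 1000 requests in flight at once.
        await asyncio.gather(*(handle_request(i) for i in range(1000)))
        print(f"1000 requests in {time.perf_counter() - start:.3f}s")

    asyncio.run(main())   # a few hundredths of a second, not ~16s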


With a latency of up to 984ms


I'm just saying that we don't have gaming-PC specs per customer to chug through that 7GB of data for every request in 30ms.


It's also about revenue.

Uber could run the complete global rider/driver flow from a single server.

It doesn't, in part because all of those individual trips earn $1 or more each, so it's perfectly acceptable to the business to be much more inefficient and use hundreds of servers for this task.

Similarly, fixing a small website taking 150ms to render the page only matters if the lost productivity costs more than the engineering time to fix it, and even then, only makes sense if that engineering time isn't more productively used to add features or reliability.


Practically, you have to parcel out points of contention to a larger and larger team to stop them from spending 30 hours a week just coordinating for changes to the servers. So the servers divide to follow Conway’s Law, or the company goes bankrupt (why not both?).

Microservices try to fix that. But then you need bin packing so microservices beget kubernetes.


Uber could not run the complete global rider/driver flow from a single server.


I'm saying you can keep track of all the riders and drivers, matchmake, start/progress/complete trips, with a single server, for the entire world.

Billing, serving assets like map tiles, etc. not included.

Some key things to understand:

* The scale of Uber is not that high. A big city surely has < 10,000 drivers simultaneously, probably less than 1,000.

* The driver and rider phones participate in the state keeping. They send updates every 4 seconds, but they only have to be online to start a trip. Both mobiles cache a trip log that gets uploaded when network is available.

* Since driver/rider send updates every 4 seconds, and since you don't need to be online to continue or end a trip, you don't even need an active spare for the server. A hot spare can rebuild the world state in 4 seconds. State for a rider and driver is just a few bytes each for id, position and status.

* Since you'll have the rider and driver trip logs from their phones, you don't necessarily have to log the ride server-side either. It's also OK to lose a little data on the server. You can use UDP.

Don't forget that in the olden times, all the taxis in a city like New York were dispatched by humans. All the police in the city were dispatched by humans. You can replace a building of dispatchers with a good server and mobile hardware working together.
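
A back-of-the-envelope sketch of the claim (the driver count and record size are my assumptions, not Uber's real numbers):

    # Assumed numbers, purely for illustration.
    active_drivers = 1_000_000            # generous: simultaneously online, worldwide
    bytes_per_record = 32                 # id + lat/lon + status + timestamp
    updates_per_sec = active_drivers / 4  # each phone reports every 4 seconds

    print(f"world state: ~{active_drivers * bytes_per_record / 2**20:.0f} MiB")  # ~31 MiB
    print(f"update rate: {updates_per_sec:,.0f} msgs/sec")                       # 250,000/sec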


You could envision a system that used one server per county, and that’s 3k servers. Combine rural counties to get that down to 1,000, and that’s probably fewer servers than Uber runs.

What the internet will tell me is that Uber has 4,500 distinct services, which is more services than there are counties in the US.


I believe the argument was that somebody competent could do it.


The reality is that, no, that is not possible. If a single core can render and return a web page in 16ms, what do you do when you have a million requests/sec?

The reality is most of those requests (now) get mixed in with a firehose of traffic, and could be served much faster than 16ms if that is all that was going on. But it’s never all that is going on.


Lowered expectations come in part from people giving up on theirs. Accepting versus pushing back.


I have high hopes and expectations; unfortunately my chain of command does not, and is often an immovable object.


This is a terrible time to tell someone to find a movable object in another part of the org or elsewhere. :/

I always liked Shaw’s “The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”


The unreasonable man also gets scolded and later fired for rocking the boat.

That adage only applies to people with resources and connections, not the average programmer who can't afford to lose a job.


> The biggest tool in the performance toolbox is stubbornness. Without it all the mechanical sympathy in the world will go unexploited.

The sympathy is also needed. Problems aren't found when people don't care, or consider the current performance acceptable.

> There’s about a factor of 3 improvement that can be made to most code after the profiler has given up. That probably means there are better profilers that could be written, but in 20 years of having them I’ve only seen 2 that tried.

It's hard for profilers to identify slowdowns that are due to the architecture. Making the function do less work to get its result feels different from determining that the function's result is unnecessary.


Architecture, cache eviction, memory bandwidth, thermal throttling.

All of which have gotten perhaps an order of magnitude worse in the time since I started on this theory.


And Amdahl’s Law. Perf charts will complain about how much CPU you’re burning in the parallel parts of code and ignore that the bottleneck is down in 8% of the code that can’t be made concurrent.
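
Worked out for that 8% figure (the core counts are picked arbitrarily):

    # Amdahl's law: speedup = 1 / (serial + parallel / n)
    serial = 0.08                    # the 8% that can't be made concurrent
    for n in (4, 16, 64, 1_000_000):
        speedup = 1 / (serial + (1 - serial) / n)
        print(f"{n:>9} cores: {speedup:5.2f}x")
    # Even with infinite cores the ceiling is 1/0.08 = 12.5x.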


I meant architecture of the codebase, to be clear. (I'm sure that the increasing complexity of hardware architecture makes it harder to figure out how to write optimal code, but it isn't really degrading the performance of naive attempts, is it?)


The problem Windows had during its time of fame is that the developers always had the fastest machines money could buy. That decreased the code-build-test cycle for them, but it also made it difficult for the developers to visualize how their code would run on normal hardware. Add the general lack of empathy inspired by their toxic corporate culture of “we are the best in the world” and it’s small wonder why Windows 95 and 98 ran more and more like dogshit on older hardware.

My first job out of college, I got handed the slowest machine they had. The app was already half done and was dogshit slow even with small data sets. I was embarrassed to think my name would be associated with it. The UI painted so slowly I could watch the individual lines paint on my screen.

In college, my friend and I had made homework into a game of seeing who could make their assignment run faster or use less memory. Such as calculating the Fibonacci of 100, or 1000. So I just started applying those skills and learning new ones.

For weeks I evaluated improvements to the code by saying “one Mississippi, two Mississippi”. Then how many syllables I got through. Then the stopwatch function on my watch. No profilers, no benchmarking tools, just code review.

And that’s how my first specialization became optimization.


Broadly agree.

I'm curious, what're the profilers you know of that tried to be better? I have a little homebrew game engine with an integrated profiler that I'm always looking for ideas to make more effective.


Clinic.js tried and lost steam. I have a recollection of a profiler called JProfiler that represented space and time as a graph, but also a recollection they went under. And there is a company selling a product of that name that has been around since that time, but doesn’t quite look how I recalled and so I don’t know if I was mistaken about their demise or I’ve swapped product names in my brain. It was 20 years ago which is a long time for mush to happen.

The common element between attempts is new visualizations. And like drawing a projection of an object in a mechanical engineering drawing, there is no one projection that contains the entire description of the problem. You need to present several and let brain synthesize the data missing in each individual projection into an accurate model.


what do you think about speedscope's sandwich view?


More of the same. JetBrains has an equivalent, though it seems to be broken at present. The sandwich keeps dragging you back to the flame graph. Call stack depth has value but width is harder for people to judge and it’s the wrong yardstick for many of the concerns I’ve mentioned in the rest of this thread.

The sandwich view hides invocation count, which is one of the biggest things you need to look at for that remaining 3x.

Also you need to think about budgets. Which is something game designers do and the rest of us ignore. Do I want 10% of overall processing time to be spent accessing reloadable config? Reporting stats? If the answer is no then we need to look at that, even if data retrieval is currently 40% of overall response time and we are trying to get from 2 seconds to 200 ms.

That means config and stats have a budget of 20ms each and you will never hit 200ms if someone doesn’t look at them. So you can pretend like they don’t exist until you get all the other tent poles chopped and then surprise pikachu face when you’ve already painted them into a corner with your other changes.
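
Spelled out, the budget arithmetic looks something like this; the config and stats numbers are the ones from the example above, the rest of the split is invented purely for illustration:

    target_ms = 200
    budget_ms = {
        "reloadable config": 0.10 * target_ms,   # 20 ms
        "stats reporting":   0.10 * target_ms,   # 20 ms
        "data retrieval":    120,                # made-up split of the remainder
        "everything else":   40,
    }
    assert sum(budget_ms.values()) <= target_ms
    # If config and stats are allowed to drift, the other 160 ms has to shrink
    # to compensate -- which is exactly the corner-painting described above.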

When we have a lot of shit that all needs to get done, you want to get to transparency, look at the pile and figure out how to do it all effectively. Combine errands and spread the stressful bits out over time. None of the tools and none of the literature supports this exercise, and in fact most of the literature is actively hostile to this exercise. Which is why you should read a certain level of reproval or even contempt in my writing about optimization. It’s very much intended.

Most advice on writing fast code has not materially changed for a time period where the number of calculations we do has increased by 5 orders of magnitude. In every other domain, we re-evaluate our solutions at each order of magnitude. We have marched past ignorant and into insane at this point. We are broken and we have been broken for twenty years.


I would like to know where I can read more in depth about profiling and performance analysis techniques.


Unreasonable effectiveness of looking.


Tcl was my first "general purpose" programming language (after TI-BASIC and MATLAB).

When I started that job I didn't know the difference between Tcl and TCP. I spent a couple of months studying Philip Greenspun's books. It also made me a better engineer because, unlike with PHP, I couldn't just Google how to do basic web server stuff, so I had to learn from first principles. That's how I ended up building my first asset minification pipeline, which served "$file.gz" with Content-Encoding: gzip if it existed.
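
For flavor, the same precompressed-asset trick as a minimal sketch in Python (obviously not the original Tcl, and the file names are made up):

    import os

    # If "<file>.gz" exists and the client accepts gzip, serve it instead.
    def pick_asset(path, accept_encoding):
        gz = path + ".gz"
        if "gzip" in accept_encoding and os.path.exists(gz):
            return gz, {"Content-Encoding": "gzip"}
        return path, {}

    # pick_asset("static/app.js", "gzip, deflate")
    #   -> ("static/app.js.gz", {"Content-Encoding": "gzip"})  if the .gz exists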

Nearly 20 years later and I'm basically an HTTP specialist (well, CDN/Ingress/mesh/proxy/web performance).

Tcl is still kind of neat in a hacky way (no other language I've run across regularly uses upvars so creatively).

Shout-out to ad_proc and aolserver.


AOLserver was the inspiration for the product I worked on during my first experience working at a dotcom startup.

We had something similar; however, it would plug into Apache and IIS, was more configurable across several UNIXes and RDBMSes, and eventually even got an IDE coded in VB for those folks not wanting to use the Emacs-based tooling.

Eventually we also became a victim of the dotcom bust; however, many of those ideas were the genesis of the OutSystems platform, then rebooted on top of .NET, and still going strong on the market nowadays.


The closeness of this syntax to Graphviz dot is very interesting.

Having dgsh output a Graphviz file in dry-run mode would be a neat feature.


Fundamentally it's a programming language so all the normal ways of running it apply:

Use their library in your application to evaluate policies.

Run it from the cli.

Embed it in some service like nginx.

The language itself is pretty focused on a Prolog-ish way of describing what constitutes an allow/deny decision.


You are right but it's confusing because there are two different approaches. I guess you could say both approaches improve performance by eliminating context switches and system calls.

1. Kernel bypass, combined with DMA and techniques like dedicating a CPU to packet processing, improves performance.

2. What I think of as "removing userspace from the data plane" improves performance for things like sendfile and kTLS (see the sketch below).

To your point, QUIC in the kernel seems to have neither advantage.
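
A minimal sketch of the second point, using Python's os.sendfile (plain TCP, no kTLS; assumes Linux and an already-connected socket). The kernel moves the file bytes to the socket without them ever being copied through a userspace buffer:

    import os, socket

    def send_file(conn: socket.socket, path: str) -> None:
        with open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            offset = 0
            while offset < size:
                # Kernel copies file -> socket directly; userspace never sees the data.
                sent = os.sendfile(conn.fileno(), f.fileno(), offset, size - offset)
                if sent == 0:
                    break
                offset += sent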


So... RDMA?


No, the first technique describes the basic way they already operate (DMA), but with userspace given direct access because it's a zero-copy buffer. This is handled by the OS.

RDMA is directly from bus-to-bus, bypassing all the software.


I had this for a reverse proxy I developed that did some transformations. At about two critical points, if there was an error, there was literally nothing we could do except barf a 500.


I've wondered about PWM flicker ever since I started trying to figure out why so many modern car headlights seem like they are strobing to me.

Initially I thought it might be related to the alternator.

I still don't know why I perceive these headlights as having an annoying flicker. I'd love it if some (informed) commenter could clear it up for me. Am I imagining it?


Car headlights seem to really cheap out on the PWM flicker. Even the 2-euro LEDs I buy at the discount store seem less flickery than the lights of some luxury cars. I thought it could be that people are buying the cheapest replacement bulbs they can get their hands on, but then I saw the same thing happening on a new BMW.

I also believe some people are just more affected by flicker than others. Some get headaches or migraines from working under PWM light, others don't even notice.

I'm not a mechanic, but I believe these car lights are capable of achieving some pretty high brightness (necessary for fog lights etc.) but are dimmed under normal conditions, leading to PWM effects you also see in cheap dimmable bulbs. It's especially noticeable for me on those "fancy" lights that try to avoid blinding other cars (and end up blinding bikes and pedestrians) and those animated blinker/brake light setups.


You don't do an HTTPS handshake by hand. That's what openssl s_client is for.

https://docs.openssl.org/1.0.2/man1/s_client/

Or maybe socat; I don't use it, but I'm pretty sure I've seen people use it.
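
If you'd rather script it than use s_client, the same "let the library do the handshake" idea looks roughly like this in Python (host and request are just examples):

    import socket, ssl

    # The ssl module does the TLS handshake; we just speak HTTP over the result.
    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(4096).decode(errors="replace"))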


You are proving my point that telnet is not useful in modern times.


Only if you're a web economy weenie and think HTTP[S] is the measure of utility of most TCP interactions. ;)

I can't remember the last time I used telnet to test whether a web server was live. I don't think web servers figure very prominently in the work I do, though not zero, for sure. However, I doubt I'm the only one in that boat.


Okay, let's say you want to ping a Minecraft server and see how many users are online. That's not possible with telnet either.

The concept of protocols made up of printable characters delimited by newlines is antiquated.


But that's not what I want to do. I just want to know if the listening socket is bound.
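
Which you can check without speaking the protocol at all; for example, a rough Python equivalent of "telnet host port and see if it connects" (host and port are placeholders):

    import socket

    # Just answers "is anything listening there?", nothing protocol-specific.
    def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # port_open("mc.example.com", 25565)  -> True if the listener is bound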


Alright, let's say you try doing that but it fails for some reason. Where did the failure occur? Were you able to open a TCP connection but you received garbage data that your minecraft-ping command didn't understand? Were you able to open a TCP connection but you received no reply to your ping? Did you fail to open a TCP connection (no SYNACK in response to your SYN)?

All of those problems have different root causes and therefore have different solutions. Telnet helps you figure out where in the stack the failure is occurring.


To me, traces (or maybe more specifically, spans) are essentially structured logs with a unique ID and a reference to a parent ID.

Very open to having someone explain why I'm wrong or why they should be handled separately.
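
Concretely, the mental model I have is something like this; the field names loosely follow OTel's conventions, but it's just an illustration, not a real SDK object:

    # A span, viewed as "a structured log line with IDs attached".
    span = {
        "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
        "span_id": "00f067aa0ba902b7",
        "parent_span_id": "53995c3f42cd8ad8",
        "name": "GET /checkout",
        "start_unix_nano": 1_700_000_000_000_000_000,
        "end_unix_nano":   1_700_000_000_042_000_000,
        "attributes": {"http.status_code": 200, "user.id": "abc123"},
    }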


Traces have a very specific data model, and corresponding limitations, which don't really accommodate log events/messages of arbitrary size. The access model for traces is also fundamentally different vs. that of logs.


There are practical limitations mostly with backend analysis tools. OTel does not define a limit on how large a span is. It’s quite common in LLM Observability to capture full prompts and LLM responses as attributes on spans, for example.


> There are practical limitations mostly with backend analysis tools

Not just end-of-line analysis tools, but also initiating SDKs, system agents, and intermediate middle-boxes -- really anything that needs to parse OTel.

Spec > SDK > Trace > Span limits: https://opentelemetry.io/docs/specs/otel/trace/sdk/#span-lim...

Spec > Common > Attribute limits: https://opentelemetry.io/docs/specs/otel/common/#attribute-l...

I know the spec says the default AttributeValueLengthLimit = infinity, but...

> It’s quite common in LLM Observability to capture full prompts and LLM responses as attributes on spans, for example.

...I'd love to learn about any OTel-compatible pipeline/system that supports attribute values of arbitrary size, because I've personally not seen anything that lets you get bigger than O(1MB).
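
To make the limit concrete, here's roughly what a pipeline that does enforce an attribute-value length limit ends up doing (illustrative only, not any particular SDK's API):

    MAX_ATTR_LEN = 1 * 1024 * 1024   # ~1MB, the kind of ceiling seen in practice

    def clamp_attributes(attributes: dict) -> dict:
        # Truncate oversized string values; leave everything else alone.
        return {
            k: (v[:MAX_ATTR_LEN] if isinstance(v, str) and len(v) > MAX_ATTR_LEN else v)
            for k, v in attributes.items()
        }

    # A 10MB LLM response stored as a span attribute quietly becomes 1MB here.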


Well yeah, there are practical limits imposed by the fact that these have to run on real systems. But in practice, you find that you're limited by your backend observability system because it was designed for a world of many events with narrow data, not fewer events with wider data (so-called "wide events").

OTel and the standard toolkit you get with it doesn't prevent you from doing wide events.


"Wide events" describe a structure/schema for incoming data on the "write path" to a system. That's fine. But that data always needs to be transformed, specialized, for use-case specific "read paths" offered by that same system, in order to be efficient. You can "do wide events" on ingest but you always need to transform them to specific (narrow? idk) events/metrics/summarizations/etc. for the read paths, that's the whole challenge of the space.


You…don’t? This is why tools like ClickHouse and Honeycomb are starting to grow: you just aggregate what you need at query time, and the cost to query is usually not too high. The tradeoff is that each event has a higher per-unit cost, but this is often the more favorable tradeoff.


> you just aggregate what you need at query time, and the cost to query is not usually too expensive

The entire challenge of observability systems is rooted in the fact that the volume of input data (wide events) on the write path, is astronomically larger than what can ever be directly evaluated by any user-facing system on the read path. Data transformation and specialization and etc. is the whole ball-game. If you can build something directly on top of raw wide-events, and it works for you, that's cool, but it means that you're operating at trivial and non-representative scale.


It does not.

