
Not necessarily a startup. You can see some laptops with DEF CON stickers; it used to be very common for infosec auditors to have work laptops full of stickers not that long ago. However, it is bad practice for red team audits, and some large companies don't like this kind of shenanigans for internal audits, so that may explain why it is less frequent nowadays.

As a native French speaker, I have the same feeling when reading code written with French keywords, except that since I learned Boolean logic and arithmetic in French, it makes more sense to me to read them in French. As others have pointed out, it seems to only be a matter of how you learned to read and write code.

For comparison, in mathematics I learned to read all the symbols in French, and only learned their English equivalents much later, so it feels uneasy for me when I read their English versions. So it is clearly a matter of habit that took root when you learned to read.


I may be wrong, but it gives me PowerShell vibes. Since it seems to be targeted at macOS, I would assume it "solves" the lack of a PowerShell equivalent on Mac?


On Mac and Linux you can use PowerShell Core:

https://learn.microsoft.com/en-us/powershell/scripting/insta...


PowerShell 7+ (which a long while ago was named Core) is the version you should use on ALL platforms, including Windows. It's just the most recent version. "Core" gives off a vibe that it's some limited thingy. It's not; it's full PS.


Oh goody


Murex works on a multitude of platforms, including Linux and Windows, but also a variety of UNIXes.

It was actually first created before PowerShell was available outside of Windows. But some of the design philosophies are fundamentally different from PowerShell's too. For example, Murex is designed to work well with POSIX (bar the shell syntax itself), whereas PowerShell reimplements most of the stack, including coreutils.


Optimizing code for an MMU-less processor versus an MMU-equipped, or even NUMA-capable, processor is vastly different.

The fact that the author achieves only a 3 to 6 times speedup on a processor running at a clock frequency 857 times higher should have led to the conclusion that old optimization tricks are awfully slow on modern architectures.

To be fair, execution pipeline optimization still works the same, but not taking into account the different layers of cache, the way memory management works, and even how and when actual RAM is queried will only lead to suboptimal code.
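
To make the cache point concrete, here is a minimal Go sketch (my own toy example, not from the article, with a made-up matrix size): the same sum over a flat row-major matrix, walked in two different orders. Only the access pattern differs, yet the strided walk is typically several times slower on modern hardware because nearly every access misses the cache.

    package main

    import (
        "fmt"
        "time"
    )

    const n = 2048 // hypothetical matrix size, chosen only for illustration (~32MB of data)

    func main() {
        m := make([]int64, n*n) // flat, row-major matrix
        for i := range m {
            m[i] = int64(i)
        }

        // Cache-friendly: walk memory sequentially, one cache line after another.
        start := time.Now()
        var rowSum int64
        for r := 0; r < n; r++ {
            for c := 0; c < n; c++ {
                rowSum += m[r*n+c]
            }
        }
        fmt.Println("row-major:   ", time.Since(start), rowSum)

        // Cache-hostile: a stride of n*8 bytes touches a new cache line on every access.
        start = time.Now()
        var colSum int64
        for c := 0; c < n; c++ {
            for r := 0; r < n; r++ {
                colSum += m[r*n+c]
            }
        }
        fmt.Println("column-major:", time.Since(start), colSum)
    }

Same instruction count, same result; the only difference is whether the memory hierarchy is working with you or against you.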


Are we intentionally ignoring that ABAP is bytecode-interpreted?


Seems like you've got it backwards, and that makes it so much worse. ^_^

I ported from ABAP to Z80. Modern enterprise SAP system → 1976 processor. The Z80 version is almost as fast as the "enterprise-grade" ABAP original. On my 7MHz ZX Spectrum clone, it's neck-and-neck. On the Agon Light 2, it'll probably win. Think about that: 45-year-old hardware competing with modern SAP infrastructure on computational tasks. This isn't "old tricks don't work on new hardware." This is "new software is so bloated that Paleolithic hardware can keep up." (but even this is nonsense - ABAP is not designed for this task =)

The story has no moral, it is just for fun.


That Z80 code is not the equivalent of the modern code though, is it?

For example, your modern code mentions a 64KB lookup table... there's no way you can port that to a Z80, which has 64KB of address space total, shared between input, output, cache and code.

So what do those timings mean? Are those just made-up numbers for the sake of the narrative?


Input and output are in a separate address space on the Z80. It's on the 6502 where they share space with code and data.


What do you mean?

Memory and I/O ports are in separate address spaces on the Z80, but for the use case described in the post ("dot product for 1536-bit vectors") the I/O port space does not matter; it's all memory, and there is just a single address space there.

(Granted, some Z80-based systems had a funky paging setup, but the author makes no mention of those; they just say generic Z80, and that means 64KB total for code, input data, cache and output data.)


Oh, that makes a lot more sense! I was puzzled as to how the new hardware could be so slow, but an inefficient interpreter easily explains it. I've seen over 1000× slowdowns from assembly to bash, so it sounds like ABAP is close to bash.


But if you ported the ABAP to a static language, it would be significantly faster than both.


Yes. But I am not very much interested in that.

However, the ZVDB-GO port is... about 1 million (?) times faster than the ABAP version.
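
For context, a 1536-bit dot product is a very small kernel once the vectors are bit-packed. This is not the actual ZVDB-GO source, just a hedged sketch of what such a kernel typically looks like in Go (the function name and the ±1 encoding are my assumptions): per 64-bit word it is just an XOR and a popcount.

    package main

    import (
        "fmt"
        "math/bits"
    )

    const words = 1536 / 64 // 24 uint64 words per 1536-bit vector

    // dot scores two bit-packed ±1 vectors: matching bits contribute +1,
    // differing bits contribute -1, so the score is 1536 - 2*(differing bits).
    func dot(a, b *[words]uint64) int {
        diff := 0
        for i := 0; i < words; i++ {
            diff += bits.OnesCount64(a[i] ^ b[i]) // count differing bits in this word
        }
        return 1536 - 2*diff
    }

    func main() {
        var a, b [words]uint64
        a[0] = 0xFFFF
        b[0] = 0xFF0F
        fmt.Println(dot(&a, &b)) // 4 differing bits -> 1536 - 8 = 1528
    }

A compiled loop like this is a handful of instructions per word, which is why the gap to a bytecode interpreter ends up so large.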


I had him as a sociology teacher in the early 2000s, specifically on this subject (controversies).

It was apparently his first time at this school, and he was not prepared for the controversy that arose from his (controversial) stance on the scientific method. He ended up calling us names and privileged kids (that part was 97% true, but not entirely true...).

It's only after his death that many articles praising him appeared. I guess people capitalize on his notoriety rather than on whatever bullshit he wrote...


> (controversial) stance on the scientific method.

That stance is well-covered here.[1]

Some of the problems in science come from experiments too close to the noise threshold. This is most of social science and psychology. The hard-line position is Rutherford's "If your experiment needs statistics, you ought to have done a better experiment." Related to this is Hoyle's "Science is prediction, not explanation." For phenomena that led to useful engineering, repeatability and predictability are very good. Otherwise the products won't work.

People tend to forget this, because controversial research topics are often close to the noise threshold. If something turns out to be real, and you can get it to happen further from the threshold, it becomes routine engineering. It's then no longer controversial. Your result gets a few lines in the Handbook of Chemistry and Physics. This sort of science makes the world go.

Philip K. Dick's “Reality is that which, when you stop believing in it, doesn’t go away” remains useful.

Taking this hard-line position is useful, because humans are evolved and wired to see patterns near the noise threshold. This is a useful survival strategy for detecting predators in the brush, even with a high false-alarm rate. Once past survival level, it's less useful.

[1] https://www.nytimes.com/2018/10/25/magazine/bruno-latour-pos...


The English version is weird. It was planned from the beginning that they would bury a bronze owl.

The bronze owl was to be exchanged for the precious-metal one. In the French news, they specifically mentioned that the bronze one was found.

If you think about it, it makes more sense. The co-founder was given the rights to the original treasure hunt because he is the owner of the valuable owl. He is the one who financed the whole thing.


The French Wikipedia page doesn't talk about it, but the quoted text from the English Wikipedia page involving an iron bird (and not a bronze owl) is accurate. Here's the official report of finding it: https://cdn.shopify.com/s/files/1/0511/4586/7430/files/pv.co... (linked from https://editions-chouettedor.com/pages/documents-officiels).


Oh, thanks for the links.

What it says is that the statue should have been made of bronze, but is instead made of "ferrous metal" and must have been replaced around September 2005.

Anyway, the idea was that the golden one was not buried, only a "pass-out" one.


Or on official tables according to the International Table Soccer Federation.

From what I've seen during my travels, there are a lot of variations of foosball tables. Each country seems to have its own variation.


Lots of different types. I once even saw an XXL-sized one in a bar (something like https://www.abbeyroadentertainment.com/rentals-and-services/...).


Official tables can have either style.

There are quite a few tables that are considered tournament grade by the various table soccer associations, including ITSF (I think at least six manufacturers at this point?). In the US, Tornado is the most common tournament table by far and has a 3-man goalie bar, but many European tables like Bonzini or Garlando have the 1-man and raised corners.


> I wonder if the French are considering digging underneath all those obstacles?

The problem is the Seine phreatic zone, which usually starts between 15 and 25m below the surface. Some GRS galleries are actually completely flooded, and others have a water level that varies with the seasons.

In order to run metro lines underneath the Seine, they had to freeze the ground first. It is not an easy task, so there must be a real advantage to going under the Seine.


Yes, they explicitly classify Wikipedia as a tertiary source [1].

Wikipedia is good for finding secondary sources, and then primary sources by following the links.

[1] https://en.wikipedia.org/wiki/Wikipedia:Primary_Secondary_an...


I don't get how expressing these numbers in time units is useful.

I've been a developer for embedded systems in the telecom industry for nearly two decades now, and I had never met anyone using anything other than "cycles" or "symbols" until today... Except obviously for the mean RTT US<->EU.


> I've been a developer for embedded systems in the telecom industry for nearly two decades now

On big computers, cycles are squishy (HT, multicore, variable clock frequency, so many clock domains) and not what we're dealing with.

If we're making an architectural choice between local storage and the network, we need to be able to make an apples to apples comparison.

I think it's great this resource is out there, because the tradeoffs have changed. "RAM is the new disk", etc.


Then why not just use qualifiers, from slowest to fastest? You might not know this, but you can develop bare-metal solutions for HPC that are used in several industries like telecommunications. Calculations based on cycles are totally accurate whatever the number of cores...


> Then why not just use qualifiers, from slowest to fastest?

Because whether something is 5x slower or 5000x slower matters. Is it better to wait for 10 IOs, do 10,000 random memory accesses, or do a network transaction? We can figure out the cost of the memory/memory bandwidth, etc, but we also need to consider latency.

I've done plenty of work counting cycles; but it's a lot harder and less meaningful now. Too many of the things here happen in different clock domains. While it was a weekly way to look at problems for me a couple of decades ago, now I employ it for far less: perhaps once a year.

> Calculations based on cycles are totally accurate whatever the number of cores...

No, they're not, because cores contend for resources. We contend for resources within a core (hyperthreading, L1 cache). We contend for resources within the package (L2+ cache lines and thermal management). And we contend for memory buses, I/O, and networks. These things can sometimes happen in parallel with other work, and sometimes we have to block for them, and often this is nondeterministic. In turn, the cycle counts for doing anything within the larger system are really nondeterministic.

Counting cycles works great to determine execution time on a small embedded system or a 1980s-1990s computer, or for a trivial single-threaded loop running by itself on a 2020s computer. But most of the time now we need to account for how much of some other scarce resource we're using (cache, memory bandwidth, network bandwidth, a lock, power dissipated in the package, etc), and think about how various kinds of latencies measured in different clock domains compose.
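
As a rough illustration of that apples-to-apples point, here is a tiny Go sketch using ballpark latency figures (my own illustrative assumptions, not measurements from this thread); expressing everything in absolute time is what lets the three options be compared at all:

    package main

    import (
        "fmt"
        "time"
    )

    // Ballpark figures for illustration only; real numbers vary a lot by system.
    const (
        dramAccess = 100 * time.Nanosecond  // one random DRAM access
        ssdRead    = 100 * time.Microsecond // one small random SSD read
        dcRTT      = 500 * time.Microsecond // one intra-datacenter round trip
    )

    func main() {
        // Three ways of getting at the same data, composed in one common unit.
        fmt.Println("10,000 random DRAM accesses:", 10000*dramAccess)
        fmt.Println("10 SSD reads:               ", 10*ssdRead)
        fmt.Println("1 datacenter round trip:    ", 1*dcRTT)
    }

With cycle counts alone, those three numbers would live in different clock domains and wouldn't compose.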


Not to take away from your point, but I'd argue that counting cycles is usually misleading even for small embedded systems now. It's very difficult to build a system where cycles aren't equally squishy these days.


Depends on how small we're looking at.

Things like Cortex-M: stuff's deterministic. Sure, we might have caches on the high end (M55/M85), and contention for resources with DMA, but we can reason about them pretty well.

A few years ago I was generating NTSC overlay video waveforms with SPI from a Cortex-M4 while controlling flight dynamics and radio communications on the same processor. RMS jitter on the important tasks was ~20 nanoseconds (3-4 cycles), about a factor of 100 better than the requirement.

But I guess you're right: you could also consider something like a dual-core Cortex-A57 quite small, where all the above complaints are true.


Because it's something very different. I was expecting standalone numbers that would hint to the user something is wonky if they showed up in unexpected places - numbers like 255 or 2147483647.


It gives you a rough understanding of how many you can do in a second.
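
For instance, assuming a made-up 100ns per operation, the conversion is a one-liner:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        latency := 100 * time.Nanosecond           // hypothetical per-operation latency
        ops := int64(time.Second) / int64(latency) // operations per second
        fmt.Println(ops, "ops/sec")                // 10000000 ops/sec
    }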

