I used to do this at the beginning of a game as a kid, because the earthquakes were pretty harmless without buildings to knock down. You'd just load up on cash, then start building!
Not the poster you replied to, but I've been thinking of it lately in a different way. Functional tests show that a system works, but if a functional test fails, the unit test might show where/why.
Yes, you'll usually get a stack trace when a test fails, but you might still spend a lot of time tracing exactly where the logical problem actually was. If you have unit tests as well, you can see that unit X failed, and unit X is part of function A, so you can fix the problem more quickly, at least in some cases.
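A toy example of what I mean (pytest-style, hypothetical names): the functional test exercises the whole path, while the unit tests cover the pieces it's built from, so a unit failure points you at the right piece.

  # Hypothetical example: function A (checkout_total) is built from
  # units X (parse_price) and Y (apply_discount).

  def parse_price(text):            # unit X
      return float(text.strip("$"))

  def apply_discount(price, pct):   # unit Y
      return price * (1 - pct / 100)

  def checkout_total(text, pct):    # function A
      return round(apply_discount(parse_price(text), pct), 2)

  # Functional test: if this fails, you only know the end-to-end result is wrong.
  def test_checkout_total():
      assert checkout_total("$10.00", 20) == 8.00

  # Unit tests: if test_parse_price fails while test_apply_discount passes,
  # you know which part of function A to look at first.
  def test_parse_price():
      assert parse_price("$10.00") == 10.0

  def test_apply_discount():
      assert apply_discount(10.0, 20) == 8.0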
It could be that this sort of thinking contributes to the massive number of JS frameworks out there. You think React is bloated, so you make a "React in 70 lines"... realize there are some things you missed... fast forward two years and you've got another framework.
We used to do a "feely score" of 1-5. It sort of worked to gauge how the team felt (positive or negatively), but it eventually devolved into everyone arguing over whether 3 or 4 was the baseline for an unremarkable day.
We did something like that at one place, but with a longer survey and on a friggin' 1-10 scale, which tells me whoever set it up either didn't know how to measure things or didn't care enough to do it right.
Predictably, 7 ended up equalling "pretty shitty".
[EDIT] In case anyone's wondering what's wrong with that: it's the same effect that can make a top-50% score on IMDb still very bad. Scale compression, plus no common understanding of what the scores mean; with that many options, people will use different standards for the numbers even if you try to label them. That particular case probably needed no more than four options, maybe even just three.
I live in Albuquerque, and this sort of thing varies greatly around here. There are certainly places that charge $35-50 for a haircut and require appointments, but there are also barber shops and chains where you can walk in and get a haircut right away for $10-15.
I have encountered situations where irregular rounding was solvable but annoyingly problematic to detect and correct. The LANL Earthquake dataset on Kaggle had a column of samples and a column of (incorrectly incrementing) sample times that had been rounded. While creating a corrected column, I noticed quite a lot of irregularities in Python's rounding (or the underlying mechanism).
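For anyone curious, these are the kinds of surprises I mean; this is a generic illustration of Python's round() behavior and float accumulation, not the exact issue in the LANL data:

  # round() uses round-half-to-even ("banker's rounding"), not round-half-up.
  print(round(0.5), round(1.5), round(2.5))        # 0 2 2

  # 2.675 is actually stored as 2.67499999..., so two-place rounding gives 2.67.
  print(round(2.675, 2))                           # 2.67

  # Accumulating float increments drifts from the exact value.
  from decimal import Decimal
  print(sum(0.1 for _ in range(10)))               # 0.9999999999999999
  print(sum(Decimal("0.1") for _ in range(10)))    # 1.0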
I also consider rounding and binary formats that are simultaneously deterministic, fast, and portable to be important for decentralized delegation of computation, say a TrueBit-style platform:
It's because Python is slow everywhere, so it's hard to find bottlenecks like this. This method is easily 100x slower than it needs to be. If everything else were well written, it would stand out; since lots of Python is written like this, nothing does.
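As a generic illustration (not the actual method being discussed), the gap between a per-element interpreted loop and the equivalent vectorized call is often in that 100x ballpark:

  import time
  import numpy as np

  data = np.random.rand(2_000_000)

  t0 = time.perf_counter()
  squared_loop = [x * x for x in data]   # per-element work in the interpreter
  t1 = time.perf_counter()
  squared_vec = data * data              # one vectorized C loop
  t2 = time.perf_counter()

  print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.4f}s")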
Back when I worked there (mid-to-late '90s) the Chick-n-Strips were actually marinated in the same stuff as the Chargrilled Chicken Sandwich. Neither item is on the menu anymore, and both were vastly superior to their offerings today, IMHO.
It doesn't apply until the 1980s, but for luggage, as for many, many other things, it's worth remembering that many technological (and thus expensive) effort-savers and conveniences make no sense if the work and inconvenience can 'simply' be handled by servants, poor/cheap laborers, or slaves. Much of technology only makes sense when manpower is expensive.
In the Victorian era, if you could afford to travel in a way that required luggage, then you could definitely afford people to carry that luggage for you, and those people would have been cheaper than the high-quality bearings that wheeled luggage needs.
It's attributed to Agatha Christie's (1890-1976) autobiography that in her younger years she never thought she would be wealthy enough to own a car, nor that she'd ever be so poor that she wouldn't have servants; and yet eventually she found herself in both conditions at the same time.
I remember implementing this for a class years ago, and then the professor suggested doing the inverse to try to expand the image width. The idea was you would duplicate the lowest energy seam... but all that did was create a lot of repeats of the same seam.
I never did finish that weird idea, but I probably needed to try something like increasing the energy of the chosen seam (and its duplicate)... I may try that again, just because I'm curious what would happen.
> Figure 8: Seam insertion: finding and inserting the optimum seam on an enlarged image will most likely insert the same seam again and again as in (b). Inserting the seams in order of removal (c) achieves the desired 50% enlargement (d). Using two steps of seam insertions of 50% in (f) achieves better results than scaling (e). In (g), a close view of the seams inserted to expand figure 6 is shown.
That's actually done in the paper! You basically choose the lowest energy seam, duplicate it, then blacklist it and duplicate the next lowest energy seam (to prevent repeatedly duplicating the same seam). The results are quite good.
The approach from the original paper is to remove seams as if you were decreasing the size, which gives you a set of distinct seams that can then be added to the original image (with some mapping logic). This produces an output without the same seam being duplicated over and over.
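For anyone wanting to see what that mapping logic can look like, here's a rough numpy sketch of the two-phase idea: find k seams by simulated removal while tracking their positions in the original image's coordinates, then duplicate those pixels. It works on grayscale for brevity, the energy function and names are my own simplifications, and I just duplicate pixels instead of averaging with neighbors as the paper does.

  import numpy as np

  def energy(gray):
      # simple gradient-magnitude energy map
      gx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
      gy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
      return gx + gy

  def find_seam(gray):
      # dynamic programming over cumulative energy, then backtrack
      e = energy(gray)
      h, w = e.shape
      cost = e.astype(float)
      for i in range(1, h):
          left  = np.concatenate(([np.inf], cost[i - 1, :-1]))
          right = np.concatenate((cost[i - 1, 1:], [np.inf]))
          cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
      seam = np.empty(h, dtype=int)
      seam[-1] = int(np.argmin(cost[-1]))
      for i in range(h - 2, -1, -1):
          j = seam[i + 1]
          lo, hi = max(0, j - 1), min(w, j + 2)
          seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
      return seam

  def enlarge(gray, k):
      # Phase 1: remove k seams from a working copy, recording each seam's
      # position in the *original* image via an index map.
      h, w = gray.shape
      work = gray.astype(float)
      idx = np.tile(np.arange(w), (h, 1))
      rows = np.arange(h)
      marked = []                       # original columns of each removed seam
      for _ in range(k):
          seam = find_seam(work)
          marked.append(idx[rows, seam])
          keep = np.ones(work.shape, dtype=bool)
          keep[rows, seam] = False
          work = work[keep].reshape(h, -1)
          idx = idx[keep].reshape(h, -1)

      # Phase 2: duplicate the marked pixels in the original image.
      dup = np.zeros((h, w), dtype=int)
      for cols in marked:
          dup[rows, cols] += 1
      out = np.empty((h, w + k), dtype=gray.dtype)
      for r in range(h):
          out[r] = np.repeat(gray[r], 1 + dup[r])
      return out

enlarge(img, k) widens a 2D array by k columns; for color you'd run the seam search on a luminance or energy channel and apply the same duplication to each channel.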