Hacker News | bigjump's comments

Most of the Vue devs I know also use TypeScript, for the same reasons.


Pretty sure you can do this with hooks / flows in Directus.


The flight sim Easter egg baked into Excel 97 got me hooked!

https://youtu.be/-gYb5GUs0dM?si=vzOGscnTURqhdyDe

Warning - this video shows Clippy, the very irritating paperclip!


You're cleared for takeoff: https://rezmason.github.io/excel_97_egg



Thanks. One thing I don't quite understand is how new waypoints get added as part of the conversion of a flight plan from ICAO 4444 to ADEXP format. Does it do some kind of interpolation?

Also, it appears the error was caused by the algorithm selecting two waypoints with the same identifier as the entry and exit points into UK airspace for this particular flight plan. But it also says non-unique waypoints should be at least 4000nm apart from each other so they can be disambiguated. Since UK airspace isn't that big, shouldn't the algorithm have chosen entry and exit waypoints closer to the borders?

Edit: actually, it looks like UK airspace extends a few thousand km to the west of the coastline, which makes it more plausible that it covers duplicate waypoints.


I'm just speculating as to what points are actually added but typically the flight plan includes 'airways' in addition to waypoints. Here's an example of a plan for London Gatwick (EGKK) to Edinburgh (EGPH):

```
EGKK/08R LAM1Z LAM L10 BPK UN601 INPIP INPI1E EGPH/24
```

And to take a section out:

```
BPK UN601 INPIP
```

In this case, BPK and INPIP are two waypoints (one in the south of England, one in the north). UN601 is an airway that connects these two waypoints. The airway represents a whole series of other waypoints between BPK and INPIP that you don't need to specify manually in the flight plan. I suspect the additional waypoints are the result of splitting the airway into its underlying waypoints - but as I say, it's speculation :-)
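To make the idea concrete, here's a minimal sketch in Python of what that expansion step might look like. The airway table and the intermediate fix names below are invented for illustration (real systems pull them from a navigation database), and it assumes the route follows the airway in its listed direction.

```python
# Hypothetical airway table: ordered waypoints along each airway.
# The intermediate fix names here are made up for illustration.
AIRWAYS = {
    "UN601": ["BPK", "BKY", "OLNEY", "POL", "INPIP"],
}

def expand(route_tokens):
    """Replace WAYPOINT AIRWAY WAYPOINT triples with the explicit fix list."""
    out = []
    for i, tok in enumerate(route_tokens):
        if tok in AIRWAYS and 0 < i < len(route_tokens) - 1:
            leg = AIRWAYS[tok]
            entry, exit_ = route_tokens[i - 1], route_tokens[i + 1]
            # Splice in everything between the entry and exit fixes.
            out.extend(leg[leg.index(entry) + 1 : leg.index(exit_)])
        else:
            out.append(tok)
    return out

print(expand(["BPK", "UN601", "INPIP"]))
# The airway token is gone; the intermediate fixes are now explicit.
```

So `BPK UN601 INPIP` becomes the full list of fixes the downstream system actually works with, which is where "new" waypoints could appear.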


"An FPRSA sub-system has existed in NATS for many years and in 2018 the previous FPRSA subsystem was replaced with new hardware and software manufactured by Frequentis AG, one of the leading global ATC System providers."

"At this point with both the primary and backup FPRSA-R sub-systems having failed safely the FPRSA-R was no longer able to automatically process flight plans. It required restoration to normal service through manual intervention."

How can a primary AND its backup system fail safely??? Who specified this?

"The actions already undertaken or in progress are as follows: 3) A permanent software change by the manufacturer within the FPRSA-R sub-system which will prevent the critical exception from recurring for any flight plan that triggers the conditions that led to the incident."

Means: now they catch the (Java) exception. Great.


> How can a primary AND its backup system fail safely??? Who specified this?

All safety-critical systems are specified to halt, rather than perform undefined behaviour, if they encounter something that cannot be processed. An unsafe failure would be entering undefined behaviour. What would you have specified differently that would be safer?

A backup is primarily there for hardware failures or maintenance. If it behaves differently from the primary, then something is wrong. Can you explain how and why you would expect a backup system running identical software to behave differently?
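As a toy illustration (all names invented; this is not the actual FPRSA-R logic): two instances of the same deterministic validation code reject the same malformed input in exactly the same way, so the backup only protects against hardware faults, not software ones.

```python
# Toy illustration, not the real FPRSA-R logic: identical software fails
# identically on the same input, so a software-level fault takes out
# both the primary and the backup.

class CriticalException(Exception):
    pass

def process_plan(plan):
    """Deterministic check: halt (raise) rather than emit a guess."""
    if plan["entry"] == plan["exit"]:
        # Fail safe: refuse to output a possibly-wrong route.
        raise CriticalException("entry and exit resolve to the same fix")
    return (plan["entry"], plan["exit"])

bad_plan = {"entry": "DUPE1", "exit": "DUPE1"}  # invented identifier

for system in ("primary", "backup"):
    try:
        process_plan(bad_plan)
    except CriticalException as exc:
        print(f"{system}: halted safely ({exc})")
```

Both instances raise on the same input, every time; that determinism is exactly why a software backup running the same code adds no protection here.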


I worked on safety-critical ATC projects in engineering and management positions (systems, quality and compliance engineering) for a decade. ATC systems are supposed not to fail, even under adverse conditions. Where high availability is required for safety reasons, a redundant architecture is one of the options, and apparently the "backup system" was conceived for this purpose.

According to the report (page 17), the responsible subsystem suffered from a "critical exception [..] that triggers the conditions that led to the incident", which caused both the primary and the backup system to fail, and which has now apparently been fixed. So obviously the system was not supposed to fail on receiving wrong or suspicious flight plan data, and it was apparently pure luck that no such data arrived for five years.

To claim that the subsystem (consisting of the primary and backup system) "failed safely" indicates significant gaps in safety management (faulty safety analyses, faulty specifications, or faulty configuration or software). The report suggests that critical omissions occurred at several levels.


For me it's important to consider the 'ATC system' as a whole. The system as a whole did not fail - no planes crashed, flights still flew - but it was in a degraded state with lower than usual throughput. One component of the system did fail (the FPRSA subsystem), and it seems reasonable to me that lower layers of the system lean towards unavailability rather than trying to continue operating in unforeseen circumstances.

The purpose of a backup system is not to prevent failure - it's to improve resiliency of the system as a whole across a set of foreseen and unforeseen faults. Backup systems failing to handle any specific fault is an expected and predicted behavior. Thankfully in this case there was a backup system that prevented a complete shutdown (and, thankfully, any accident) - the manual processing of flight plans.


Missing the availability requirements is a failure.

Safety is not only about human lives, but also about health and property (and, e.g., critical financial losses or reputational damage). The present incident has obviously caused considerable damage. We can only hope that the rest of the system does not suffer from similar omissions, and that it is not pure coincidence that even worse events have not occurred.


Yeah of course, but success/failure is also not binary. There are degrees of failure, including low-consequence availability issues, high-consequence availability issues, loss of operational safety, 'never events' (e.g. significant loss of life). In this case the system suffered the second of those options. It seems reasonable that design choices may prioritise that type of failure over the later ones in the list.

The first part of this argument is semantics - how we define failure. The second part is IMHO more important - what decisions are taken about the behaviour of subsystems and how they influence overall system degradation. In this case the overall design prevented any loss of operational safety which, to me, is a success.


One can talk things up or fix them. As the report (and some comments) suggests, the former is given high priority.


The simple pranks are the best.

We just used to add an alias for 'ls' which introduced a subtle but ever-increasing delay each time it was run.
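For illustration, here's roughly how that could work, sketched in Python rather than the original shell alias (the counter path, script name, and timing are all invented). You'd point the alias at the script, e.g. `alias ls='python3 /tmp/slowls.py'`:

```python
# Prank sketch: each run bumps a counter file and sleeps a little
# longer before handing off to the real ls. Paths/timings are made up.
import os
import sys
import time

COUNTER = "/tmp/.ls_prank_count"  # hypothetical state file

def bump(path=COUNTER):
    """Increment and persist the run counter; start at 1 if missing."""
    try:
        n = int(open(path).read())
    except (OSError, ValueError):
        n = 0
    n += 1
    with open(path, "w") as f:
        f.write(str(n))
    return n

def main():
    n = bump()
    time.sleep(0.05 * n)  # subtle at first, maddening after a while
    os.execvp("ls", ["ls"] + sys.argv[1:])  # hand off to the real ls
```

The linear 50 ms increment keeps it imperceptible for the first dozen runs, which is what makes the prank slow-burn.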


Debian also comes with the package 'sl' that can be amusing. At first at least.


I seldom mistype ls as sl, but it always makes me smile when it happens. It also doesn't bother me, because you can quickly circumvent it by sending sl to the background with ctrl+z and dealing with it later...

But now that I think about it, I wonder if you can prevent the job manager from backgrounding a task - that would be quite the addition to sl heheh.


You can handle the SIGTSTP signal to stop that.

However, that still leaves SIGQUIT on the table, which can be sent with ctrl-4 and ctrl-\. For that you'd need to disable the binding, e.g. with stty, and then handle SIGQUIT the same way, I suppose. Or just put the terminal in raw mode.
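A minimal sketch in Python (Unix only) of catching both signals; it delivers the signals to itself just to show the handlers fire instead of the default stop/core-dump behaviour.

```python
# Unix-only sketch: catch SIGTSTP so ctrl-z can't background the
# process, and SIGQUIT so ctrl-\ (or ctrl-4) can't kill it.
import os
import signal

caught = []

def shrug(signum, frame):
    # Instead of stopping or dumping core, just record the attempt.
    caught.append(signal.Signals(signum).name)

signal.signal(signal.SIGTSTP, shrug)  # ctrl-z
signal.signal(signal.SIGQUIT, shrug)  # ctrl-\ and ctrl-4

# Deliver both signals to ourselves to show they are now harmless.
os.kill(os.getpid(), signal.SIGTSTP)
os.kill(os.getpid(), signal.SIGQUIT)
print(caught)
```

After both `os.kill` calls the process is still running and `caught` holds the names of both signals, rather than the process having been stopped or aborted.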


Problem solved - a self-censoring font.

https://vole.wtf/scunthorpe-sans/


https://vole.wtf/ is a gem of a website.


But less hard than dragging a CD/DVD to the trash can to eject it.


Even less hard than writing machine code in binary.


This is what https://www.get-protocol.io/ are doing.

“Tickets become tradable digital collectibles (NFTs), with a variety of awesome possibilities for fans & event organizers.”


They have just released an update which appears to resolve this issue.


“As a workaround, you can temporarily disable web protection …”

