Hacker News | dwh452's comments

looking left or right is a rotation not a pivot.


I said "pivot point", as in the center of rotation. All rotations have a pivot point.

https://en.m.wikipedia.org/wiki/Pivot_point


I wonder why planes are designed this way?


I think it is because the lever you use to control the plane does not go up or down but forward and back, and then you pitch the lever the same way you want to pitch the plane: forward to pitch forward and back to pitch back.

Same reason throttles are pushed forward to go faster and pulled back to go slower. Except on bulldozers, which have a decelerator for some reason, and game controller shoulder levers, for ergonomic reasons.

I think if the lever were mounted to move up and down, they (the Wright brothers) probably would have wired it to pitch the plane up and down. I am not sure why it was not mounted that way; probably a combination of arm strength, ergonomics of movement, and simplicity of the mechanical design.


I think a big part of it, historically, is that this control scheme provides negative feedback, which may help stabilize the controls.

Think about the inertia of the pilot and their limbs inside the plane, acting on the controls. A sudden acceleration/jerk in the direction of the control signal will bias the operator's body to input the opposite control signal unless they are tensed up and prepared to maintain it in spite of the forces they experience.

If the nose pitches up suddenly, you're likely to push the yoke forward. If it pitches down suddenly, you're likely to pull back a bit. Similarly, if the plane (or boat) jerks forward, you are more likely to pull back on the throttle than push it forward. A sudden airplane roll will bias you to input the opposite aileron signal.

Even in a car, if you are holding the top half of the wheel as in the classic 10-and-2 grip, a sudden turn will cause you to counter steer a bit as you experience the centripetal force effect pulling you towards the outside of the turn.

If the controls were inverted, all these default inputs would instead cause positive feedback and seem more likely to send a vehicle out of control.


> I think a big part of it, historically, is that this control scheme provides negative feedback

> Think about the inertia of the pilot and their limbs inside the plane, acting on the controls. A sudden acceleration/jerk in the direction of the control signal will bias the operator's body to input the opposite control signal unless they are tensed up and prepared to maintain it in spite of the forces they experience.

That is completely backwards, sorry.

If the nose pitches up suddenly, the pilot tends to fall backwards, downhill. If the pilot holds the yoke like a handle, he commands further pitch up, which causes him to fall backwards more... The opposite is also true: a sudden pitch down causes an unrestrained pilot to fall forward onto the controls, commanding further pitch down, and so on.


I'm not talking about some weird steady-state condition, where gravity dominates the situation. I'm talking about sudden rotations and what happens before you even have time to react.

Also, I'm talking about the planes where these control schemes were developed a century ago. In these, center of gravity and the aerodynamic center and the pilot were all relatively close to each other. This is different on some modern airliner where the pilot is perched far out in front of the wings.

In those old planes, a pitch or roll would literally rotate the plane around the pilot who momentarily continues in their original orientation. If you are sitting at your desk with hands on a keyboard and I suddenly pitch your desk up, your hands will push further into the keyboard rather than fall away. This is how the pilot would experience the sudden pitch-up as well.


Nope, sorry, you're wrong. I've spent hundreds of hours flying small airplanes, my earlier comment explained what actually happens in real life.


Hmm, color me confused... I am not a pilot but heard this from pilot acquaintances and thought I understood the physics. I wonder if we're still talking about different time intervals though? I'm talking about the immediate effect in situations like buffeting/turbulence, in milliseconds before the pilot's nervous system can even react.

Think of the pilot and plane being in level flight and the nose suddenly pitches up. In that moment, the seat jerks away from the pilot's back and the instrument panel and windshield jerk towards the pilot. In that moment is where I believe there would be some negative input to the yoke. The plane is pitching up but the pilot is still level.

I understand that, eventually, the pilot will also pitch up, due to pressure from the seat bottom and tension from straps. In the longer time scale where that happens, the pilot's nervous system also allows them to intentionally modify their control inputs.


What's sad is how difficult it is to write software today. In the old days your dad could buy a C64 and cobble together an application. It should be vastly easier to do the same kind of thing with vastly better building blocks today. Why can't some Grandma drag and drop some widgets and have a recipe manager with sharing features amongst her friends and family?


Here are a few disorganized thoughts, in good faith:

1. Because half her friends and family are on iOS, and that means fighting the App Store. (This is a social problem essentially, in fighting Apple)

2. Because networking is hard. How would you have shared recipes with a computer in the C64 days? Email? BBS? (There are partial technical solutions to this, but they would require people to run something like friend-to-friend overlay networks)

3. Because most stuff happens in web browsers and that means pay-to-play, or vendor lock-in, or using AWS free tier and being a programmer. (Ass, grass, or cash, nobody hosts for free. Friend-to-friend networks may also help with this)

4. Because a recipe manager with sharing is best implemented as just emailing your recipes to your friends and storing them as txt files locally. Anything more complicated is beyond the scope of a Visual Basic-style drag-and-drop WYSIWYG anyway

5. When was drag-and-drop enough? The widgets need code behind them to save and open files, right?

6. You might be kinda onto something, and the longer I write async code the more I think the programming world is ready for another big pruning. Like when structured programming said "No goto, no long jumps; if-else and exceptions are all you need", we might be ready for "A single control thread which never blocks and never awaits, and delegates to worker tasks, is all you need until you are building some C100k shit"


When I was a teenager, I imagined that after a decade or two of working with computers, I would be able to write a computer game over a weekend. Or maybe two weekends. I had a notebook full of ideas and sketches, so that when I was ready, I would make all those amazing games.

I even made a few (quite simple, from today's perspective) games in Pascal during high school and university. I expected to become much more productive over years of practice.

That didn't happen, for several reasons.

First, my expectations for a good game have increased. I don't want to make the most complicated game possible; I am perfectly okay with 2D raster graphics and simple algorithms. But I expect a good game to have animations, sound effects, at least ten levels that feel different, and an option to save game progress. My old games barely had half of that (some were animated, some had ten or more levels; only one had both).

Second, things became more complicated. It is no longer "320 x 200 pixels, 256 colors". Windows are resizable; different screens have different sizes. Programs need to be aware that multiple threads exist. Sometimes there are many possible choices, and I get paralyzed choosing between them. Programs are expected to have installers; it is no longer enough to have one EXE file, and optionally a few data files, together in a ZIP file. It felt like every time I mastered something, a new problem appeared that needed solving.

Third, as a teenager I didn't realize how much my everyday work would differ from the kind of work necessary to make a computer game. Some skills are transferable: I am more comfortable with using threads, parsing data files, writing algorithms, the program architecture in general. But many skills are not: if my dream is to make a desktop application, then e.g. all the web frameworks that I have learned over those years are useless for this purpose; and they have cost me a lot of time and effort. So from the perspective of making computer games, as an adult I maybe learn in five years as many relevant things as I have learned as a teenager in one year, when I had lots of free time that I could dedicate to this goal.

Fourth, life gets in the way. There is much less free time, and many more things that I need or want to do during that free time.

So here I am, after a few decades of IT jobs, and (a) I can't really make a complete computer game over a weekend, and (b) it's irrelevant, because until my kids grow up I probably won't get a free weekend anyway. Or rather, even the rare "free" weekend (when the kids are away) is spent on other things that have higher priority.


How hard did you look? WordPress has a few recipe maker plugins if you didn't want to code anything. Just install one and password protect the whole thing, and then teach (and write instructions for) Grandma to use it.

In the age of powerful computers, you can use Hypercard on an emulated Mac, you can use any number of hypercard-clones out there. She can just use Google slides. etc.


I think the main difficulty is deployment. Grandma wants that recipe manager to be available to her family 24x7. How can she deploy it easily for free or very low cost? If there were a modern Hypercard, I think the key to its success would be making deployment extremely simple, reliable, and safe.


NextJS with Neon on Vercel has a capable free tier, and there's enough training data that LLMs are decent at it. If Grandma is that interested in building an app, I'm sure she'd love to spend a few hours with a grandkid to set things up and then being taught how to vibe code (and also how to call said grand kid for help).


There are platforms like Observable and Repl.It that just let you deploy code/data pretty quickly.


> Observable

> Repl.it

Sorry, Grandma has never heard of those fancy-pansy platforms.


How about a very reliable device with an interface that even a child can understand, deployment process that consists of only one action, and no upkeep costs? Like, for example, a notebook... Not everything has to be computerized or software.


> Why can't some Grandma drag and drop some widgets and have a recipe manager with sharing features amongst her friends and family?

Because there's no money in it.


How were pinball machines built without a programmable computer? https://www.youtube.com/watch?v=ue-1JoJQaEg I think the arcade industry was already comfortable dealing with the complexity of making mechanical games. The Rube Goldberg nature of the early video games probably wasn't that much of a jump in effort/engineering.


This sounds like the advice to prefer the variable name 'ii' over 'i' because you can easily search for it. I loathe such advice because it causes the code to become ugly. Similarly, there are 'Yoda conditions', which make code hard to comprehend while solving an insignificant error that is easily caught with tooling. The problem with advice like this is that you will encounter deranged developers who become obsessive about such things and make the code base ugly trying to implement dozens of style rules. Code should look good. Making a piece of text look good for other humans to comprehend I consider to be job #1 or #2 for a good developer.


> 'ii' over 'i'

You don't need to search for local variables, nobody names global variables "i" - so the "ii" advice is pointless.

You often do need to search for places where global stuff is referenced, and while IDEs can help with that, the same things that break grepability often break "find references" in an IDE. For example, if you dynamically construct function names to call, play with reflection, the preprocessor, macros, etc.

So it's good advice to avoid these things.

> you will encounter deranged developers that become obsessive about such things and make the code base ugly

You can abuse any rule, including

> Code should look good.

and I'd argue the more general a rule is - the more likely it is to be abused. So I prefer specific rules like "don't construct identifiers dynamically" to general "be good" rules.


> The problem with advice like these is you will encounter deranged developers that become obsessive about such things and make the code base ugly trying to implement dozens of style rules

That's more of a "deranged developer" problem than a problem with the guidelines themselves. E.g. I think his `getTableName` example is quite sensible, but also one which some dogmatic engineers would flag and code-golf down to the one-liner.


> prefer the variable name 'ii' over 'i'

Vim users don't have this issue, since you can * a variable name you're looking for and that enforces word boundaries.

PHP developers also don't have this issue: the $ before is a pretty sure indicator you're working with a variable named i and not a random i in a word or comment somewhere

Y'all should just use proper tools. No newfangled Rust and Netbeans that the kids love these days, the 80s had it all!

(Note this is all in jest :) I do use vim and php but it's obviously not a reason to use a certain language or editor; I just wondered to myself why I don't have this problem and realised there's two reasons.)


> the advice to prefer the variable name 'ii' over 'i' because you can easily search for it

\bi\b is the easy way to search for i.
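For instance, with GNU grep (the file and its contents here are a made-up example):

```shell
# create a tiny sample file
printf 'int i = 0;\nif (done) i++;\nint idx;\n' > /tmp/sample.c

# \b enforces word boundaries: matches the variable i,
# but not the i inside "if", "int", or "idx"
grep -n '\bi\b' /tmp/sample.c   # matches lines 1 and 2

# -w is the equivalent whole-word flag
grep -nw 'i' /tmp/sample.c
```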


Those things only make the codebase "ugly" until you learn how to read it.


> This sounds like the advice to prefer the variable name 'ii' over 'i' because you can easily search for it

I've never heard of that advice. I honestly like algebraic names (single letters) as long as they're well documented in a comment or aliasing another, longer name.

> there are 'YODA Conditions' which make code hard to comprehend which solves an insignificant error that is easily caught with tooling

Yoda conditions [0] are a useful defensive programming technique and do not reduce readability except to someone new to them. I'd argue they improve readability, particularly for me.

As for tooling... it doesn't catch every case for every language.

> I loath such advice because it causes the code to become ugly.

Beauty is in the eye of the beholder. While I appreciate your opinion, I also reject it out of hand for professional developers. Instead of deciding whether code is "ugly" perhaps you should decide whether the code is useful. Feel free to keep your pretty code in your personal projects (and show them off so you can highlight how your style really comes together for that one really cool thing you're doing).

> you will encounter deranged developers that become obsessive about such things

I don't like being called deranged, but I am definitely obsessed with eliminating whole classes of bugs just by having the coding design and style not allow them to happen. If safe code is "ugly" to you... well, then I consider myself to be a better developer than you. I'd rather have ugly code that's easily testable than pretty code that's difficult to test in isolation, which is what most developers end up writing.

> Code should look good. Making a piece of text look good for other humans to comprehend I consider to be job #1 or #2 for a good developer.

It depends on the project. Just remember that what looks good to you isn't what looks good to me. So if it's your personal project, then make it look good! If it's something we're both working on... then expect to defend your stylistic choices with numbers and logic instead of arguments about "pretty".

Then, from the article:

> Flat is better than nested

If I'm searching for something in JSON I'm going to use jq [1] instead of grep. Use the right tools for the right job after all. I definitely prefer much richer structured data instead of a flat list of key-value pairs.
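For example, with a hypothetical recipes.json, jq can search just the fields you care about instead of the raw text:

```shell
# made-up data file for illustration
printf '%s' '{"recipes":[{"name":"Apple Pie"},{"name":"Beef Stew"}]}' > /tmp/recipes.json

# structure-aware search: only the "name" fields, case-insensitive regex
jq -r '.recipes[].name | select(test("pie"; "i"))' /tmp/recipes.json
# -> Apple Pie
```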

[0] https://en.wikipedia.org/wiki/Yoda_conditions

[1] https://en.wikipedia.org/wiki/Jq_(programming_language)


Here's a kick-starter kit that does what you're asking for: https://www.stgeotronics.com/open-dsky I built one and loved it, I wrote my own Arduino C code to drive the device: https://github.com/kjs452/KennysOpenDSKY


What a great project, and thanks for sharing. I wonder how peripherals are handled, especially thrusters.


Maybe we need 'programmable visuals' instead of 'visual programming'? Why can't I write a simple one hundred line text file and produce a nice architectural diagram?


Have you seen PlantUML?
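PlantUML is pretty much that: a short text file in, a rendered diagram out. A minimal sketch (the component names are made up):

```plantuml
@startuml
' ~10 lines of text that render to an architecture diagram
package "Web Tier" {
  [Load Balancer] --> [App Server]
}
package "Data Tier" {
  [App Server] --> [Postgres]
  [App Server] --> [Redis Cache]
}
@enduml
```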


I'm curious how these Turing machines can resemble problems which take input. BB(n) is defined over n-state Turing machines that start off with an empty tape, while Collatz(n) is how many steps are taken before it terminates when starting with input 'n'.

Does this mean a BB(6) machine which resembles Collatz is testing all possible values as part of its program and not as part of anything on the tape (since the tape starts out empty)?


It's not testing all values, but one particular starting point. In all likelihood, it will never reach its stopping condition from this starting point, but proving this even for a single value is currently intractable. Compare with the "5x + 1" variant of the Collatz conjecture, where many values are believed (but not proven) to run off to infinity, never to return.


Edit:nvm see thread

For Collatz, the empty-input machine loops over all natural numbers and halts if it finds one which doesn't eventually reach 1.

To prove that it never halts, you'd have to prove the Collatz conjecture. Otherwise you'd have to find the smallest counterexample to the Collatz conjecture.


Suppose that there exists a natural number that diverges to infinity under the Collatz map. Then the Collatz conjecture would be false, but your machine would still run forever on that diverging number. As far as I am aware, there is no known machine that halts iff the Collatz conjecture is false.


You are correct, thank you


Very sad for me, he was one of my favorite thinkers and his books were the few that made me feel smarter after having read them. His thinking tools remain a great aid to my thinking. The reason for this post though, is to mention that Darwin also died on April 19th.


The best thing I have done for making my notes usable is a simple grep script that runs anywhere and searches all the notes files I have. The script is case insensitive and matches lines that contain all the strings you searched for.
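A minimal sketch of such a script (POSIX sh; the NOTES_DIR variable and the flat .txt layout are assumptions, not the commenter's actual setup):

```shell
#!/bin/sh
# notegrep: print note lines containing ALL arguments, case-insensitively
dir="${NOTES_DIR:-$HOME/notes}"
out=$(cat "$dir"/*.txt)
for term in "$@"; do
    # narrow the running result by each search term in turn
    out=$(printf '%s\n' "$out" | grep -i -- "$term")
done
printf '%s\n' "$out"
```

Usage would be e.g. `notegrep chocolate cake`, printing every line that mentions both words, in any order and any case.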


Indeed. For Emacs users, there's consult-notes-search-in-all-notes which does that with file previews.


To that end I recently wrote about several modern ways to convert handwritten notes to searchable notes. I primarily write handwritten notes because of the mental benefits of doing so.

https://notes.joeldare.com/handwritten-text-recognition


Couldn’t agree more. Make your notes discoverable by search would be my number one recommendation.

