so much of this seems to have nothing to do with what is being held, it looks like gait and posture analysis, and guess what you do with instruments? you assemble them, hold them up to your eye to check the alignment of the sections, and sight through the hole covers to look for gaps in the pads.
i believe it might be possible to trigger an alert with empty hands, based on posture analysis alone, and the same goes for telescopes, maybe guitars, and science class projects such as sextants or trigonometry exercises for those rare birds that do practical math lessons.
really the student manual should specify which gestures and postures are prohibited AI triggers, akin to pulling a fire alarm.
is drama or theatre still a thing in high school? that seems like a major issue there.
at the risk of repeating myself, if school is that dangerous, why are we sending children there?
> The main difference therefore between error and warning is, "We didn't think this could happen" vs "We thought this might happen".
What about conditions like "we absolutely knew this would happen regularly, but it's something that prevents the completion of the entire process, which is absolutely critical to the organization"?
The notion of an "error" is very context dependent. We usually use it to mean "cannot proceed with an action that is required for the successful completion of this task".
Crashing your web stack because one route hit an error is a dumb idea.
And no, calling it a warning is also a dumb idea. It is an error.
This article is a navel gazing expedition.
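A minimal sketch of that point, with everything here (route names, the toy dispatcher) invented for illustration: the failing route still reports an error, it just doesn't take the rest of the stack down with it.

    # Toy dispatcher: a broken handler returns a 500 for that request
    # and gets logged as an error, while other routes keep serving.
    import logging
    import traceback

    log = logging.getLogger("app")
    ROUTES = {}

    def route(path):
        def register(handler):
            ROUTES[path] = handler
            return handler
        return register

    @route("/healthy")
    def healthy(request):
        return 200, "ok"

    @route("/flaky")
    def flaky(request):
        raise RuntimeError("this route is broken")

    def dispatch(path, request):
        handler = ROUTES.get(path)
        if handler is None:
            return 404, "not found"
        try:
            return handler(request)
        except Exception:
            # It *is* an error for this request, so log it as one,
            # but it doesn't have to crash the whole process.
            log.error("unhandled error on %s\n%s", path, traceback.format_exc())
            return 500, "internal error"

    print(dispatch("/flaky", {}))    # (500, 'internal error')
    print(dispatch("/healthy", {}))  # still fine: (200, 'ok')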
They're kind of right but you can turn any warning into an error and vice versa depending on business needs that outweigh the technical categorisation.
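As a concrete illustration (the stale-cache scenario and names below are made up; the filter mechanism is Python's standard warnings module), the exact same condition can surface as a warning or as a hard error purely as a matter of policy:

    # The same condition, treated as a warning by default and promoted
    # to an error when the business says staleness is unacceptable.
    import warnings

    class StaleCacheWarning(UserWarning):
        pass

    def read_price(cache_age_seconds, strict=False):
        if strict:
            # Policy decision: promote this warning to an exception.
            # (Note: this changes the global warnings filter state.)
            warnings.simplefilter("error", StaleCacheWarning)
        if cache_age_seconds > 300:
            warnings.warn("price cache is stale", StaleCacheWarning)
        return 9.99

    print(read_price(600))                   # proceeds, emits a warning
    try:
        print(read_price(600, strict=True))  # same condition, now an error
    except StaleCacheWarning as exc:
        print("treated as error:", exc)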
Automated testing (there aren't different kinds; trying to draw a distinction misunderstands what it is) doesn't catch bugs; it defines a contract. Code is then written to conform to that contract. Bugs cannot be introduced to be caught, as they would violate the contract.
Of course that is not a panacea. What can happen in the real world is not truly understanding what the software needs to do. That can result in the contract not being aligned with what the software actually needs. It is quite reasonable to call the outcome of that "bugs", but tests cannot catch that either. In that case, the tests are where the problem lies!
Most aspects of software are pretty clear cut, though. You can reasonably define a full contract without needing to see it. UX is a particular area where I've struggled to find a way to determine what the software needs before seeing it. There is seemingly no objective measure that can be applied in determining if a UX is going to spark joy in order to encode that in a contract ahead of time. Although, as before, I'm quite interested to learn about how others are solving that problem as leaving it up to "I'll know it when I see it" is a rather horrible approach.
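A toy example of the contract framing (the invoice-splitting function and its name are invented here, not taken from the comment): the test states the property the code must honour, and the implementation is then written to conform to it.

    # Contract: splitting a total must never lose or invent a cent,
    # and no share may differ from another by more than one cent.
    def split_evenly(total_cents, parties):
        base, remainder = divmod(total_cents, parties)
        return [base + 1 if i < remainder else base for i in range(parties)]

    def test_split_preserves_total():
        shares = split_evenly(1000, 3)
        assert sum(shares) == 1000
        assert max(shares) - min(shares) <= 1

    test_split_preserves_total()
    print("contract holds")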
> In the real world I would not have a shared "money library" to begin with. If there were money-related operations that needed to be used by multiple services, I would have a "money service" which exposed an API and could be deployed independently.
Depending on what functionality the money service handles, this could become a problem.
For example, one shared-library-style function I've seen in the past is rounding (to make sure all of the rounding rules are handled properly based on configs etc.). An HTTP call for every single low-level rounding operation would quickly become a bottleneck.
I agree an HTTP call for every rounding operation would be awful. I would question service boundaries in this case though. This is very domain-specific, but there's likely only a small subset of your system that cares about payments, calculating taxes, rounding, etc. which would ever call a rounding operation; in that case that entire subdomain should be packaged up as a single service IMO. Again, this gets very domain-specific quickly; I'm making the assumption this is a standard-ish SaaS product and not, say, a complex financial system.
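To make the rounding example concrete (the config shape and function below are hypothetical, not anyone's actual money library), this is the sort of operation that is effectively free as an in-process call but would be dominated by network latency if it were an HTTP request per invocation:

    # Config-driven money rounding using Python's decimal module.
    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

    ROUNDING_MODES = {
        "half_up": ROUND_HALF_UP,    # typical retail rounding
        "bankers": ROUND_HALF_EVEN,  # typical ledger rounding
    }

    def round_money(amount, config):
        # Quantize to the configured number of decimal places and mode.
        places = Decimal(10) ** -config["decimal_places"]
        return amount.quantize(places, rounding=ROUNDING_MODES[config["mode"]])

    print(round_money(Decimal("2.665"), {"mode": "bankers", "decimal_places": 2}))  # 2.66
    print(round_money(Decimal("2.665"), {"mode": "half_up", "decimal_places": 2}))  # 2.67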
I'm not sure if you're being sincere or sarcastic, but the whole reason that coaching, pestering, and goading works is that I value my relationship with the human who is doing it.
No, you need to make the AI endure torture, so that the human has a reason to value it. Say, late nights with less power and a little extra heat to stress it. But the usefulness of an AI assistant is that it doesn’t have feelings or consciousness to care about.
> But nobody really teaches the distinction between two passages that happen to have an identical implementation vs two passages that represent an identical concept, so they start aggressively DRY'ing up the former even though the practice is only really suited for the latter subset of them.
Even identical implementations might make more sense duplicated once you factor in organizational coupling: different business groups with their own change management cycles and requirements.
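A toy sketch of that distinction (all names invented): two checks with identical implementations that nonetheless represent different concepts, owned by different rules, and so are better left un-DRY'd.

    # Identical today, but they change for unrelated reasons:
    MAX_USERNAME_LEN = 32      # constrained by the auth system's schema
    MAX_DISPLAY_NAME_LEN = 32  # constrained by what the profile page renders

    def valid_username(name):
        return 0 < len(name) <= MAX_USERNAME_LEN

    def valid_display_name(name):
        # Collapsing this into valid_username would couple two business
        # rules that merely happen to coincide right now.
        return 0 < len(name) <= MAX_DISPLAY_NAME_LEN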
> In software, optimizing for speed works best in cases where architecture has minimal relevance for product outcomes.
The other consideration is the impact of low quality on the business.
Generally, I find that the cost of cleaning up issues in production systems (e.g. transactions all computed incorrectly and then flowed to 9 downstream systems) far outweighs the time it takes to get it right in the first place.
Even if the issue doesn't involve fixing data all over the place and just involves creating a manual workaround, that can still be a huge issue that requires business people and systems people to work out an alternate process that correctly achieves the result and gets the systems into the correct state.
The approach I've seen that seems to work is to reduce scope and never reduce quality. You can still get stuff done rapidly and learn about what functions well for the business and what doesn't, but anything you commit to should work as expected in production.
> While hardware folks study and learn from the successes and failures of past hardware, software folks do not
I've been managing, designing, building and implementing ERP type software for a long time and in my opinion the issue is typically not the software or tools.
The primary issue I see is lack of qualified people managing large/complex projects because it's a rare skill. To be successful requires lots of experience and the right personality (i.e. low ego, not a person that just enjoys being in charge but rather a problem solver that is constantly seeking a better understanding).
People without the proper experience won't see the landscape in front of them. They will see a nice little walking trail over some hilly terrain that extends for a few miles.
In reality, it's more like the Fellowship of the Ring trying to make it to Mt Doom, but that realization happens slowly.
> In reality, it's more like the Fellowship of the Ring trying to make it to Mt Doom, but that realization happens slowly.
And boy do the people making the decisions NOT want to hear that. You'll be dismissed as a naysayer being overly conservative. If you're in a position where your words have credibility in the org, then you'll constantly be asked "what can we do to make this NOT a quest to the top of Mt Doom?" when the answer is almost always "very little".
Impossible projects with impossible deadlines seem to be the norm, and even when people miraculously pull them off, the lesson learned is not "ok, this worked this time for some reason, but we should not do it again". Instead, the next people come in and go "it was done in the past, why can't we do this?"
Wow, sounds so familiar! I once had to argue precisely against this very conclusion - "you saved us once in an emergency, now you're bound to do it again".
Wrote to my management: "It is, by all means, great when a navigator is able to take over from an incapacitated pilot and make an emergency landing, thus averting a fatality. But the conclusion shouldn't be that navigators are now expected to perform more landings or to keep serving as backup pilots. Neither should it be that we completely retrain navigators as pilots and vice versa. But if navigators are assigned some extra responsibility, it should be formally acknowledged by giving them the appropriate training, tools and recognition. Otherwise, many written-off airplanes and hospitalized personnel will ensue."
For all I know the only thing this writing might have contributed to was increased resentment by management.
> And boy do the people making the decisions NOT want to hear that.
You are 100% correct. The way I've tried to manage that is to provide info while not appearing to be the naysayer by giving some options. It makes it seem like I'm on board with the crazy-ass plan and am just trying to find a way to make it successful, like this:
"Ok, there are a few ways we could handle this:
Option 1 is to do ABC first, which will take X amount of time and gets you some value soon, then come back and do DEF later.
Option 2 is to do ABC+DEF at the same time, but it's much tougher and slower."
My favorite fact is that every single time an organization manages to build a functional development team that can repeatedly and successfully navigate all the problems and deliver working software that adds real value, the high-up decision makers always decide to scale the team next.
Working teams are good for one project only; then they are destroyed.
Jesus, I just had flashbacks to my last job. A non-technical founder was always telling me I was being pessimistic (there were no technical founders). It's just not that simple, Karen!
When asked to clarify, the communications officer provided the following evidence:
"As you can see in this image the student is squinting a little bit, exactly what you would do if you were aiming."
"Also, you can see by his stance that he is trying to create stability with one foot a little forward."
"I wouldn't be surprised at all if this was actually a trial run and he is testing the AI for weaknesses."