btreecat's comments | Hacker News

When the only real law is "might is right" we have a fundamental issue with upholding any sort of justice.

Lately I've been using my desktop keyring/wallet to store secrets encrypted at rest. On login they get injected into my shell directly from the secure storage, which is unlocked as part of login.

I feel this is probably better than plain text, but if my machine gets popped while I'm logged in, you likely have access to active browser sessions between MFA flows and could do more damage that way.
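
For anyone curious, a minimal sketch of the idea, assuming a libsecret-backed keyring and the secret-tool CLI; the attribute names and the FORGE_TOKEN variable are just placeholders:

    # store the secret once; the value is prompted for and lands encrypted in the keyring
    secret-tool store --label="Forge API token" service forge account me

    # in ~/.profile or a login hook, once the keyring has been unlocked by login:
    export FORGE_TOKEN="$(secret-tool lookup service forge account me)"

The token never sits in a dotfile in plain text, but as noted, anything running inside the logged-in session can still read the environment.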


This is how we did things with Jenkins and GitLab runners before; idk why folks would do it differently for GHA.

If you can't run the same scripts locally (minus any externally hosted services/APIs), then how do you debug them w/o running the whole pipeline?
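
A rough sketch of the layout I mean, with hypothetical script names; the job step in the workflow is nothing more than a call into the script:

    #!/usr/bin/env bash
    # ci/test.sh -- the GHA/GitLab/Jenkins step just runs "./ci/test.sh",
    # so debugging means running this exact script on your own machine
    set -euo pipefail
    make build
    make test

Keep the runner-specific bits (checkout, caching, secret injection) in the pipeline config, and everything else in scripts you can run anywhere.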


I assume you're using the currently recommended docker-in-docker method. The legacy GitLab way is horrible and it makes it basically impossible to run pipelines locally.

Containers all the way down

GitHub introduced all their fancy GHA apps or callables or whatever they're called for specific tasks, and the community went wild. Then people built test and build workflows entirely in GHA instead of independently of it, and added tons of complexity, to the point where they have a whole build-and-test application written in GHA YAML.

How does this compare to pudb? That has a nice TUI and drops you into one of several Python REPLs you can choose from.

hg sees history as useful metadata, and therefore you shouldn't dress it up artificially.

git allows folks to go pinkies-out with their commit history for the warm and fuzzies.

If you think editing history is a grand idea that should be used regularly (like with rebase) then I already know you likely haven't been responsible for a large, mature code base. Where you'd rather have every comment, change and scrap of info available to understand what you're trying to maintain because the folks before you are long gone.


There's a huge divide between abusing rebase in horrible ways to modify published history, and using it to clean up a patch series you've been working on.

"Oops, I made a mistake two commits ago, and I'd really like to get some dumb print statements I added out before I send this off to get merged" is perfectly valid; I just did it yesterday. A quick `git commit --fixup` followed by `git rebase -i --autosquash HEAD~3` and the dumb debugging code I'd left in was stripped out.
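
Spelled out, assuming the offending commit is two back from the tip, the flow looks something like:

    git add -p                            # stage just the removal of the debug prints
    git commit --fixup=HEAD~2             # creates a "fixup! ..." commit targeting that commit
    git rebase -i --autosquash HEAD~3     # folds the fixup back into the original commit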

Then, there's other perfectly valid uses of rebase, like a simple `git rebase main` in an active development branch to reparent my commits on the current HEAD instead of having my log messed up with a dozen merge commits as I try to keep the branch both current and ready to merge.
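
For the record, that flow is just (assuming the branch was cut from an older main):

    git fetch origin
    git checkout my-feature      # hypothetical branch name
    git rebase origin/main       # replay my commits on the current tip, no merge commits in the log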

So, yes, I do think editing history is a grand idea that should be used regularly. It lets me make all the stupid "trying this" and "stupid bug" commits I want, without polluting the global history.

Or, are you telling me you've also never ended up working on two separate tasks in a branch, thinking they would be hard to separate into isolated changes, and they ended up being more discrete than you expected so you could submit them as two separate changes with a little help from `git cherry-pick` and `git rebase` too?
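
A sketch of that split, with made-up branch names and commit IDs:

    # the mixed-up work lives on 'wip'; peel it apart into two clean branches off main
    git checkout -b task-a main
    git cherry-pick abc1234 def5678      # the commits that belong to the first change
    git checkout -b task-b main
    git cherry-pick 9abcdef              # the commit(s) that belong to the second change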

Editing history isn't evil. Editing history such that pulls from your repository break? That's a different story entirely.


Editing history lets people hide information, intentionally or not. You are bold to claim you know what information future people will need better than they do. What does it matter if you have an extra commit to remove a file before merge? Perfectly valid, and it doesn't hide anything.

Caring more about a "visually pleasing" log when you could care about an information-rich log doesn't jibe with me. Logs aren't supposed to be "clean".

If I want features in two branches, I make two branches. Cherry-pick is also bad for most people, most of the time.


I care about having a commit log that's useful and easy to scan through; it's not about it being "visually pleasing". Having a dozen "oopsie" commits in the log doesn't make my life any easier down the road; all it does is increase noise in the history.

Again, once something hits `main` or a release/maintenance branch, history gets left the hell alone. But there really is no context to be gained from having commits for stupid things like fixing typos, stripping out printf() debug statements, etc. in the log before a change gets merged.


> Editing history lets people hide information, intentionally or not. You are bold to claim you know what information future people will need better than they do.

You're already deciding what information is important to the future when you decide at which points you commit.

Reductio ad absurdum: why not commit every keystroke, including backspaces? By not including every keystroke, you are hiding information from future people!


When using systems without history editing I simply make an order of magnitude fewer commits. With git I basically commit every few minutes and then edit later, and with not-git I simply don't commit until it's "ready". Either way all of those intermediate states aren't getting published or saved forever.

you might want to check on what you "know" then....

[flagged]


Nice technical argument you've got here.

What's your excuse then?

just so you know, this line

> Where you'd rather have every comment, change and scrap of info available to understand what you're trying to maintain because the folks before you are long gone.

See, that's a common story from a legacy way of working, back when everyone wrote Perl/PHP scripts and shoved them all into a repo.

The way that people years from now understand what someone else did is that, when that someone does the thing, it's presented for code review. That is, your patch does not go in at all if nobody else knows how it works. You present each change as a logical series of commits, without lots of noise like "fixed typo" or "oops forgot this test", just the way people present patches on the LKML (this is why Linus "got it" before anyone else did), and then other people review it, which is where it's established that "this change makes sense, I understand why and how you did it, and it has good tests".

When you work on a project that is truly long term, you yourself need these records to understand what you did 10 or 15 years ago. So there's no issue that short-term history was modified; this is actually essential, because what you're doing is editing the story of how a change came about and presenting it for review. Having it be a long series of small commits that sometimes reverse each other is not going to help anyone understand a particular feature or change; it's noise.


I'm not sure what XP you're speaking from; I see modern-day companies with all the faults of orgs past, because the tooling isn't saving anyone from human tendencies.

If your problem with commit history is that folks have too many useless commits and you can't personally be bothered to focus on the meat of the PR, that's a problem with the commit author and the PR reviewer, not a fundamental need to prune logs.


That sounds like y'all may have been storing binary blobs and large files without the right plugins set up.

Just rename the root directory of the project and double the size of your repo. Mercurial didn't support renames and did a delete/add instead, so the size of the repo grows pretty fast.

It's a reverse funnel system

> There generally aren't underfed people in the US.

This just simply isn't true.

> The opposite is a far bigger issue.

I'm sorry but what's the basis for this claim?


I'm not the person you asked, but I assume their basis is that the majority of the adult US population is overweight or obese.[1]

However, we're conflating the related problems of hunger, food insecurity, and malnutrition. Food insecurity at its most extreme will result in hunger (a lack of any food), but the affordable food that is available in food deserts (and at food banks) is often ultraprocessed and incompletely nutritious, which can lead to obesity.[2]

Largely, Americans don't seem to be affected by "hunger" as defined by the United Nations Food and Agriculture Organization[3], but are very affected by malnutrition and food insecurity (as defined by that same body).

1: https://www.niddk.nih.gov/health-information/health-statisti...

2: https://pmc.ncbi.nlm.nih.gov/articles/PMC9790279/#jhn12994-s...

3: https://en.wikipedia.org/wiki/Hunger#Definition_and_related_...


> The system operated in ignorance of what food banks needed.

Clearly the root of the problem. Straw-manning "central planning" is a perverted way to characterize the failure.


Straw manning? One of the earliest critiques of central planning was its inability to learn and respond to unforeseen complexity in the real world. https://en.wikipedia.org/wiki/Economic_calculation_problem


The solution proposed was to adopt a plan from some other centralized committee.

Given the increased success of the second solution, the committee being common to both likely wasn't the problem. The difference was the ability to take into account differences in resource need and utility. That could have been done by the first group, and would likely have produced a better result.

Central planning doesn't require you to ignore the needs of the people you're planning for.


> That's not true. From the little time I've spent trying to read and write some simple programs in BF, I recall good examples being pretty legible.

Anything in a reasonably familiar typeface and size will continue to be legible; however, brainfuck is not easily human-parsable.

That greatly reduces its ability to be _read and mentally internalized._ Without that, are you really doing software engineering, or are you actually a software maintenance person?

A janitor doesn't need to understand how energy generation works just to change a light bulb.

