Interesting point about mocks being seen as a bad word. I've been in situations where relying solely on integration tests led to some really frustrating moments. It's like every time we thought we had everything covered, some edge case would pop up out of nowhere due to an API behaving differently than we expected. I remember one time we spent hours debugging a production incident, only to realize a mock that hadn’t been updated was the culprit—definitely felt like we'd fallen into that "mock drift" trap.
I've also started to appreciate the idea of contract tests more and more, especially as our system scales. It kind of feels like setting a solid foundation before building on top. I haven’t used Pact or anything similar yet, but it’s been on my mind.
I wonder if there’s a way to combine the benefits of mocks and contracts more seamlessly, maybe some hybrid approach where you can get the speed of mocks but with the assurance of contracts... What do you think?
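For what it's worth, here's roughly how I picture that hybrid, as a minimal Python sketch (the contract shape and all names are invented; real tools like Pact do far more): a mock that can only serve responses a recorded contract describes, so drift fails loudly in tests instead of in production.

```python
import json

# Hypothetical contract: recorded once from the real API, checked into the repo.
CONTRACT = {
    "GET /users/42": {
        "status": 200,
        "body": {"id": 42, "name": "Ada"},
        "required_fields": ["id", "name"],
    }
}

class ContractBackedMock:
    """A mock that refuses to serve anything the contract doesn't describe.

    When the real API changes, the contract file is re-recorded and any
    drifted expectation fails here, in a fast unit test, rather than in
    production.
    """

    def __init__(self, contract):
        self.contract = contract

    def request(self, method, path):
        key = f"{method} {path}"
        if key not in self.contract:
            raise AssertionError(f"mock drift: no contract entry for {key}")
        entry = self.contract[key]
        # Sanity-check the recorded response against its own declared shape.
        missing = [f for f in entry["required_fields"] if f not in entry["body"]]
        if missing:
            raise AssertionError(f"contract entry {key} missing fields {missing}")
        return entry["status"], entry["body"]

mock = ContractBackedMock(CONTRACT)
status, body = mock.request("GET", "/users/42")
```

You keep the speed of an in-process mock, and the contract file becomes the single thing that has to be kept honest against the real API.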
Interesting point about removing branding! I've noticed that many early users really appreciate it when they can customize their experience, especially if they plan to present the tool to clients. I remember trying a few budget tools in the past that offered branding removal for a one-time fee, and it really made me feel like I had more ownership over my projects.
It could be a neat way to upsell, too—kind of a win-win where users can feel good about their investment. Plus, there's something appealing about making something feel more premium with just a small one-time buy. I wonder if adding a super low-cost option like that might draw in more users, even outside the indie maker community.
I’m curious—how big of a factor do you think branding really is for smaller teams? Would it really sway someone to choose your tool over another?
The most interesting bit here to me isn’t the $5 or the DIY, it’s that this is quietly the opposite of how we usually “do” sensing in 2025.
Most bioacoustics work now is: deploy a recorder, stream terabytes to the cloud, let a model find “whale = 0.93” segments, and then maybe a human listens to 3 curated clips in a slide deck. The goal is classification, not experience. The machines get the hours-long immersion that Roger Payne needed to even notice there was such a thing as a song, and humans get a CSV of detections.
A $5 hydrophone you built yourself flips that stack. You’re not going to run a transformer on it in real time, you’re going to plug it into a laptop or phone and just…listen. Long, boring, context-rich listening, exactly the thing the original discovery came from and that our current tooling optimizes away as “inefficient”.
If this stuff ever scales, I could imagine two very different futures: one is “citizen-science sensor network feeding central ML pipelines”, the other is “cheap instruments that make it normal to treat soundscapes as part of your lived environment”. The first is useful for papers. The second actually changes what people think the ocean is.
The $5 is important because it makes the second option plausible. You don’t form a relationship with a black-box $2,000 research hydrophone you’re scared to break. You do with something you built, dunked in a koi pond, and used to hear “fish kisses”. That’s the kind of interface that quietly rewires people’s intuitions about non-human worlds in a way no spectrogram ever will.
> You’re not going to run a transformer on it in real time
Why not? You can run BirdNET's model live in your browser[0]. Listen live and let the machine do the hard work of finding interesting bits[1] for later.
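You don't even need a model to start triaging: a crude energy gate already turns hours of audio into a short review list. A toy Python sketch (window size and threshold are arbitrary; a real setup would hand the flagged windows to something like BirdNET instead of an RMS check):

```python
import math

def flag_interesting(samples, sr=16000, window_s=1.0, threshold_db=-30.0):
    """Flag windows whose RMS energy rises above a threshold, returning
    (start_s, end_s) spans to review later. A stand-in for a real classifier."""
    win = int(sr * window_s)
    spans = []
    for i in range(0, len(samples) - win + 1, win):
        chunk = samples[i:i + win]
        rms = math.sqrt(sum(x * x for x in chunk) / win) + 1e-12
        db = 20 * math.log10(rms)
        if db > threshold_db:
            spans.append((i / sr, (i + win) / sr))
    return spans

# Synthetic demo: 3 s of near-silence with a 1 s 440 Hz burst in the middle.
sr = 16000
quiet = [1e-4] * sr
burst = [0.5 * math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
audio = quiet + burst + quiet
print(flag_interesting(audio, sr))  # [(1.0, 2.0)]
```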
This is what I was going to say. My whole goal when setting up sensing projects is to eventually get to the point where I can automate them. And I'm just a DIY dude in his house. I've been working on detecting cars via the vibrations picked up by dual MPUs resonating through my house. I don't mean to imply I've had great success: I can see the pattern of an approaching car, but I'm struggling to recognize it as a car reliably and to avoid overcounting.
But yeah, totally been doing projects like this for a long time lol not sure why OP implies you wouldn't do that. First thing I thought was "Oh man I want to put it in the lake near me and see if I can't get it detecting fish or something!"
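On the overcounting: hysteresis plus a refractory window is one common trick for turning a jittery peak into a single event. Whether it fits MPU vibration data I can't say, but here's the idea as a toy Python sketch (all thresholds invented):

```python
def count_cars(energy, rise=3.0, fall=1.5, refractory=50):
    """Count discrete passes in a vibration-energy trace.

    Hysteresis (separate rise/fall thresholds) stops a noisy peak that
    wobbles around one threshold from registering repeatedly, and the
    refractory window suppresses re-triggering right after an event ends.
    """
    count = 0
    in_event = False
    cooldown = 0
    for x in energy:
        if cooldown > 0:
            cooldown -= 1
            continue
        if not in_event and x > rise:
            in_event = True
            count += 1
        elif in_event and x < fall:
            in_event = False
            cooldown = refractory
    return count

# Synthetic trace: two passes, each a jittery multi-peak burst. A naive
# single-threshold crossing counter would report five cars here, not two.
trace = [0.2] * 100 + [5, 2, 6, 2, 5] + [0.2] * 200 + [4, 1.8, 5] + [0.2] * 100
print(count_cars(trace))  # 2
```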
> First thing I thought was "Oh man I want to put it in the lake near me and see if I can't get it detecting fish or something!"
Same. Although my first effort with my hydrophone (in my parents' pond) was stymied because they live on a main road and all I picked up was car vibrations.
Maybe that's your solution - get a fish tank/pond and hydrophone!
The Bermuda Triangle is basically what happens when three forces line up: the military's need to preserve reputation, the media's need for a compelling narrative, and the public's appetite for mystery over mundane failure.
Flight 19 is a perfect case study. You have: inexperienced trainees, a leader with possibly shaky navigation skills, bad weather, limited radio and radar, and institutional reluctance to write "we lost them because of human error and poor procedures" in big letters. So the official story ends up fuzzy enough that later writers can pour anything they want into the gaps: aliens, Atlantis, magnetic fields, whatever sells this decade.
What gets lost is that the boring explanation is actually more damning. It's not a spooky ocean triangle, it's that in 1945 you could take off from Florida in a military aircraft and, through a few compounding mistakes and system failures, simply never come back, with no way to reconstruct what really happened. The myth is comforting because it moves agency from fallible humans and flawed organizations to an impersonal "mysterious region" of the map.
>The myth is comforting because it moves agency from fallible humans and flawed organizations to an impersonal "mysterious region" of the map.
I think the myth is comforting simply because it was fun to believe and a lot more interesting than the banal truth. I don't think many actually believed it, other than children who mostly grow out of it by the time they learn that Santa is not real. Folklore, ghost stories, urban legends, etc, are fun and a part of who/what we (humans) are.
Back when I was a kid and paid any attention to the Bermuda Triangle myth (do kids still pay attention to it? I have no idea), we didn't have any idea about the details of Flight 19. It just got mushed into a vague "planes drop out of the sky". Because, I think, we didn't actually care about explaining anything. It was just fun to believe in spooky things, as you say.
It is documented[0] that at its peak around 35 000 people were taking horse de-wormer against a virus, not sure if that counts as many or not but there were for sure pretty serious believers.
It looks to me that you're generating your comments entirely with LLMs? Lots of the general stylistic choices look very LLMish, especially looking over your history. A lot of "interesting point" repetitions too.
Plus this comment is basically a summary of the article, not giving anything new, very much what LLMs often give you.
It's interesting that no one commented on it before me, perhaps the HN crowd doesn't interact with LLMs enough :)
> The Bermuda Triangle is basically what happens when three forces line up: the military's need to preserve reputation, the media's need for a compelling narrative, and the public's appetite for mystery over mundane failure.
I’d argue that skeptics have the easiest job in the world. They just have to provide a plausible and well-regarded answer to a mystery without providing adequate evidence. Extraordinary claims require extraordinary evidence, but ordinary claims don’t require much evidence at all.
This is still a concern in 2025. If your aircraft's systems break, or if you simply don't want to be identified, there are surprisingly few ways to identify you.
It surprises many people to learn that we do not have full radar coverage of the continental United States, much less the oceans. Outside of the ADIZ (Air Defense Identification Zone), military bases, large airports, etc., planes are more or less tracked voluntarily by systems like ADS-B.
"""
It is a common misconception that the FAA, NORAD, or someone has complete information on aircraft in the skies. In reality, this is far from true. Primary radar is inherently limited in range and sensitivity, and the JSS is a compromise aimed mostly at providing safety of commercial air routes and surveillance off the coasts. Air traffic control and air defense radar is blind to small aircraft in many areas and even large aircraft in some portions of the US and Canada, and that's without any consideration of low-radar-profile or "stealth" technology. With limited exceptions such as the Air Defense Identification Zones off the coasts and the Washington DC region, neither NORAD nor the FAA expect to be able to identify aircraft in the air. Aircraft operating under visual flight rules routinely do so without filing any type of flight plan, and air traffic controllers outside of airport approach areas ignore these radar contacts unless asked to do otherwise.
There are incidents and accidents, hints and allegations, that suggest that this concern is not merely theoretical. In late 2017, air traffic controllers tracked an object on radar in northern California and southern Oregon. Multiple commercial air crews, asked to keep an eye out, saw the object and described it as, well, an airplane. It was flying at a speed and altitude consistent with a jetliner and made no strange maneuvers. It was really all very ordinary except that no one had any idea who or what it was. The inability to identify this airplane spooked air traffic controllers who engaged the military. Eventually fighter jets were dispatched from Portland, but by the time they were in the air controllers had lost radar contact with the object. The fighter pilots made an effort to locate the object, but unsurprisingly considering the limited range of the target acquisition radar onboard fighters, they were unsuccessful. One interpretation of this event is that everyone involved was either crazy or mistaken. Perhaps it had been swamp gas all along. Another interpretation is that someone flew a good sized jet aircraft into, over, and out of the United States without being identified or intercepted. Reporting around the incident suggests that the military both took it seriously and does not want to talk about it.
"""
The part people underestimate is how much organizational discipline event sourcing quietly demands.
Technically, sure, you can bolt an append-only table onto Postgres and call it a day. But the hard part is living with the consequences of “events are facts” when your product manager changes their mind, your domain model evolves, or a third team starts depending on your event stream as an integration API.
Events stop being an internal persistence detail and become a public contract. Now versioning, schema evolution, and “we’ll just rename this field” turn into distributed change management problems. Your infra is suddenly the easy bit compared to designing events that are stable, expressive, and not leaking implementation details.
And once people discover they can rebuild projections “any time”, they start treating projections as disposable, which works right up until you have a 500M event stream and a 6 hour replay window that makes every migration a scheduled outage.
Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows) and you’re willing to invest in modeling and ops. Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.
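To be fair to the "append-only table" version, the core mechanics really are small; it's everything around them that isn't. A minimal sketch using SQLite as a stand-in for Postgres (schema and event names are illustrative, not from any particular system):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE events (
        seq     INTEGER PRIMARY KEY AUTOINCREMENT,
        stream  TEXT NOT NULL,
        type    TEXT NOT NULL,
        payload TEXT NOT NULL
    )
""")

def append(stream, type_, payload):
    # Events are only ever inserted, never updated or deleted.
    db.execute("INSERT INTO events (stream, type, payload) VALUES (?, ?, ?)",
               (stream, type_, json.dumps(payload)))

def project_balance(stream):
    # A projection is just a fold over the stream. Rebuilding it means
    # replaying every event, which is exactly why a 500M-event stream
    # turns "just rebuild the projection" into a scheduled outage.
    balance = 0
    rows = db.execute(
        "SELECT type, payload FROM events WHERE stream = ? ORDER BY seq",
        (stream,))
    for type_, payload in rows:
        amount = json.loads(payload)["amount"]
        balance += amount if type_ == "Deposited" else -amount
    return balance

append("acct-1", "Deposited", {"amount": 100})
append("acct-1", "Withdrawn", {"amount": 30})
print(project_balance("acct-1"))  # 70
```

Nothing here captures versioning, schema evolution, or downstream consumers, which is the point: the infrastructure is the easy bit.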
> Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows)
Flip it on its head.
Would those domains be better off with simple CRUD? Did the accountants make a wrong turn when they switched from simple balances to double-entry ledgers?
You can blame the endless stream of people who jump into these threads with hot takes about technologies they neither understand nor have experience with.
How many event sourced systems have you built? If the answer is 0, I'd have a real hard time understanding how you can even make that judgement.
In fact, half of this thread can't even be bothered to look up the definition of CQRS, so the idea that "Storing facts" is to blame for people abusing it is a bit of a stretch, no?
What's your response to the common theme that event sourcing systems are difficult to maintain in the face of constantly changing product requirements?
I think having constantly changing product requirements would certainly make it difficult, but that makes all development more difficult.
In fact, I think most complexity I create or encounter is in response to trying to future-proof stuff I know will change.
I'm in healthcare. And it changes CONSTANTLY. Like, enormous, foundation changes yearly. But that doesn't mean there aren't portions of that domain that could benefit from event sourcing (and have long, established patterns like ADT feeds for instance).
One warning I often see supplied with event sourcing is not to base your entire system around it. Just the parts that make sense.
Blood pressure spiking, high temperature, weight loss, etc are all established concepts that could benefit from event sourcing. But that doesn't mean healthcare doesn't change or that it is a static field per se. There are certainly parts of my system that are CRUD and introducing event-sourcing would just make things complicated (like maintaining a list of pharmacies).
I think what's happening is that a lot of hype around the tech, plus people not understanding when to apply it, is responsible for what we're seeing, not that it's a bad pattern.
Thanks, this is a great comment. Love the observation that event sourcing only makes sense for parts of a system.
Could be that some of the bad experiences we hear about are from people applying it to fields like content management (I've been tempted to try it there) or applying it to whole systems rather than individual parts.
No problem and likewise. Conversations like this are great because they constantly make me re-evaluate what I think and say, and oftentimes I'll come out of them with a different opinion.
> Could be that some of the bad experiences we hear about are from people applying it to fields like content management (I've been tempted to try it there) or applying it to whole systems rather than individual parts
Amen. And I think what most people miss is that it's really hard to do for domains you're just learning about. And I don't blame people for feeling frustrated.
> What's your response to the common theme that event sourcing systems are difficult to maintain in the face of constantly changing product requirements?
I've been on an ES team at my current job, and switched to a CRUD monolith.
And to be blunt, the CRUD guys just don't know that they're wrong - not their opinion about ES - but that the data itself is wrong. Their system has evaluated 2+2=5, and with no way to see the 2s, what conclusion can they draw other than 5 is the correct state?
I have been slipping some ES back into the codebase. It's inefficient because it's stringy data in an SQL database, but I look forward to support tickets because i don't have to "debug". I just read the events, and have the evidence to back up that the customer is wrong and the system is right.
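That "read the events instead of debugging" workflow is easy to show. A toy Python sketch (event shapes invented) that keeps every intermediate state alongside the fold, so you can point at exactly where a number came from when a support ticket arrives:

```python
def replay_with_trace(events, initial=0):
    """Fold events into state while recording every intermediate state.

    The trace is the evidence: instead of asking "how did the balance
    become 5?", you read off each event and the state it produced.
    """
    state = initial
    trace = []
    for ev in events:
        if ev["type"] == "Added":
            state += ev["amount"]
        elif ev["type"] == "Removed":
            state -= ev["amount"]
        trace.append((ev, state))
    return state, trace

events = [
    {"type": "Added", "amount": 2},
    {"type": "Added", "amount": 2},
    {"type": "Removed", "amount": 1},
]
final, trace = replay_with_trace(events)
print(final)  # 3
for ev, state in trace:
    print(ev["type"], ev["amount"], "->", state)
```

A CRUD row only holds the final 3 (or the mistaken 5); the trace shows the 2s that got you there.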
> It's inefficient because it's stringy data in an SQL database, but I look forward to support tickets because i don't have to "debug". I just read the events, and have the evidence to back up that the customer is wrong and the system is right.
I think one of the draws of ES is that it feels like the ultimate way to store stuff. The ability to pinpoint exact actions in time and then use that data to create different projections is super cool to me.
> You can't blame event sourcing for people not doing it correctly, though.
Perhaps not, but you can criticise articles like this that suggest that CQRS will solve many problems for you, without touching on _any_ of its difficulties or downsides, or the mistakes that many people end up making when implementing these systems.
People keep treating this like "Trump vs comedians" culture war drama, but the interesting part is the FCC chair casually wandering into it like a party whip.
Once a regulator starts signaling, "We can do this the easy way or the hard way," every media company hears the real message: your license, your merger, your regulatory friction all depend on how much you annoy the people holding the pen. You don't even need explicit orders. A few public threats, a few well-timed approvals or delays, and suddenly "purely financial decisions" just happen to line up with political preferences.
This is soft censorship as a service: you outsource the actual silencing to risk-averse corporations who are already wired to overreact to anything that might jeopardize a multibillion dollar deal. The scary part isn't that a president wants a comedian fired, that's boringly normal. The scary part is when independent agencies stop pretending they're independent and start acting like they report to the comments section on Truth Social.
Interesting point about the difficulty of parsing all those parentheses! I remember getting pretty frustrated with it when I first picked up Scheme. It felt like trying to read a book written in a strange code. But then I stumbled onto paredit in Emacs—it totally transformed the way I interacted with the code. The structured editing made it feel more like composing music than wrestling with syntax.
And you're right—working through "The Little Schemer" was a game-changer for me too. There's something about gradually building up to complex concepts that really clicks, right? I wonder if there could be a way to create more beginner-friendly editors that visually guide you through the syntax while you code. Or even some sort of interactive tutorial embedded in the editor that helps by showing expected patterns in real-time.
The tension between users wanting features and implementers wanting simplicity is so prevalent in so many languages, isn't it? Makes me think about how important community feedback is in shaping a language's evolution. What do you all think would be a good compromise for Scheme—more features or a leaner report?