Kafka doesn't guarantee exactly-once delivery at all unless you're using Kafka Streams, and even then consumers of your final output topic still won't get exactly-once delivery; the consumer group protocol doesn't allow for it.
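To make that concrete, here's a rough sketch of where the knobs actually sit (the app and group names are made up; the config keys are the standard ones from kafka-streams and kafka-clients): Streams opts into transactions with processing.guarantee=exactly_once_v2, while a plain consumer of the output topic can at best set isolation.level=read_committed, which hides aborted records but still leaves delivery to that consumer at-least-once.

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class EosConfigSketch {
        public static void main(String[] args) {
            // 1) Kafka Streams: exactly-once *processing* inside its own
            //    read-process-write cycle, via transactions + idempotent producer.
            Properties streams = new Properties();
            streams.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-eos-app");      // made-up name
            streams.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            streams.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                        StreamsConfig.EXACTLY_ONCE_V2);

            // 2) A plain consumer of the output topic only gets "don't see aborted
            //    records" -- it still commits offsets itself and can reprocess after
            //    a crash or rebalance, i.e. at-least-once delivery to *it*.
            Properties consumer = new Properties();
            consumer.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            consumer.put(ConsumerConfig.GROUP_ID_CONFIG, "downstream-group");     // made-up name
            consumer.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        }
    }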
It's cheaper until you get to a sizeable workload, and the P90+ latency is ridiculous. The Kafka API is weak, and when you're not using the Kafka API you're limited on integration tools unless you want to be deeply locked in to GCP.
The lock-in argument is a non-starter with me. I rarely see people move between clouds (it happens, but it's incredibly rare), and that isn't because of lock-in but because the clouds are pretty close to equivalent. And if you wanted to go on-prem, you could replace the messaging system; it isn't one of the hardest steps of going on-prem.
You can just run Pulsar standalone, which is super simple; it's a great start and scales just fine on one machine. Once you outgrow one machine, you can move Pulsar to a distributed setup. The advantage is that everything stays familiar.
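As a rough sketch of what "stays familiar" looks like (the topic and subscription names are made up; the calls are the standard pulsar-client Java API): the same code talks to `bin/pulsar standalone` on localhost today and to a full cluster later, with only the service URL changing.

    import org.apache.pulsar.client.api.Consumer;
    import org.apache.pulsar.client.api.Producer;
    import org.apache.pulsar.client.api.PulsarClient;
    import org.apache.pulsar.client.api.Schema;

    public class StandaloneSketch {
        public static void main(String[] args) throws Exception {
            // Same client code whether this URL points at standalone or a cluster.
            PulsarClient client = PulsarClient.builder()
                    .serviceUrl("pulsar://localhost:6650")
                    .build();

            // Subscribe first so the message below is retained for this subscription.
            Consumer<String> consumer = client.newConsumer(Schema.STRING)
                    .topic("demo-topic")          // made-up topic name
                    .subscriptionName("demo-sub") // made-up subscription name
                    .subscribe();

            Producer<String> producer = client.newProducer(Schema.STRING)
                    .topic("demo-topic")
                    .create();
            producer.send("hello from standalone");

            System.out.println(consumer.receive().getValue());

            producer.close();
            consumer.close();
            client.close();
        }
    }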
>> Rabbit can do everything Kafka does - and much more - in a more configurable manner.
Sure, if you're doing like tens of MB/s. RMQ is fast compared to Kafka if you're not adding durability, persistence, etc. Try to run gigabytes per second through it, though, or stretch across regions, or meet your RTO when the broker gets overloaded and crashes. Get your shovel ready! ;)
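For anyone wondering what "adding durability, persistence" means in RMQ terms, here's a rough sketch using the standard com.rabbitmq:amqp-client Java API (the queue name is made up): a durable queue, persistent messages, and publisher confirms, each of which trades throughput for safety.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;
    import java.nio.charset.StandardCharsets;

    public class DurablePublishSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");

            try (Connection conn = factory.newConnection();
                 Channel ch = conn.createChannel()) {
                // durable=true: the queue survives a broker restart
                ch.queueDeclare("orders", true, false, false, null); // made-up queue name

                // Publisher confirms: wait for the broker to ack the write
                ch.confirmSelect();

                // PERSISTENT_TEXT_PLAIN (deliveryMode=2): message is written to disk
                ch.basicPublish("", "orders",
                        MessageProperties.PERSISTENT_TEXT_PLAIN,
                        "order-123".getBytes(StandardCharsets.UTF_8));

                ch.waitForConfirmsOrDie(5_000);
            }
        }
    }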
Kafka itself is dumb but scalable and resilient; it's the client ecosystem that's massive compared to RabbitMQ. Try to count 10 stream processing, connectivity, ingestion, or log harvesting platforms that use RMQ as their backend, then name 10 languages that have supported libraries for RMQ, and then compare that to Kafka.
Maybe "it's enterprise" means that's what the enterprise standardized on. There are a couple of practical reasons why that's the case: a) it's more resilient and durable than traditional messaging platforms, and b) it's a platform of dumb pipes, so making it the central data bus managed by platform teams means they don't have to get into the details of which queues perform which functions, have which characteristics, etc. The client teams in the various business units can handle all of their "smarts" the way they want. It also covers log/telemetry ingestion, data platform integration, and interservice comms, which makes it pretty multi-functional. That's the primary reason Kafka has become such a pervasive and common platform; it's not because it's trendy. In fact, most operations teams would rather not have to operate a Kafka platform at all.