Tools for profiling, debugging, monitoring, thread analysis, and test coverage analysis can attach to the Java Virtual Machine (JVM) using the 'Tool Interface' (JVMTI).
If you've got a running Java process on your local machine right now, you can use 'jconsole' to see the stack traces of all threads, inspect various memory statistics, trigger an immediate garbage collection or heap dump, and so on. And of course, if the tool is an instrumenting profiler, it needs the power to modify the running code to insert its instrumentation. Obviously you need certain permissions on the host to do this, just like attaching gdb to a running process.
This capability is used not just for profiling, debugging, and instrumentation but also by Mockito to do its thing.
Java 21 introduced a warning [1] saying this will be disabled in a forthcoming version unless the process is started with '-XX:+EnableDynamicAgentLoading', whereas previously it was enabled by default and '-XX:+DisableAttachMechanism' was used to disable it.
The goal of doing this is "platform integrity": preventing the attachment of debugging tools is useful in applications like DRM.
A JVM agent is able to instrument and modify running JVM applications; things like debugging, hot patching, etc. rely on this. You used to be able to attach agents (debuggers, profilers) to a running JVM dynamically at runtime, but that was deemed a security issue, so the default is moving towards declaring specific agents at launch time through command-line flags.
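For a concrete sense of what such an agent looks like, here is a minimal sketch of a native JVMTI agent, i.e. the kind of shared library the attach mechanism can load dynamically. The capability choices and the printf are just for illustration:

```
// Minimal native JVMTI agent sketch (compile to a shared library).
// Agent_OnAttach is the entry point for dynamic attach; Agent_OnLoad is the
// launch-time equivalent used with -agentpath:/path/to/libagent.so.
#include <jvmti.h>
#include <cstdio>

extern "C" JNIEXPORT jint JNICALL
Agent_OnAttach(JavaVM* vm, char* options, void* /*reserved*/) {
    jvmtiEnv* jvmti = nullptr;
    if (vm->GetEnv(reinterpret_cast<void**>(&jvmti), JVMTI_VERSION_1_2) != JNI_OK) {
        return JNI_ERR;
    }

    // Request the capabilities an instrumenting profiler typically needs.
    jvmtiCapabilities caps = {};
    caps.can_retransform_classes = 1;     // rewrite bytecode of already-loaded classes
    caps.can_get_owned_monitor_info = 1;  // thread / lock analysis
    if (jvmti->AddCapabilities(&caps) != JVMTI_ERROR_NONE) {
        return JNI_ERR;
    }

    std::printf("agent attached, options=%s\n", options ? options : "(none)");
    return JNI_OK;
}
```

This is the dynamic-attach path that the new warning (and eventual opt-in flag) gates; an agent declared at launch time goes through Agent_OnLoad instead.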
Get the OneTab extension. It'll save and close all those tabs. That way you won't have Firefox crashing during startup once you exceed the number of tabs it can handle (a few thousand).
Tip: the crashing is caused by certain extensions, such as OneTab and All Tabs Helper, which for some reason seem to force all the tabs to load when restoring a session. Temporarily disable these extensions before restoring, then re-enable them afterwards.
Well-designed C APIs have a context/userdata parameter on their callbacks, which is registered and stored alongside the function pointer. Unfortunately, WNDPROC lacks this parameter.
GWLP_USERDATA should be the best option, though the fact that setting it and setting the WNDPROC are two separate API calls looks error-prone.
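For reference, a minimal sketch of the usual pattern, assuming the context pointer is handed to CreateWindowExW as its last (lpParam) argument; the Ctx struct and the click counting are made up for illustration:

```
#include <windows.h>

struct Ctx { int clicks = 0; };  // hypothetical per-window state

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_NCCREATE) {
        // The lpParam passed to CreateWindowExW arrives here via CREATESTRUCT.
        auto* cs = reinterpret_cast<CREATESTRUCTW*>(lp);
        SetWindowLongPtrW(hwnd, GWLP_USERDATA,
                          reinterpret_cast<LONG_PTR>(cs->lpCreateParams));
    }
    // May still be null for the few messages delivered before WM_NCCREATE.
    auto* ctx = reinterpret_cast<Ctx*>(GetWindowLongPtrW(hwnd, GWLP_USERDATA));
    if (ctx && msg == WM_LBUTTONDOWN) {
        ++ctx->clicks;
    }
    return DefWindowProcW(hwnd, msg, wp, lp);
}

// Usage (error handling omitted):
//   WNDCLASSW wc = {};
//   wc.lpfnWndProc   = WndProc;
//   wc.lpszClassName = L"DemoClass";
//   wc.hInstance     = GetModuleHandleW(nullptr);
//   RegisterClassW(&wc);
//   Ctx ctx;
//   CreateWindowExW(0, L"DemoClass", L"Demo", WS_OVERLAPPEDWINDOW,
//                   CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
//                   nullptr, nullptr, wc.hInstance, &ctx);
```

Stashing the pointer in WM_NCCREATE keeps the SetWindowLongPtr call right next to window creation, which mitigates the "two separate calls" footgun somewhat.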
The bigger problem is that I would have to automate this for every callback type in the Windows API, and there's no guarantee that all of them follow the same 2 or 3 patterns for passing the context pointer into the callback. This solution works great, even if it is a bit wasteful of the ctx ptr.
That's a mathematical expression, not a C++ expression. And floor here isn't the C++ floor function, it's just describing the usual integer division semantics. The challenge here is that you need 128-bit integers to avoid overflowing.
Ah, you're right. I saw that the expression in the comment and in the code was the same and assumed that the commented bit was valid C++ code. You got me to look again and it's obvious that that isn't the case. I had even gone looking through the codebase to see if std::floor was included, and still missed the incorrect `^`.
I guess in that case, as long as the 128-bit type supports constexpr basic math operations, that should suffice to replace the hardcoded constants with their source expressions.
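Something like the following sketch, assuming GCC/Clang's unsigned __int128 is available as the 128-bit type (the expression and divisor here are invented, since the real source expression isn't shown in this thread):

```
#include <cstdint>

// Hypothetical stand-in for the real source expression, e.g. floor(2^64 / d),
// whose intermediate value does not fit in 64 bits.
constexpr std::uint64_t floor_div_pow64(std::uint64_t d) {
    // Widen to 128 bits before dividing; integer division already floors
    // for non-negative operands, matching the floor() in the comment.
    return static_cast<std::uint64_t>(
        (static_cast<unsigned __int128>(1) << 64) / d);
}

// The hardcoded constant can now be replaced by (or checked against) the expression.
static_assert(floor_div_pow64(2) == (std::uint64_t{1} << 63),
              "2^64 / 2 == 2^63");
```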
The .NET ecosystem has been moving towards a higher number of dependencies since the introduction of .NET Core, though many of them are still maintained by Microsoft.
The "SDK project model" did a lot to reduce that back down. They did break the BCL up into a lot of smaller packages to make .NET 4.x maintenance/compatibility easier, and if you are still supporting .NET 4.x (and/or .NET Standard), for whatever reason, your dependency list (esp. transitive dependencies) is huge, but if you are targeting .NET 5+ only that list shrinks back down and the BCL doesn't show up in your dependency lists again.
Even some of the Microsoft.* namespaces have properly moved into the BCL SDKs and no longer show up in dependency lists, even though the Microsoft.* prefix originally meant non-BCL first-party code.
I think first-party Microsoft packages ought to be a separate category that is more like BCL in terms of risk. The main reason why they split them out is so that they can be versioned separately from .NET proper.
How long does mongodump take on that database? My experience was that incremental filesystem/block-device snapshots were the only realistic way of backing up (non-sharded) MongoDB. In our case that meant EBS snapshots, but I think you can achieve the same using LVM or filesystems like XFS and ZFS.
It takes ~21 hours to dump the entire db (~500 GB), but I'm limited by my internet speed (100 Mbps, seeing 50-100 Mbps during the dump). Interestingly, the throughput is faster than doing a db dump from Atlas, which used to max out around 30 Mbps.
I don't remember the numbers (90% is probably a bit exaggerated), but our savings from going from Atlas to MongoDB Community on EC2 several years ago were big.
In addition to direct costs, Atlas also had expensive limitations. For example, we often spin up clone databases from a snapshot; these have lower performance requirements and no durability requirements, so a smaller non-replicated server suffices, but Atlas required them to be sized like the replicated high-performance production cluster.
Was it? Assuming an M40 cluster consists of 3 m6g.xlarge machines, that's $0.46/hr on-demand compared to Atlas's $1.04/hr for the compute. Savings plans or reserved instances reduce that cost further.
Highly doubt that. MongoDB has 5,000 well-paid employees and is not a big loss-making enterprise. If most of the cost were passed through to AWS, they'd not be able to do that. Their quarterly revenue is $500M+, but they also spend $200M on sales and marketing and $180M on R&D. (All based on their filings.)
Yes, and my point is that this customer switching to running their own MongoDB instances on EC2, like Atlas does, would reduce the bill by less than 50%, because the rates MongoDB is charging mean that their cut is less than what AWS is getting from this customer.