Linux fscrypt[1] [which Android uses for user data] doesn't work like that; you don't need to mount/unmount to decrypt. If the key is evicted from the Linux keyring and the page cache is cleared, the user data will not be accessible while the screen is locked, even by root. The kernel needs a key in its keyring to decrypt the pages associated with just the encrypted files. It's pretty neat!
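A rough sketch of what this looks like with the userspace `fscrypt` tool (google/fscrypt), assuming an ext4/f2fs filesystem with the encrypt feature enabled; the directory path is hypothetical:

```shell
# Check encryption state while the key is loaded.
fscrypt status /home/alice/private    # reports the directory as unlocked

# Evict the key from the kernel keyring; the kernel also drops the
# decrypted pages for these files from the page cache.
# (Fails if processes still hold the files open.)
fscrypt lock /home/alice/private

# With the key gone, filenames appear as ciphertext and opening a
# file fails with "Required key not available" -- even as root.
ls /home/alice/private
```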
Correct, but I'm guessing the applications probably open files at a higher abstraction than a file handle [the curse/gift of Java], so it wouldn't be hard to decouple the file handle, and allow a trigger to close the file handle and sync on logout.
From a purely kernel perspective: it has been some time since I last looked at the kernel fs/dentry code, but from what I remember, an open file handle holds refs on the dentries that make up its path [all the way up to the mount root]. Even that wouldn't prevent other dentries from being cleared, though: only the open file would have unencrypted pages in the page cache. I would highly recommend reading the Linux fscrypt code if you would like more details: it's very well structured and quite easy to get into!
Of course, the foolproof way would be to check lsof and nuke all processes that still have file handles open before logging out, but that's probably too much heresy :)
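For the heretically inclined, a minimal sketch of such a logout hook; the directory and the choice of signal are assumptions:

```shell
# Hypothetical pre-logout hook: find processes still holding files
# open under the user's encrypted home, then terminate them.
DIR=/home/alice/private

# lsof +D recurses into the directory; -t prints bare PIDs.
# xargs -r skips the kill entirely if nothing is found.
lsof -t +D "$DIR" | xargs -r kill

# Alternatively, fuser can find and signal in one step when the
# directory is a mount point:
#   fuser -k -m "$DIR"
```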
I think the GP is pointing out that people tend to overestimate how powerful these techniques are and that using the term AI might have something to do with that.
Not that it isn't impressive: it's just that we are pretty far off from anything resembling what "Artificial Intelligence" may come to embody (ignoring, for the moment, all fuzzy definitions of intelligence).
It's like building a paper airplane and worrying about the aging effects of space travel at light speed: maybe a good thought experiment but not that big a concern right now.
The "bicycle of the mind" platform vs aggregator argument is a bit contrived: the author is conflating features built on top of platforms with the platform itself. Most of the examples listed (Google Photos auto-edit, Maps suggestions, Gmail compose) are typically features built on top of platforms that can be (and are) used without these features.
Instead, I would think of these tools as an electric bicycle of the mind (to borrow from the article): the purpose of the motor is to help you on steeper hills. It is inherently still a bicycle.
Maybe stop trying to shoe-horn multi-decade companies with thousands of employees into ill-fitting philosophical silos? I'm all for arguments for/against companies but retro-fitting them into narratives and then proclaiming A is better than B is disingenuous.
Apple also consistently makes the trade-off of privacy over user experience; to offer features similar to Google Photos, they train neural networks on-device.
Given how seriously they also take the biometric data from Touch ID and Face ID, I'm much more comfortable around a HomePod than an Alexa.
In the context of Docker Swarm and Kubernetes, autoscaling refers to container-level scaling, i.e. given a set of nodes, an autoscaling function manages the number of containers currently running on those nodes.
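In Kubernetes terms, that container-level scaling is the Horizontal Pod Autoscaler; a sketch with a hypothetical deployment name and thresholds:

```shell
# Scale the "web" Deployment's replica count between 2 and 10
# based on CPU utilization -- note this only adds/removes pods
# on the nodes you already have, never nodes themselves.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=60
```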
For instance/node-level autoscaling (which is closer to what you need), I would recommend using the autoscaling features provided by AWS/Google Cloud.
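On GCP, for example, node-level autoscaling is a managed instance group built from an instance template; all names and thresholds below are hypothetical:

```shell
# Create a group of worker instances from an existing template.
gcloud compute instance-groups managed create swarm-workers \
    --template=swarm-worker-template --size=3 --zone=us-central1-a

# Let GCP add/remove instances based on aggregate CPU utilization.
gcloud compute instance-groups managed set-autoscaling swarm-workers \
    --zone=us-central1-a \
    --min-num-replicas=3 --max-num-replicas=10 \
    --target-cpu-utilization=0.6
```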
> I would recommend using the autoscaling features provided by AWS/Google Cloud
It would have to be integrated with Kubernetes though -- when we push a new Docker image, the containers would need to be updated on any new machines created. We'll look into GCP's autoscale solution.
Node-level autoscaling doesn't need to be integrated with Kubernetes; all it needs to do is create a new instance and register it as a node through the normal channels.
Even if you don't need autoscaling, I'd suggest still using autoscaling groups and setting it to a fixed number of instances, so that instances will automatically get restarted if they go down.
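On AWS that's just an autoscaling group with min/max/desired pinned to the same value; group, template, and subnet names here are placeholders:

```shell
# A fixed-size group: no scaling policies, but EC2 still replaces
# any instance that fails its health checks.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name swarm-workers \
    --launch-template LaunchTemplateName=swarm-worker \
    --min-size 3 --max-size 3 --desired-capacity 3 \
    --vpc-zone-identifier subnet-0123abcd
```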
Yeah, any new machine instance has to join the Swarm (or the cluster, in Kubernetes-speak). But that can be decoupled from Kubernetes or Docker swarm mode.
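For swarm mode, joining is a one-liner that fits naturally in an instance's startup script; token and manager address are placeholders:

```shell
# On an existing manager, once: print the worker join command + token.
docker swarm join-token worker

# In the new instance's startup script (cloud-init / user data):
docker swarm join --token <worker-token> <manager-ip>:2377
```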
As for image management, it would depend on how you would like to propagate new images. With a private docker registry, you could potentially point each new instance to the registry and take care of propagating new images. I favor this approach since it keeps everything separate and easier to manage.
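With that setup, propagation can be as simple as the following sketch (registry host, image tag, and service name are all hypothetical):

```shell
# Each node authenticates to the private registry once, so it can
# pull images on demand.
docker login registry.example.com

# A rolling service update then pulls the new image onto whichever
# nodes end up running the tasks -- including freshly scaled ones.
docker service update --image registry.example.com/app:v2 app
```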
[1] https://www.kernel.org/doc/html/v4.18/filesystems/fscrypt.ht...