Google is much better at this. They always open the map right where I am on my PC (which does not have GPS or WiFi, only internet, and I don't allow location access in the browser).
On the Google search results page at the bottom there's a city name + "From your IP address" link. Clicking it shows a map with a circled region. It seems to match with what Google maps opens by default.
It's a little less accurate than Cloudflare in my case.
Thanks! I didn't know that. I never use Google search directly anymore, always through SearXNG. So I hadn't noticed. It's indeed about 1km away from my actual location. Not bad. I'm about 500m outside the circle.
You need external monitoring of certificate validity. Your ACME client might not be sending failure notifications properly (like happened to Bazel here). The client could also think everything is OK because it acquired a new cert, meanwhile the certificate isn't installed properly (e.g., not reloading a service so it keeps using the old cert).
I have a simple Python script that runs every day and checks the certificates of multiple sites.
One time this script signaled that a cert was close to expiring even though I saw a newer cert in my browser. It turned out that I had accidentally launched another reverse proxy instance which was stuck on the old cert. Requests were randomly passed to either instance. The script helped me correct this mistake before it caused issues.
100%, I've run into this too. I wrote some minimal scripts in Bash, Python, Ruby, Node.js (JavaScript), Go, and PowerShell to send a request and alert if the expiration is less than 14 days from now: https://heyoncall.com/blog/barebone-scripts-to-check-ssl-cer... because anyone who's operating a TLS-secured website (which is... basically anyone with a website) should have at least that level of automated sanity check. We're talking about ~10 lines of Python!
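For reference, a minimal sketch of what such a check can look like (this is not the linked scripts; the hostname list and 14-day threshold are placeholders):

```python
#!/usr/bin/env python3
"""Minimal sketch of an external cert-expiry check (hosts/threshold are examples)."""
import socket
import ssl
import sys
import time

HOSTS = ["example.com"]  # placeholder: list the sites you actually serve
WARN_DAYS = 14           # alert if the cert expires within this many days

def days_left(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

failing = False
for host in HOSTS:
    try:
        remaining = days_left(host)
    except (ssl.SSLError, OSError) as exc:
        # An already-expired or otherwise invalid cert fails the handshake itself.
        print(f"ERROR: {host}: {exc}")
        failing = True
        continue
    if remaining < WARN_DAYS:
        print(f"WARNING: {host} cert expires in {remaining:.1f} days")
        failing = True
sys.exit(1 if failing else 0)
```

Run it from a machine that is not the one doing the renewals, so the check stays independent of the ACME client.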
There is a Prometheus plugin called ssl_exporter that will provide the ability for Grafana to display a dashboard of all of your certs and their expirations. But, the trick is that you need to know where all your certs are located. We were using Venafi to do auto discovery but a simple script to basically nmap your network provides the same functionality.
What you're monitoring is "Did my system request a renewed cert?" but what most people's customers care about is instead, "Did our HTTPS endpoint use an in-date certificate?"
For example, say you've got an internal test endpoint, two US endpoints, and a rest-of-world endpoint, physically located in four places. Maybe your renewal process works with a month left - but the code to replace working certificates in a running instance is bugged. So, maybe Monday that renewal happens, your "CT log monitor" approach is green, but nobody gets new certs.
On Wednesday engineers ship a new test release to the test endpoint, restarting and thus grabbing the renewed cert, for them everything seems great. Then on Friday afternoon a weird glitch happens for some US customers, restarting both US servers seems to fix the glitch and now US customers also see a renewed cert. But a month later the Asian customers complain everything is broken - because their endpoint is still using the old certificate.
> Did our HTTPS endpoint use an in-date certificate?
For any non-trivial organization, you want to know when client certificates expire too.
In my experience, the easiest way is to export anything that remotely looks like a certificate to the monitoring system, and let people exclude the false positives. Of course, that requires you to have a monitoring system in the first place. That is no longer a given.
So, I've worked for both startups and large entities, including both an international corporation and a major university, and in all that time I've worked with exactly one system that used client TLS certificates. They mostly weren't from the Web PKI (and so none of these technologies are relevant, Let's Encrypt for example has announced and maybe even implemented choices to explicitly not issue client certs) and they were handled by a handful of people who I'd say were... not experts.
It's true that you could use client certs with say, Entra ID, and one day I will work somewhere that does that. Or maybe I won't, I'm an old man and "We should use client certs" is an ambition I've heard from management several times but never seen enacted, so the renaming of Azure AD to Entra ID doesn't seem likely to change that.
Once you're not using the Web PKI, cert expiry lifetimes are much more purpose-specific. It might well make sense for your Entra ID apps to have 10-year certs because, eh, if you need to kill a cert you can explicitly do that; it's not a vast global system where only expiry is realistically useful. If you're minting your own ten-year certs, expiry alerting is now a very small part of your risk profile.
Client certificates aren't as esoteric as you think. They're not always used for web authentication, but many enterprises use them for WiFi/LAN authentication (EAP-TLS) and securing confidential APIs. Shops that run Kubernetes use mTLS for securing pod to pod traffic, etc. I've also seen them used for VPN authentication.
Huh. I have worked with Kubernetes so I guess it's possible that's a second place with client certs and I never noticed.
The big employers didn't use EAP-TLS with client certs. The University of course has Eduroam (for WiFi), and I guess in principle you could use client certs with Eduroam but that sounds like extra work with few benefits and I've never seen it from either the implementation side or the user side even though I've worked on or observed numerous Eduroam installs.
I checked install advice for my language (it might differ in other languages) and there's no sign that Eduroam thinks client certificates would be a good idea. Server certs are necessary to make this system work, and there's plenty of guidance on how to best obtain and renew these certificates e.g. does the Web PKI make sense for Eduroam or should you just busk it? But nothing about client certificates that I could see.
I can't comment on Eduroam as I have no experience working in the Edu space, but in general, EAP-TLS is considered to be the gold standard for WiFi/LAN authentication, as alternatives like EAP-TTLS and PEAP-MSCHAPv2 are all flawed in one way or another and rely on username/password auth, which is a weaker form of authentication than relying on asymmetric cryptography (mTLS). Passwords can be shared and phished; if you're not properly enforcing server cert validation, you will be susceptible to evil twin attacks; and so on.
Of course, implementing EAP-TLS usually requires a robust way for distributing client certificates to the clients. If all your devices are managed, this is often done using the SCEP protocol. The CA can be either AD CS, your NAC solution, or a cloud PKI solution like SecureW2.
Yeah, I don't think EAP-TLS with client certs would work out well for Eduroam applications. You have a very large number of end users, they're only barely under your authority (students, not staff) and they have a wide variety of devices, also not under your control.
But even in Enterprise corporate settings I never saw this, though I'm sure some people do it. It sounds like potentially a good idea, and of course it can have excellent security properties; however, one of the major downsides IMHO is that people wind up with the weakest link being a poorly secured SCEP endpoint. Bad guys could never hope to break the encryption needed to forge credentials, but they could trivially tailgate a call-center worker and get real credentials which work fine, so, who cares.
Maybe that's actually enough. Threat models where adversaries are willing to physically travel to your location (or activate a local asset) might be out of your league anyway. But it feels to me as if that's the wrong way to look at it.
I am airgapped and the certs are usually wildcard with multiple SANs. You would think that the SANs alone would tell you which host has a cert. But, it can be difficult to find all the hosts or even internal hosts that use TLS.
Kind of cool to have an uptime monitoring tool that also has an option like that, two birds with one stone and all that. Not affiliated with them; it's a FOSS project.
The scalable way (up to thousands of certificates) is https://sslboard.com. Give it one apex domain, it will find all your in-use certificates, then set alerts (email or webhook). Fully external monitoring and inventory.
Looks like it relies on certificate transparency logs. That means it won't be able to monitor endpoints using wildcard certs. The best it could do would be to alert when a wildcard cert is expiring without a renewed cert having been issued.
Is that enough though? You may have wildcards on domains that are not even on a public DNS and you may forget to replace it "somewhere". For that reason it is better to either dump list of domains from your local DNS or have e.g. zabbix or another agent on every host machine checking that file for you.
That's exactly my point: while this service sounds quite useful for many common cases, it's going to fail in cases where there's not a 1-to-1 certificate-to-server mapping. Even outside of wildcards, you have to account for cases where the cert might be installed on any number of load balancers.
If you're using a cert on multiple IPs, or IPv4+v6, SSLBoard will monitor all IPs. It's not foolproof, but it covers most common practices. btw wildcard certs don't have a good reputation (blast radius)...
I'd say that load balancers (one-address-to-N-servers) count as a common practice, but I otherwise agree in that regard.
Regarding wildcard certs, eh. I wouldn't say they have a bad reputation. Sure, greater blast radius. But sometimes it can certainly simplify things to use one. Your ACME client configuration is easier and your TLS terminator configuration often becomes easier when the terminator would otherwise need to switch based on SNI.
One-address-to-N-servers is perfect if the N servers don't all terminate TLS themselves. If they do, it becomes impossible to reliably test which certificates are actually being served. I've seen this fail before (TLS tests flip-flop between good and bad between checks).
As for wildcard certs, I agree there are use cases where we really need them like dynamic subdomains {customer}.status.com
Can you share how they make ACME client configuration easier?
> Can you share how they make ACME client configuration easier?
It's not a profound difference, but you don't need to add each name to your config. Depending on the team's tooling and processes, that may be inconsequential. But in a setting where config management isn't handled super well, where the TLS terminator is a resource shared by multiple, distinct teams, this is a simplification that can make a difference at the margin.
Think less Cloudflare-scale, and more SMB scale (especially in a Windows shop or recovering Windows shop with a different kind of technical culture than what we might all be implicitly imagining).
I'm working on something that could help: linking SSLBoard with software that makes issuance and distribution of certs easier, i.e. a proper CLM. It's not cloud-based, for security reasons. In that context, we know your wildcard certs because we issue them, and we could know where they are if we distribute them...
Please get in touch with me (chris@sslboard.com) if you're interested in early access and having a word in the development of the product!
I didn't realize you were behind SSLBoard. I think you should've disclosed that involvement at the beginning. I see now that it's in your bio, but disclosure is still on you.
Indeed, SSLBoard is scanning CT logs. You can add/import host names though, to allow monitoring of wildcard certs. Same if you're using ports that are not 443, you have to add these to the list of hostnames that are checked.
It's not as convenient, but it's the best SSLBoard can do...
You can use systemd-run with --shell (or a subset of the options enabled by --shell) and -p to specify service properties, in order to run commands interactively in an environment similar to your service's.
This can help troubleshoot issues and make experimenting with systemd options faster.
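For example, something along these lines (the properties and the `myapp` name are illustrative; substitute the ones from your own unit file):

```sh
# Interactive shell under roughly the same sandboxing a hardened unit would get
systemd-run --shell \
  -p ProtectSystem=strict \
  -p PrivateTmp=yes \
  -p NoNewPrivileges=yes

# Or run a single command in a unit-like environment
systemd-run --pty --wait --collect \
  -p DynamicUser=yes \
  -p StateDirectory=myapp \
  /usr/bin/env
```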
I think there's been some talk about adding a built-in way for systemd-run to copy settings out of a .service file, but it doesn't exist yet.
I've written Perl/Python scripts to do this for me. They're not really aimed at working with arbitrary services, but it should be possible to adapt to different scenarios.
There are some gotchas I ran into. For example, with RuntimeDirectory: systemd deletes the directory once the process exits, even if there's still another process running with the same RuntimeDirectory value set.
It's also really useful for doing parallel builds of modules that may actually consume all available memory when you can't force the build system to use fewer cores than you have available.
Both in terms of artificially reducing the number of CPUs you expose, but also in terms of enforcing a memory limit that will kill all processes in the build before the broader kernel OOM killer will act, in case you screw up the number of CPUs.
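A sketch of what that can look like (the CPU set, memory cap, and -j value are placeholders; note that under `--user` some cgroup controllers may not be delegated by default):

```sh
# Run the build in a transient scope: expose only CPUs 0-3 and kill the whole
# group if it exceeds 8G, before the kernel-wide OOM killer has to step in.
systemd-run --scope \
  -p AllowedCPUs=0-3 \
  -p MemoryMax=8G \
  -p MemorySwapMax=0 \
  make -j4
```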
woah that's actually awesome.
I feel like adding storage usage limits could be easy as well.
But the one thing I always wonder about is (virtualization?), in the sense of something like Docker just for containerizing, or some way of running things in some sort of sandbox without much performance overhead. I'm kinda interested in knowing what the best way of doing that might be (is Podman the right way, or some other way like Bubblewrap?)
Edit: just discovered in the comment below the (parent's parent's?) comment that there is systemd isolation too. That sounds very interesting, and it's the first time I've personally heard of it, hmm.
You can achieve similar results with podman and bubblewrap, but podman handles things like networking, resource and image management that bubblewrap doesn't handle by itself.
Bubblewrap really is more for sandboxing "transient" containers and being able to separate specific things from the host (such as libraries), with other applications handling the image management, which makes sense because its primary users are Flatpak and Steam. Once the application inside the container exits, the sandbox is destroyed; its job is done.
Podman is a Docker clone; it's for development or persistent containers. It will monitor containers, restart them, pull image updates, set up networks between them, etc.
They both use namespacing and cgroups under the hood, but for different results and purposes.
You're right that systemd has sandboxing too, and it uses the same kernel features. Podman can also export its services to be managed by systemd.
There's literally so much choice when it comes to making containers on Linux.
> but podman handles things like networking, resource and image management
Btw, you can do all of this with systemd too
> the sandbox is destroyed; its job is done.
I think most container systems have an ephemeral option. If you're looking at systemd then look at the man pages for either systemd-nspawn or systemd-vmspawn and look under Image Options. More specifically `-x, --ephemeral`. It's a pretty handy option.
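For instance, a throwaway container from an existing directory tree might look like this (the path is just an example):

```sh
# All changes are discarded when the container exits
sudo systemd-nspawn --ephemeral -D /var/lib/machines/debian /bin/bash
```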
> Podman can also export its services to be managed by systemd.
But in that case, why not just use systemd? ;)
> There's literally so much choice when it comes to making containers on Linux.
Despite my joke above, I actually love this. Having options is great and I think it ends up pushing all of them to be better. The competition is great. I'm hyping systemd up a bit, but honestly there are gives and takes with each of the different methods. There's healthy competition right now, but I do think systemd deserves a bit more love than it currently gets.
Yeah, that is a big drawback. But as mentioned elsewhere and by others, there is `importctl`. So you can ship these images as well. Meaning only one person needs to make that image for others to be able to get the same convenience as pulling a docker image.
I'm unsure if someone has made a tool to convert docker images to systemd. If not, that'd be a pretty handy one.
That podman also offers a (nicer?) transition to Docker is a plus as well.
There are a lot of PaaS offerings nowadays that use Docker under the hood. I would love to see a future where a PaaS actually manages things using systemd.
I think this might be really nice, giving an almost-standard way of installing software.
I really want to try to create something like Dokku, or some GUI for making systemd management easier, but I'll look at the current alternatives first. Thanks for sharing it!
I'm fairly confident that systemd, docker, podman, bubblewrap, unshare, and probably other tools are all wrapping the same kernel features, so I'd expect a certain degree of convergence in what they provide.
I wrote my comment before I saw yours, but you'll probably be interested in it[0].
The best thing about systemd is also the worst thing: it's monolithic. You can containerize applications lightly all the way up to having a full-fledged VM. You can run as user or root. You can limit system access like CPU, RAM, network, and even the physical hardware. You even have homed, which gives you more control over your user environments. There's systemd mounts[1], boot, machines, timers, networks, and more. It's overwhelming.
I think two commands everyone should know if dealing with systemd services is:
- `systemctl edit foo.service` to create an override file which sits on top of the existing service file (so your changes don't disappear when you upgrade)
- `systemd-analyze security foo.service` which will give you a short description of the security options and a score specifying your exposure level.
These really helped me go down the rabbit hole and I think most people should have some basic idea of how to restrict their services. A little goes a long way, so even if you're just adding `PrivateTmp=yes` to a service, you're improving it.
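As a sketch, the override that `systemctl edit foo.service` opens might end up containing something like this (the options are illustrative; run `systemd-analyze security foo.service` before and after to see the effect):

```ini
# /etc/systemd/system/foo.service.d/override.conf
[Service]
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
```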
I've replaced all my cron jobs with systemd timers now, and while it is a bit more work up front (just copy-paste templates...), there are huge benefits to be had: way more flexibility in scheduling, and you're not tripped up by restrictions such as your computer being off[3].
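A minimal cron-replacement pair might look like this (unit names and the script path are placeholders; `Persistent=true` is what covers the machine-was-off case):

```ini
# backup.service -- the job itself
[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# backup.timer -- the schedule (enable with: systemctl enable --now backup.timer)
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```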
[1] I've found mounts really helpful; they can really speed up boot times. You can make your drives mount in the background and after any service you want. You can also set timeouts so that they will power down and automount as needed. That can save you a good amount on electricity if you've got a storage service. This might also be a good time to remind people that you likely want to add `noatime` to your mount options (even if you use fstab)[2].
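For the curious, an illustrative fstab line for that kind of on-demand mount (the UUID and mount point are placeholders):

```
# /etc/fstab: mount on first access, unmount after 10 minutes idle, skip atime writes
UUID=xxxx-xxxx  /srv/storage  ext4  noatime,x-systemd.automount,x-systemd.idle-timeout=10min  0  2
```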
Another issue I just ran into is that a colon-separated value for ExecSearchPath doesn't work with systemd-run's -p. You have to specify each path as a separate -p argument.
There are some minor annoyances like that, but it's not too hard to work around.
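Concretely, the workaround looks like this (the paths are examples; per the comment above, each one needs its own -p):

```sh
systemd-run --shell \
  -p ExecSearchPath=/opt/tool/bin \
  -p ExecSearchPath=/usr/local/bin
```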
A code signing certificate does not cost $500 a year. The OP links to an offering by Certum which is just $25 a year plus the cost for a reusable smart card.
Personally, I recently acquired a certificate from HARICA which costs $55 a year if you only buy one year at a time.
CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.
I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.
With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.
It's actually far worse for smaller sites and organizations than large ones. Entire pricey platforms exist around managing certificates and renewals, and large companies can afford those or develop their own automated solutions.
None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.
Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.
And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.
Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.
I think most orgs can get away with free ACME clients and free/cheap monitoring options.
This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.
Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.
> Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
Representatives are not voting against the wishes/instructions of their employer.
I mean, to give you an example of how far we are from this: IIS does not have built-in ACME support, and in the enterprise world it is basically "most web servers". Sure, you can add some third-party thing off the Internet to do it, but... how many banks will trust that?
Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.
I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!
Let's Encrypt lists 10 ACME clients for Windows / IIS.
If an organisation ignores all those options, then I suppose they should keep doing it manually. But at the end of the day, that is a choice.
Maybe they'll reconsider now that the lifetime is going down or implement their own client if they're that scared of third party code.
Yeah, this will inconvenience some of the CA/B participants' customers. They knew that. It'll also make them and everyone else more secure. And that's what won out.
The idea that this change got voted in due to incompetence, malice, or lack of oversight from the companies represented on the CA/B forum is ridiculous to me.
> Let's Encrypt lists 10 ACME clients for Windows / IIS.
How many of those are first-party/vetted by Microsoft? I'm not sure you understand how enterprises or secure environments work, we can't just download whatever app someone found on the Internet that solves the issue.
No idea how many are first-party or vetted by Microsoft. Probably none of them. But I really, really doubt you can only run software that ticks one of those two boxes.
Certify The Web has a 'Microsoft Partner' badge. If that's something your org values, then they seem worth looking into for IIS.
I can find documentation online from Microsoft where they use YARP w/ LettuceEncrypt, Caddy, and cert-manager. Clearly Microsoft is not afraid to tell customers about how to use third party solutions.
Yes, these are not fully endorsed by Microsoft, so it's much harder to get approval for. If an organisation really makes it impossible, then they deserve the consequences of that. They're going to have problems with 397 day certificates as well. That shouldn't hold the rest of the industry back. We'd still be on 5 year certs by that logic.
Stealing a private key or getting a CA to misissue a certificate is hard. Then actually making use of this in a MITM attack is also difficult.
Still, oppressive states or hacked ISPs can perform these attacks on small scales (e.g. individual orgs/households) and go undetected.
For a technology the whole world depends on for secure communication, we shouldn't wait until we detect instances of this happening. Taking action to make these attacks harder, more expensive, and shorter lasting is being forward thinking.
Certificate transparency and Multi-Perspective Issuance Corroboration are examples of innovations without bothering people.
Problem is, the benefits of these improvements are limited if attackers can keep using the stolen keys or misissued certificates for 5 years (plus potentially whatever the DCV reuse limit is).
Next time a DigiNotar, Debian weak keys, or Heartbleed-like event happens, we'll be glad that these certs exit the ecosystem sooner rather than later.
I'm sure you have legit reasons to feel strongly about the topic and also that you have substantive points to make, but if you want to make them on HN, please make them thoughtfully. Your argument will be more convincing then, too, so it's in your interests to do so.
The whole industry has been moving in this direction for the last decade
So there is nothing much to say
Except that if you waited until the last moment, well, you will have to be in a hurry. (Non)actions have consequences :)
I'm glad about this decision because it'll hammer down a bit on those resisting, those who still have a human perform the yearly renewal. Let's see how stupid it can get.
Can you point to a specific security problem this change is actually solving? For example, can we attribute any major security compromises in the last 5 years to TLS certificate lifetime?
Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?
> CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.
If a CA or subscriber improves their security but had an undetected incident in the past, a hacker today has a 397 day cert and can reuse the domain control validation in the next 397 days, meaning they can MITM traffic for effectively 794 days.
CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still have valid certificates today and can request a new certificate without any domain control validation being performed in over a year.
New security standards should come into effect much faster. For fixes against attacks we know about today and new ones that are discovered and mitigated in the future.
Sure, but it's even better if everyone else does too, including attackers that mislead CAs into misissuing a cert.
CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good. It's the same with this change, and you have plenty of time to prepare for it.
> including attackers that mislead CAs into misissuing a cert.
I thought we had CT for this.
> CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good.
Fair.
> It's the same with this change, and you have plenty of time to prepare for it.
Not so sure on this one, I think it's basically a result of a security "purity spiral". Yes, it will achieve better certificate hygiene, but it will also create a lot of security busywork that could be better spent in other parts of the ecosystem that have much worse problems. The decision to make something opt-in mandatory forcibly allocates other people's labour.
CT definitely helps, but not everyone monitors it. This is an area where I still need to improve. But even if you detect a misissued cert, it cannot reliably be revoked with OCSP/CRL.
--
The maximum cert lifetime will gradually go down. The CA/B forum could adjust the timeline if big challenges are uncovered.
I doubt they expect this to be necessary. I suspect that companies will discover that automation is already possible for their systems and that new solutions will be developed for most remaining gaps, in part because of this announced timeline.
This will save people time in the long run. It is forced upon you, and that's frustrating, but you do have nearly a year before the first change. It's not going down to 47 days in one go.
I'm not saying that no one will renew certificates manually every month. I do think it'll be rare, and even more rare for there to be a technical reason for it.
"The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."
I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate.
> I'm not even sure what "outdated certificate data" could be...
Agree.
> According to the article:
Thanks, I did read that, it's not quite what I meant though. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and encourage the uptake of password managers and passkeys.
How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):
a) What are the benefits - not mom & apple pie and the virtues of purity but as brass tacks - e.g: how many account compromises do you believe would be prevented by this change and what is the annual cost of those? How is that trending?
b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?
I think I would have a harder time trying to justify the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic, but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.
The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal, for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud" because those don't make them any money.
"outdated certificate data" would be domains you no longer control. (Example would be a customer no longer points a DNS record at some service provider or domains that have changed ownership).
In the case of OV/EV certificates, it could also include the organisation's legal name, country/locality, registration number, etc.
Forcing people to change passwords increases the likelihood that they pick simpler, algorithmic password so they can remember them more easily, reducing security. That's not an issue with certificates/private keys.
Shorter lifetimes on certs is a net benefit. 47 days seems like a reasonable balance between not having bad certs stick around for too long and having enough time to fix issues when you detect that automatic renewal fails.
The fact that it encourages people to prioritise implementing automated renewals is also a good thing, but I understand that it's frustrating for those with bad software/hardware vendors.
> They didn't do this because they're incompetent but because they think it'll improve security.
No, they did it because it reduces their legal exposure. Nothing more, nothing less.
The goal is to reduce the rotation time low enough that certificates will rotate before legal procedures to stop them from rotating can kick in.
Apple introduced this proposal. Why would they care about a CA's legal exposure?
Lowering the lifetime of certs does mean that orgs will be better prepared to replace bad certs when they occur. That's a good thing.
More organisations will now take the time to configure ACME clients instead of trying to convince CAs that they're too special to have their certs revoked, or even starting embarrassing court cases, which has only happened once as far as I know.
Theories that involve CAs, Google, Microsoft, Apple, and Mozilla having ulterior motives and not considering potential downsides of this change are silly.
NGINX detects attempts to use http for server blocks configured to handle https traffic and returns an unencrypted http error: "400 The plain HTTP request was sent to HTTPS port".
Doing anything other than disconnecting or returning an error seems like a bad idea though.
>By tracking the latest stable and selectively integrating changes we've been able to build libjpegturbo easily on a handful of unsupported architecture/compiler combos of the main.
>
>We do not hate Cmake or think it's worse, just autoconf is mature and capable and 99% of the time works perfectly fine. We will maintain it as needed.
/shrug. Doesn't look like a hard fork and it's good for the folks that need it.
I don't know what issues there are with CMake and older compilers, but I trust that they are real enough to warrant this effort.