The tech centralization under the US government does mean there's a vulnerability on the browser side, but I wouldn't speculate about how long that would last if there's a big problem.
Do you depend on a DNS root server to map your website name to your IP address? That's a third party.
There are ways to remove that dependency, but it's going to involve a decentralized DNS replacement like Namecoin or Handshake, many of which include their own built-in alternatives to the CA system too so if "no third parties" is something you truly care about you can probably kill two birds with one stone here.
There are dozens of us, I guess, that care about this kind of thing. I have never really understood the obsession with HTTPS for static content, like a blog post, where I don't care if anyone can see what I'm reading. HTTPS should be for things that matter; everything else can, and I think should, use HTTP when encryption isn't necessary.
Depending on yet another third party to provide what is IMHO a luxury should not be required, and I have been continually confused as to why it is being forced down everyone's throat.
It's static while you control it. As soon as I MITM your content, it will look to your users like you updated your site with a crypto miner and a credit card form. You can publish your site with a self-signed key if you'd like and only depend on your ISP/web host provider, DNS provider, domain registrar, and the makers of your host OS and web server and a few dozen other things.
There are good arguments for it, but it's also not a coincidence that they happen to align with Google's business objectives. E.g. it's hard to issue a publicly trusted TLS cert without Google learning about it via the Certificate Transparency logs.
Google also knows about every domain name that gets renewed or registered... How does knowing a website has tls help in any meaningful way that would detract from society as a whole?
If that were the universal state, then it would be easy to tell when someone was visiting a site that mattered, and you could probably infer a lot about it by looking at the cleartext of the non-HTTPS side they were viewing right before they went to it.
You can already see what site someone visits with HTTPS. It's in the Client Hello (the SNI extension), and that's important for things like L4 load balancing (e.g. HAProxy can look at the hostname to choose which backend to forward the TCP packets for that connection to, without terminating TLS). It's also important for network operators (e.g. you at home) to be able to filter unwanted traffic (e.g. Google's).
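To make that concrete, here is a rough sketch (plain Python, no third-party libraries, field offsets per the standard ClientHello layout) of pulling the SNI hostname out of the first bytes of a TLS connection; this is effectively what an L4 balancer or any on-path box does, no keys required. It assumes the whole ClientHello arrives in a single record (the common case) and skips the bounds checks a real implementation would need.

```python
# Sketch: extract the SNI hostname from the raw bytes of a TLS ClientHello.
# Anyone who can see the first packet of an HTTPS connection (an L4 load
# balancer, a wifi AP, an ISP middlebox) can do this without any keys.

def sni_from_client_hello(data: bytes) -> str | None:
    # TLS record header: type (0x16 = handshake), version, 2-byte length
    if len(data) < 6 or data[0] != 0x16:
        return None
    pos = 5
    # Handshake header: type (0x01 = ClientHello), 3-byte length
    if data[pos] != 0x01:
        return None
    pos += 4                       # handshake type + length
    pos += 2 + 32                  # client_version + random
    pos += 1 + data[pos]           # session_id (1-byte length + data)
    cs_len = int.from_bytes(data[pos:pos + 2], "big")
    pos += 2 + cs_len              # cipher_suites
    pos += 1 + data[pos]           # compression_methods
    ext_total = int.from_bytes(data[pos:pos + 2], "big")
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:          # each extension: 2-byte type, 2-byte length
        ext_type = int.from_bytes(data[pos:pos + 2], "big")
        ext_len = int.from_bytes(data[pos + 2:pos + 4], "big")
        if ext_type == 0:          # server_name extension
            body = data[pos + 4:pos + 4 + ext_len]
            # server_name_list: 2-byte list length, then
            # name_type (0 = host_name), 2-byte name length, hostname bytes
            name_len = int.from_bytes(body[3:5], "big")
            return body[5:5 + name_len].decode("ascii", "replace")
        pos += 4 + ext_len
    return None

# Example: feed this the TLS payload of the first client packet captured by
# a proxy or packet capture tool, and it prints e.g. "plannedparenthood.org".
```

Encrypted Client Hello (discussed below) exists precisely to close this gap.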
I don't think seeing the site is too important, although there are TLS extensions to encrypt that, too[0]. In practice, a huge chunk of sites still have unique IPs, so seeing that someone is connecting to 1.2.3.4 gives you a pretty good idea of what site they're visiting. That's even easier if they're using plaintext DNS (i.e. instead of DoH) so that you can correlate "dig -t a example.com => 1.2.3.4" followed immediately by a TCP connection to 1.2.3.4. CDNs like Cloudflare can mitigate that for a lot of sites, but not millions of others.
However, the page you're fetching from that domain is encrypted, and that's vastly more sensitive. It's no big deal to visit somemedicinewebsite.com in a theocratic region like Iran or Texas. It may be a very big deal to be caught visiting somemedicinewebsite.com/effective-abortion-meds/buy. TLS blocks that bit of information. Today, it still exposes that you're looking at plannedparenthood.com, until if/when TLS_ECH catches on and becomes pervasive. That's a bummer. But you still have plausible deniability to say "I was just looking at it so I could see how evil it was", rather than having to explain why you were checking out "/schedule-an-appointment".
Yes that's why I listed a couple reasons why adopting ECH everywhere is not straightforwardly all good. The network operator one in particular is I think quite important. It happens that the same company with the largest pushes for "privacy" (Google) has also been constantly making it more difficult to make traffic transparent to the actual device owner (e.g. making it so you can't just drop a CA onto your phone and have all apps trust it). Things like DoH, ECH, and ubiquitous TLS (with half the web making an opaque connection to the same Cloudflare IPs) then become weaponized against device owners.
AFAIK it's still not that widely adopted, and it can be easily blocked/disabled on a network anyway.
That sounds like an Android issue, not a TLS issue. If I need to break TLS I can add my own CA. Not having TLS is not the solution. Google will find other ways to take control from you.
Which blog post? If it's anything remotely political or controversial, people have disappeared for that. You can always spot someone on HN who has never stepped outside their cushy life in a liberal democracy. The difference in mentality — between how "you" and "we" see the world — is crazy.
You just clearly don't understand that it is important that no one injects anything into your code while I am browsing it.
With http it is trivial.
So you're saying you don't care if my ISP injects a whole bunch of ads, so that I don't even see your content, only the ads, and then I blame you for duping me into watching them.
Nowadays VPN providers are popular. What if someone buys VPN service from one of the shitty ones and gets treated like I wrote above? It's your blog's reputation that gets devastated.
Or you could do a much simpler thing and support HTTPS and not expect users to change ISPs (which is not always possible, e.g. in rural areas) or change laws (which is even less realistic) to browse your (or any other) blog. Injecting ads has nothing to do with corporate MITM, it's unquestionably bad, but unrelated here.
More to the point: serving your blog with HTTPS via Let's Encrypt does not in any way forbid you from also serving it with HTTP without "depending on third parties to publish content online". It would take away from the drama of the statement though, I suppose.
It's not just your ISP, it's anyone on the entire network path, and on most networks with average security that includes any device on your local network.
Just because you don't care doesn't mean nobody cares. I don't want anyone snooping on what I browse regardless of how "safe" someone thinks it is.
My navigation habits are boring but they are mine, not anyone else's to see.
A server has no way to know whether the user cares or not, so they are not in a position to choose the user's privacy preferences.
Also: a page might be fully static, but I wouldn't want $GOVERNMENT or $ISP or $UNIVERSITY_IT_DEPARTMENT to inject propaganda, censor... Just because it's safe for you doesn't mean it's safe for everyone.
"I want my communications to be as secure as practical."
"Ah, but they're not totally secure! Which means they're totally insecure! Which means you might as well write your bank statements on postcards and mail them to the town gossip!"
My work laptop doesn't MITM anything. Do you see that as normal? Because I don't. We're adults here and I'm a tech guy; there's zero reason to control anything on my laptop.
In fact it's just a regular laptop that I fully control and installed from scratch, straight out of Apple's store. As all my company laptops have been.
And if it was company policy I would refuse indeed. I would probably not work there in the first place, huge red flag. If I really had to work there for very pressing reasons I would do zero personal browsing (which I don't do anyways).
Not even when I was an intern at a random corpo was my laptop MITMed.
My work laptop has a CA from the organization installed and all HTTP(S) traffic is passed through a proxy server which filters all traffic and signs all domains with its CA. It's relatively common for larger organizations. I've seen this in govt and banking.
To provide a European/Dutch perspective: I’m pretty sure that as a small employer myself, I am very much disallowed from using those mechanisms to actually inspect what employees are doing. Automated threat/virus scanning may be a legal gray zone, but monitoring-by-default is very much illegal, and there have been plenty of court cases about this. It is treated similarly to logging and reading all email, Slack messages, constantly screenrecording, or putting security cameras aimed at employees all day long. There may be exceptions for if specific fraud or abuse is suspected, but burden of proof is on the employer and just monitoring everyone is not justifiable even when working with sensitive data or goods.
So to echo a sister comment: while sadly it is common in some jurisdictions, it is definitely not normal.
I know it's common, but I don't think it's "normal" even if it has been "normalized". I wouldn't subject myself to that. If my employer doesn't trust me to act like an adult I don't think it's the place for me.
I could maybe understand it for non-tech people (virus scanning yadda yadda) but for a tech person it's a nuisance at best.
So you want to double the infrastructure surface to provide a different route for access for developers which may be a tiny portion of users in an organization? That's privilege right there.
Edit: I'm not saying I like it this way... but that's what you get when working in a small org in a larger org in a govt office. When I worked in a security team for a bank, we actually were on a separate domain and network. I generally prefer to work untrusted, externally and rely on another team for production deployment workflows, data, etc.
I'm lucky to be a dev both by trade and passion. I like my job, it's cozy, and we're still scarce enough that my employer and I are in a business relationship as equals: I'm just a business selling my services to another business under common terms (which in my case include trusting each other).
Using an employer-issued computer for anything but work for that employer is foolish, for a multitude of legal and other reasons. Privacy is just one of them.
This is still not that common but I used to work on a commercial web proxy that did exactly this. The only way it works is if the company pushes out a new root certificate via group policy (or something similar) so that the proxy can re-encrypt the data. Users can tell that this is being done by examining the certificate.
But this is mostly a waste of time; these days companies just install agents on each laptop to monitor activity. If you do not own the machine/network you are using, then don't visit sites that you don't want them to see.
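To illustrate the "examining the certificate" point above: a small sketch (Python standard library only, example.com as a placeholder, and assuming direct outbound connectivity on port 443) that prints the issuer of the certificate your machine actually receives. Behind an intercepting proxy the issuer will be the organization's own CA rather than a public one.

```python
import socket
import ssl

def print_issuer(host: str, port: int = 443) -> None:
    """Show who issued the certificate presented for `host`.

    On a normal network this is a public CA (e.g. Let's Encrypt); behind a
    TLS-intercepting proxy it will be the organization's own root, because
    the proxy re-signs every site on the fly with a CA that was pushed to
    the machine's trust store.
    """
    ctx = ssl.create_default_context()   # uses the system trust store
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of RDNs, each a tuple of (name, value) pairs.
    issuer = {k: v for rdn in cert["issuer"] for k, v in rdn}
    print(host, "->", issuer.get("organizationName"),
          "/", issuer.get("commonName"))

print_issuer("example.com")
```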
> I have been continually confused as to why it is being forced down everyone's throat.
Have you never sat on public wifi and tried to open an http site? These days it is highly likely to be MITM'd by the wifi provider to inject ads (or worse). Even residential ISPs that one pays for cannot be trusted not to inject content, if given the opportunity, because they noticed that they are monopolies and most users cannot do anything about it.
You don't get to choose the threat model of those who visit your site.
Have you ever opened your work laptop? It is likely MITM'd so that your employer can see everything you read and post on the internet and HTTPS won't help you.
Have you never sat on public wifi and tried to open an http site? These days it is highly likely to be MITM'd by the wifi provider to inject ads (or worse).
I honestly don't remember a single case where that happened to me. Internet user since 1997.
Agreed. I think that the push to make everything HTTPS is completely unnecessary, and in fact counterproductive to security. By throwing scary warnings in front of users when there is no actual security threat, we teach users that the scary warnings don't matter and they should just click past them. Warning when a site doesn't use TLS is a clear cut case of crying wolf.
Doesn't that mean that technically, any node in the network between you and your reader can mutate the contents of the blog in-transit without anyone being the wiser (up to and including arbitrary JavaScript inline injection)?
Devil's advocate, but maybe ISPs should all inject ads to make a point. They make money, and anyone using HTTP gets taught a free lesson on what MITM means
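For anyone curious what that "free lesson" looks like in practice, a deliberately minimal sketch of an injecting HTTP proxy (Python standard library only; the port and the injected script are made up, and real middleboxes are far more sophisticated):

```python
# A tiny demonstration of why plain HTTP is risky: an HTTP proxy that
# rewrites every page passing through it. Point a browser's HTTP proxy
# setting at localhost:8080 and any http:// page gets a line of injected
# JavaScript; https:// pages are untouched because the proxy never sees
# their plaintext. Sketch only -- it ignores chunked encoding, compression,
# keep-alive, non-GET methods and many other details.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

INJECT = b"<script>alert('this page was modified in transit')</script>"

class InjectingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # In proxy mode the request line carries the absolute URL.
        with urlopen(self.path, timeout=10) as upstream:
            body = upstream.read()
            ctype = upstream.headers.get("Content-Type", "text/html")
        body = body.replace(b"</body>", INJECT + b"</body>")
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()
```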
This puts the question into my brain, which I have never thought to pursue, of whether you could offer a self-signed cert that the user has to install for HTTPS.
Their client will complain loudly until and unless they install it, but then for those who care you could offer the best of both worlds.
Almost certainly more trouble than it's worth. G'ah, and me without any free time to pursue a weekend hobby project!
For a blog, I think the bigger risk is pervasive surveillance - gov reads all the connections and puts you on a list if the thing you are reading has the wrong keyword in it.
What is funny about HTTPS is that early arguments for its existence IIRC were often along the lines of protecting credit card numbers and personal information that needed to be sent during e-commerce
HTTPS may have delivered on this promise. Of course HTTPS is needed for e-commerce. But not all web use is commercial transactions
Today, it's unclear who or what^2 HTTPS is really protecting anymore
For example,
- web users' credit card numbers are widely available, sold on black markets to anyone; "data breaches" have become so common that few people ask why the information was being collected and stored in the first place nor do they seek recourse
- web users' personal information is routinely exfiltrated during web use that is not e-commerce, often to be used in association with advertising services; perhaps the third parties conducting this data collection do not want the traffic to be optionally inspected by web users or competitors in the ad services business
- web users' personal information is shared from one third party to another, e.g., to "data brokers", who operate in relative obscurity, working against the interests of the web users
All this despite "widespread use of encryption", at least for data in transit, where the encryption is generally managed by third parties
When the primary use of third-party mediated HTTPS is to protect data collection, telemetry, surveillance and ad services delivery,^1 it is difficult for me to accept that HTTPS as implemented is primarily for protecting web users. It may benefit some third parties financially, e.g., CA and domainname profiteers, and it may protect the operations of so-called "tech" companies though
Personal information and behavioral data are surreptitiously exfiltrated by so-called "tech" companies whilst the so-called "tech" company's "secrets", e.g., what data they collect, generally remain protected. The companies deal in information they do not own yet operate in secrecy from its owners, relentlessly defending against any requests for transparency
1. One frequent argument for the use of HTTPS put forth by HN commenters has been that it prevents injection of ads into web pages by ISPs. Yet the so-called "tech" companies are making a "business" out of essentially the same thing: injecting ads, e.g., via real-time auctions, into web pages. It appears to this reader that in this context HTTPS is protecting the "business" of the so-called "tech" companies from competition by ISPs. Some web users do not want _any_ ads, whether from ISPs or so-called "tech" companies
2. I monitor all HTTPS traffic over the networks I own using a local forward proxy. There is no plaintext HTTP traffic leaving the network unless I permit it for a specific website in the proxy config. The proxy forces all traffic over HTTPS
If HTTPS were optionally under user control, certainly I would be monitoring HTTPS traffic being automatically sent from own computers on own network to Google by Chrome, Android, YouTube and so on. As I would for all so-called "tech" companies doing data collection, surveillance and/or ad services as a "business"
Ideally one would be able to make an informed decision whether they want to send certain information to companies like Google. But as it stands, with the traffic sometimes being protected from inspection _by the computer owner_, through use of third party-mediated certificates, the computer owner is prevented from knowing what information is being sent
While Google and friends are happy to push for https, it’s dramatically easier to scam people via ads or AI generated content. Claiming plain HTTP is scary seems like a straw man tbh
The threat model of HTTP isn't site owners, it's that anyone else can change the content and you can't tell that it didn't come from the original site.
It's not a strawman, it's a real attack that we've seen for decades.
The entire guidance of "don't connect to an open wireless AP"? That's because a malicious actor who controlled the AP could read and modify your HTTP traffic - inject ads, read your passwords, update the account number you requested your money be transferred to. The vast majority of that threat is gone if you're using HTTPS instead of HTTP.
Then perhaps the problem is open APs? There are still legitimate uses for HTTP including reading static content.
Say we all move to HTTPS but then let’s encrypt goes away, certificate authority corps merge, and then google decides they also want remote attestation for two way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad at the hands of unregulated corps (and this is not a hypothetical)
> There are still legitimate uses for HTTP including reading static content.
This can still be MITM'd. Maybe they can't drain your bank account by the nature of the content, but they can still lie or something. And that's not good.
Or more problematically, inject a bunch of ads that lead users on to scams.
It would be ideal if people only browsed from trusted networks, but telling people "don't do the convenient, useful, obvious thing" only goes so far. Hence the desire to secure connections from another angle.
The problem in the above was not actually caused by the AP being open, nor is it just limited to APs in the path between you and whatever you're trying to connect to on the internet. Another common example is ISPs which inject content banners into unencrypted pages (sometimes for billing/usage alerts, other times for ads). Again, this is just another example - you aren't going to whack-a-mole an answer to trusting everything a user might transit on the internet, that's how we came to HTTPS instead.
> There are still legitimate uses for HTTP including reading static content.
There are valid reasons to do a lot of things which don't end up making sense to support in the overall view.
> Say we all move to HTTPS but then let’s encrypt goes away, certificate authority corps merge, and then google decides they also want remote attestation for two way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad at the hands of unregulated corps (and this is not a hypothetical)
There are at least 2 other decent sized independent ACME operators at this point, but say all of the certificate authority corps merge but we planned ahead and kept HTTP support: our banking/payments, sites with passwords, sites with PII, medical sites, etc. are in a stranglehold, but someone's plain text blog post about it will be accessible without a warning message. Not exactly a great victory; we'll still need to solve the actual problem just as desperately at that point.
The biggest gripe I have with the way browsers go about this is they only half consider the private use cases, and you get stuck with the rough edges. E.g. here they call out private addresses as not getting a warning, but my (fully in-browser, single page) tech support dump reader can't work when opened as a file:/// because the browser built-in for calculating an HMAC (part of WebCrypto) is for secure contexts only, and file:/// doesn't qualify. Apart from being stupid because they aren't getting rid of JavaScript support on file:/// origins until they get rid of file:/// completely (so it just means I need a shim), it's also stupid because file:/// is no less a secure origin than localhost.
I'd like for every possible "unsecure" private use case to work, but I (and the majority of those who use a browser) also have a conflicting want: to connect to public websites securely. The options and impacts for these conflicting desires have to be weighed and thought through.
The only challenge to https, as compared to http, is certificates. If not for certificates I could roll out a server with https absolutely anywhere in seconds including localhost and internal intranets.
On another note I would much prefer to skip https, as the default, and go straight to WSS (TLS WebSockets). WebSockets are superior to HTTP in absolutely every regard except that HTTP is session-less.
I distinctly remember trying to sign up for Pandora's premium plan back in 2012 and their credit card form being served and processed over HTTP. I emailed them telling them that I wanted to give them my money if they would just fix the form. They never got back to me or fixed it for several more years, while I gave my money to Spotify. Back then HTTPS was NOT the norm and it was a battle to switch people to it. Yes, it is annoying for internal networks and a few other things, but it is necessary.
There is likely zero chance the OP's recollection is remotely correct. Pandora went public in 2011 with 80 million users, the chances of a publicly listed company of this size taking payments over HTTP in 2012 are about as close to zero as can be. If nothing else, their payment processor would drop them as a customer.
In the early to mid 2000s I would believe this. But for a major e-commerce provider in 2012? That seems vanishingly improbable.
PCI DSS is the data security standard required by credit card processors for you to be able to accept credit card payments online. Since version 1.0 came out in 2004, Requirement 4.1 has been there, requiring encrypted connections when transmitting card holder information.
There certainly was a time when a commerce website had two parts: one site with all of the product stuff and catalogs and categories and descriptions, all served over HTTP (www.shop.com), and then usually an entirely separate domain (secure.shop.com) where the actual checkout process started, which used SSL/TLS. This was due to the overhead of SSL in the early 2000s and the cost of certificates. It largely went away once Intel processors got hardware accelerated instructions for things like AES, certificates became more cost-effective, and then Let's Encrypt made it simple.
Occasionally during the 2000s and 2010s you might see HTML forms that were served over HTTP where the target was an HTTPS URL, but even that was rare, simply because it was a lot of work to make it that complex instead of having the checkout button just take you to an entirely different site.
> What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings after the risk has occurred, and no opportunity to keep themselves safe in the first place.
What is the risk exactly? A man-in-the-middle redirect to a malicious https site?
HTTPS sadly offers no protection from that at all these days. At least in the past when something was HTTPS you knew that someone had to jump through some hoops and pay some real money to earn that padlock, but now any script kiddie can automate certificates for free for as many lookalike domains as they want.
It would be nice to see some way for browsers to indicate when a site has some extra validation so you could immediately see that your bank has a real certificate as is appropriate for a bank and not just Let's Encrypt. Yes, I can click the padlock icon to get that information, but it would be nice if there was some light warning for free certificates to make it more immediately obvious.
This is, to be honest, a little unfortunate. While HTTPS is very important, do we really need to verify that Blog X that I may read once a year is really who they say they are? For many sites it doesn't make a lot of sense, but we are here due to human nature.
The problem is not the site, but the network in the middle. On-path attackers typically don't care about which site they MITM in order to inject javascript e.g. to show ads, insert tracking tokens or hijack the browser for other purposes. The site is the vector, not the target.
Sounds like a great argument for keeping js disabled in my browser. Because "httpS://" does nothing whatever to sanitize the js that it delivers. And one perfectly legit site may pull in js from two dozen or more different servers. Zero of which are magically guaranteed to only deliver benevolent code.
Vs. `traceroute` suggests that would-be on-path attackers are up against a vastly smaller attack surface.
Doesn't it already do this? I keep a domain or two on HTTP to force network-level auth flows (which don't always fire correctly when hitting HTTPS) and I've gotten warnings from Chrome about those sites every time for years... Only if I've been to the site recently does the warning not show up.
Right now it only shows a little bubble in the URL bar saying "Not Secure", I think. (So, that is a "warning", in a sense.) TFA is saying there will now be an interstitial if you attempt an HTTP connection.
HSTS might also interact with this, but I'd expect an HSTS site to just cause Chrome to go for HTTPS (and then that connection would either succeed or fail).
> to force network-level auth flows (which don't always fire correctly when hitting HTTPS)
The whole point of HTTPS is basically that these shouldn't work. Vendors need to stop implementing weird network-level auths by MitM'ing the connection; DHCP has an option (the Captive-Portal option from RFC 8910) to signal to someone joining a network that they need to go to a URL to do authentication. These MitM-ers are a scourge, and often cause a litany of poor behavior in applications…
Chrome has shown the HTTP warning in Incognito mode for about a year, and has shown the warning if you're in Advanced Protection mode for about 2-3 years.
> I used to test with http://neverssl.com/ until they added HTTPS for some reason.
My first reaction was along the lines of "What? That can't possibly be right..."
After testing a bit, it looks like you can load https://neverssl.com but it'll just redirect you to a non-https subdomain. OTOH, if the initial load before redirecting is HTTPS then it shouldn't work on hotel wifi or whatever, so still seems like it defeats the purpose.
neverssl added an HTTPS version for browsers that automatically connect to HTTPS when entering a domain name (like Chrome probably will after this change, eventually). The HTTPS version of the site uses Javascript to load a random http:// subdomain of neverssl.com so automatic HTTPS redirects are still defeated.
http.rip will probably show a "website unavailable" error at some point unless you manually type in the http:// prefix.
Prediction: Wifi captive portal vendors will not react to this until after 90% of their customerbase has their funding dry up.
It is incredibly common for public wifi captive portals to be built on a stack of hacks, some of which require the inspection of HTTP and DNS requests to function.
*Yes, better tools exist, but they aren't commonly used, and require Portal, WAP and Client support. Most vendors just tell people to turn the new fancy shit off, disable HTTPS and proceed with HTTP.
To be fair, most people connecting to captive portal networks are more likely to be doing so on their phones, and I don't think iOS even allows non-Safari browsers for captive Wi-Fi login. I'm unsure how they'll fix this for Android though.
What are you talking about? You can easily build captive portals by setting up a custom DNS server, and HTTPS has nothing to do with it! In fact, local networks have been doing this very thing for years now. Apple even supports detecting this interception so the operating system can show a captive portal to the user. The OS maker gives network admins an official way to enforce captive portals, and it's not going away with HTTPS.
http://www.slackware.com/ is probably the biggest website I'm aware of that does not serve encrypted traffic[1]. But there are a few other legitimately useful resources that don't encrypt.
While this is great for end users, this just shows again what kind of monopoly Google has over the web by owning Chrome.
I work at a company that also happens to run a CDN and the sheer amount of layers Google forces everyone to put onto their stack, which was a very simple text based protocol, is mind boggling.
First there was simple TCP+HTTP. Then HTTPS came around, adding a lot of CPU load onto servers. Then they invented SPDY, which became HTTP/2, because websites exploded in asset use (mostly JS). Then they reinvented layer 4 with QUIC (in-house first), which resulted in HTTP/3. Now this.
Each of them adds more complexity and data framing onto what used to be a simple message/file exchange protocol.
And you can not opt out, because customers put their websites into a website checker and want to see all green traffic lights.
Mmmm, great: that, and mandatory key rotation every 90 days, plus needing to get a cert from an approved CA, means just that much more busy work to have an independent web presence.
I don't like people externalizing their security policy preferences. Yes this might be more secure for a class of use-cases, but I as a user should be allowed to decide my threat model. It's not like these initiatives really solve the risks posed by bad actors. We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.
HTTPS doesn't have mandatory key rotation every 90 days. LetsEncrypt does for reasons that they document, but you can go elsewhere if you'd prefer.
> I as a user should be allowed to decide my threat model
Asking you if you want to proceed is allowing you to decide your threat model.
> We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.
...and yet we have largely eliminated entire classes of issue on the web with the shift to HTTPS, to the point where asking users to opt-in to HTTP traffic is actually a practical option, raising the default security posture with minimal downside.
> HTTPS doesn't have mandatory key rotation every 90 days. LetsEncrypt does for reasons that they document, but you can go elsewhere if you'd prefer.
A lot of this discussion is about how the browsers define their security requirements on top of HTTPS/TLS/etc.
Such as what CAs they trust by default, and what's the maximum lifetime of a certificate before they won't trust it. I believe it is now 398 days? Going even lower soon.
This is what things should be like: chat programs and end-to-end encryption.
But in every case by the way, we kinda trust the makers of this software. They can easily ship backdoors to specific users. Same with crypto wallets etc.
What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings after the risk has occurred, and no opportunity to keep themselves safe in the first place.
Two hosting providers I use only offer HTTP redirects (one being so bad it serves up a self signed cert on the redirect if you attempt HTTPS) so hopefully this kicks them into gear to offer proper secure redirects.
Usually, completing the domain name by adding the final period will do the job. Instead of entering myprinter into the address bar, try myprinter. so your DNS server doesn't try to resolve myprinter, myprinter.domain, myprinter.domain.tld, and whatever other search domains have been configured. A real, fully-qualified domain ends in a period, though most tools will happily let you avoid that final period.
Alternatively, .local domains will work for mDNS-capable devices (and non-mDNS-capable devices if you like to risk things breaking randomly), and the .internal TLD has been reserved so .internal domains should also work for local addresses.
I'm sure there will be a setting flag to stop blocking http sites, or maybe even a domain exclusion which will let you set up your intranet to work on http.
You may not want to, but you can use public certs and URLs on your intranet. You can't necessarily do http-01 challenges, but DNS based challenges are feasible. There are also other ACME providers which will let you skip challenges for DCVd domains.
HTTPS is great. HTTPS without HTTP is terrible for many human person use cases. Pretending those use cases don't exist is anti-human. But for corporate person use cases HTTPS-only is more than fine, it's required. So they'll force it on us all in all contexts. But in our own personal setups we can choose to be the change we want to see in the world and run HTTP+HTTPS. Even if most of the web becomes an HTTPS-only ID-centric corporate wasteland it doesn't take that many people to make a real web. It existed before them and still does. There are more humans' websites out there now than ever. It's just getting harder and harder to find and see them using their search and browser defaults. It's not okay, but maybe this is finally a solution to eternal september and we can all just live peacefully on TCP/IP HTTP/1.1 HTTP+HTTPS with HTML while corporate persons diverge off into UDP-land with HTTP/3 HTTPS-only CA TLS only QUIC for delivering javascript applications.
Anyone have a good recipe for setting up HTTPS for one-off experiments on localhost? I generally don't because there isn't much of a compromise story there, but it's always been a security weakness in how I do tests, and if Chrome is going to start reminding me stridently I should probably bother to fix it.
How exactly are unencrypted localhost connections a security weakness? To intercept the data on a loopback connection you'd need a level of access where encryption wouldn't really add much privacy.
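For those who want the localhost recipe anyway, one minimal sketch (assuming Python plus the third-party `cryptography` package; a tool like mkcert is arguably the easier route since it also installs a locally trusted CA):

```python
# One way to get HTTPS on localhost for throwaway experiments: generate a
# self-signed certificate and serve with the standard library. The browser
# will still warn until you explicitly trust localhost.crt.
import datetime
import http.server
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=30))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("localhost")]),
                   critical=False)
    .sign(key, hashes.SHA256())
)

with open("localhost.key", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.PKCS8,
                              serialization.NoEncryption()))
with open("localhost.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("localhost.crt", "localhost.key")
httpd = http.server.HTTPServer(("127.0.0.1", 8443),
                               http.server.SimpleHTTPRequestHandler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()   # https://localhost:8443/
```

Trusting the generated certificate yourself is the "user has to install it" step discussed earlier in the thread.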
They mention it later in the article; if they drop connections to internal networks from the graph, Linux shoots up all the way to 97%.
The answer is probably that people that run Linux are far more likely to run a homelab intranet that isn't secured by HTTPS, because internal IP addresses and hostnames are a hassle to get certificates for. (Not to mention that it's slightly pointless on most intranets to use HTTPS.)
> If you exclude navigations to private sites, then the distribution becomes much tighter across platforms. In particular, Linux jumps from 84% HTTPS to nearly 97% HTTPS when limiting the analysis to public sites only.
Sounds like it's just because a large chunk of Linux usage is for web interfaces on the local machine or network, rather than everyday web browsing.
Silly question and one I should probably already know the answer to but never really got around to thinking through: are there practical concerns for not doing TLS in your home intranet?
It means that if someone has patched into your local network they can access anything in there, but they have to get in first, right? So how concerned should one be in these scenarios
(a) one has wifi with WPA2 enabled
(b) there's a Verizon-style router to the outside world but everything is wired on the house side?
Main reason is that it's hard to get certificates for intranets that all devices will properly trust.
Public CAs don't issue (free) certificates for internal hostnames, and running your own CA has the drawback that Android doesn't allow you to "properly" use a personal CA without root, splitting its CA list between the automatically trusted system CA list and the per-application opt-in user CA list. (It ought to be noted that Apple's personal CA installation method uses MDM, which is treated like a system CA list.) There are also random/weird one-offs like how Firefox doesn't respect the system certificate store, so you need to import your CA certificate separately in Firefox.
The only real option without running into all those problems is to get a regular (sub)domain name and issue certificates for that, but that usually isn't free or easy. Not to mention that if you do the SSL flow "properly", you need to issue one certificate for each device, which leaks your entire intranet to the certificate transparency log (this is the problem with Tailscale's MagicDNS as a solution). Alternatively you need to issue a wildcard certificate for your domains, but that means that every device in your intranet can have a valid SSL certificate for any other domain name on your certificate.
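The Certificate Transparency leak mentioned above is easy to see for yourself. A small sketch (Python standard library; it assumes crt.sh's unofficial JSON endpoint keeps behaving the way it does today, and example.com is a placeholder) that lists every name under a domain that has ever had a publicly trusted certificate logged:

```python
# Sketch: why per-device certificates "leak your intranet". Certificate
# Transparency logs are public, and sites like crt.sh index them, so every
# hostname you ever get a publicly trusted certificate for is searchable.
import json
import urllib.parse
import urllib.request

def ct_logged_names(domain: str) -> set[str]:
    query = urllib.parse.urlencode({"q": f"%.{domain}", "output": "json"})
    with urllib.request.urlopen(f"https://crt.sh/?{query}", timeout=30) as resp:
        entries = json.load(resp)
    names: set[str] = set()
    for entry in entries:
        # name_value holds the SAN names, newline-separated
        names.update(entry.get("name_value", "").splitlines())
    return names

for name in sorted(ct_logged_names("example.com")):
    print(name)   # nas.example.com, printer.example.com, ... all public
```

Issuing one wildcard certificate instead keeps the individual hostnames out of the logs, at the cost described above.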
> You can get $2/yr domain names on weird TLDs like .site, .cam, .link, ...
You can, but as stated - that's not free (or easy). That's still yet another fee you have to pay for... which hurts adoption of HTTPS for intranets (not to mention it's not really an intranet if it's reliant on something entirely outside of that intranet.)
If Let's Encrypt charged $1 to issue/renew a certificate, they wouldn't have made a dent in the public adoption of HTTPS certificates.
> Not necessarily, you don't route the domain externally, and use offline DNS challenge/request to renew the certificate.
I already mentioned that one, that's the wildcard method.
Perhaps you might worry about hostile IoT doodads snooping on things that aren't their business or making insecure public webpages with UPnP. If it is just devices you truly control and you never expose an unhardened device, then a walled garden can be fine.
Also, if WPA2 ever becomes extremely broken. There was a period of 3-5 yrs where WEP was taking forever to die at the same time that https was taking forever to become commonplace and you could easily join networks and steal facebook credentials out of the air. If you lived in an apartment building and had an account get hacked between maybe 2008-2011, you were probably affected by this.
We need to replace the DNS system with a blockchain-based alternative where people can own domains on-chain without renewal fees. The public key used to encrypt data would be shown alongside the IP addresses registered for that domain name (same record). The owner of the domain (an NFT) would be able to change their public encryption key at will on-chain. They would only pay a fee when they want to perform some write action; holding the domain and lookups would be free forever. No need to separate DNS from the certification and encryption layer. You know the private key, you own the domain. So much cleaner than the mess we have now.
If someone doesn't like it, they can stay behind on the old DNS system or they can launch a new blockchain with their own version of reality... It's absurd that we need to have one version of reality for the entire planet. If someone in China wants to own facebook.com, they should be allowed. Heck, it could be a separate silo per city. The age of copyright and trademark is over. I don't see AI companies distributing royalties to people who wrote its training set...
I have had HTTPS-by-default for years and I can say that we're past the point where there's noticeable year-to-year change for which sites aren't HTTPS. It's almost always old stuff that pre-dates Let's Encrypt (and presumably just nobody ever added HTTPS). The news site which stopped updating in 2007, the blog somebody last posted to in 2011, that sort of thing.
I think it's important to emphasise that although Tim's toy hypermedia system (the "World Wide Web") didn't come with baked-in security, ordinary users have never really understood that. It seems to them as though http://foo.example/ must be guaranteed to be foo.example; just making that true by upgrading to HTTPS is way easier than somehow teaching billions of people that it wasn't true, and then what they ought to do about that.
I am reminded of the UK's APP scams. "Authorized Push Payment" fraud was a situation where ordinary people think they're paying, say, "Big Law Firm" but actually a scammer persuaded them to give money to an account the scammer controls, because historically the UK's payment systems didn't care about names: to them a payment to "Big Law Firm" acct #123456789 is the same as a payment to "Jane Smith" acct #123456789, even though you'd never get a bank to open you an account in the name of "Big Law Firm" without documents the scammer doesn't have. To fix this, today's UK payment systems treat the name as a required match, not merely something for your records, so when you say "Big Law Firm" and try to pay Jane's account because you've been scammed, the software says "Wrong, are you being defrauded?" and you're safe, because you have no reason to fill in "Jane Smith" as that's not who you're intending to give money to.
We could have tried to teach all the tens of millions of UK residents that the name was ignored and so they need other safeguards, but that's not practical. Upgrading payment systems to check the name was difficult but possible.
I run my blog in unencrypted HTTP/1.1 just to make a point that we do not have to depend on third parties to publish content online.
And I noticed that WhatsApp is even worse than Chrome: it opens HTTPS even if I share HTTP links.
I applaud you for your principled stance. And of course this change in Chrome changes nothing wrt your ability to continue doing this.
Equally your preference for HTTP should not stand in the way of a more secure default for the average person.
Honestly I'd prefer that my mom didn't browse any http sites, it's just safer that way. But that doesn't detract from your ability to serve unencrypted pages which can easily be intercepted or modified by an ISP (or worse.)
That's one less third party to depend on, but you still depend on the DNS root servers, your ISP / hosting, domain registry, etc.
Third party root servers are generally used for looking up TLD nameservers, not for looking up domainnames registered to individuals publishing personal blogs^1
Fortunately, one can publish on the www without using ICANN DNS
For example http://199.233.217.201 or https://199.233.217.201
1. I have run my own root server for over 15 years
An individual cannot even mention choosing to publish a personal blog over HTTP without being subjected to a kneejerk barrage of inane blather. This is truly a sad state of affairs
I'm experimenting with non-TLS, per packet encryption with a mechanism for built-in virtual hosting (no SNI) and collision-proof "domainnames" on the home network as a reminder that TLS is not the only way to do HTTPS
It's true we depend on ISPs for internet service but that's not a reason to let an unlimited number of _additional_ third parties intermediate and surveil everything we do over the internet
> inane blather
And this is why it's a good thing that every major browser will make it more and more painful, precisely so that instead of arguments about it, we'll just have people deciding whether they want their sites accessible by others or not.
Unencrypted protocols are being successfully deprecated.
You have some weird definition of "root".
See https://en.wikipedia.org/wiki/Alternative_DNS_root - you could (and people do) run your own root server.
Let's Encrypt pushes me to run its self-updating certbot on my personal server, which is a big no-go.
I know about acme.sh, but still...
They're focused on the thing that'll get the most people up and running for the least extra work from them. When you say "push" do you just mean that's the default or are they trying to get you to not use another ACME client like acme.sh or one built in to servers you run anyway or indeed rolling your own?
Like, the default for cars almost everywhere is you buy one made by some car manufacturer like Ford or Toyota or somebody, but usually making your own car is legal, it's just annoyingly difficult and so you don't do that.
As a car mechanic, you could at least tune... until these days, when you can realistically tune only 10-15 year old models, because newer ones are just locked down computers on wheels.
>usually making your own car is legal
It may be legal but good luck ever getting registration for it.
It's actually not that bad in most states, some even have exceptions to emissions requirements for certain classes of self-built cars.
Now, getting required insurance coverage, that can be a different story. But even there, many states allow you to post a bond in lieu of an insurance policy meeting state minimums.
I counted by hand, so it might be wrong, but they appear to list and link to 86 different ACME client implementations across more than a dozen languages: https://letsencrypt.org/docs/client-options/
I've used their stuff since it came out and never used certbot, FWIW. If I were to set something up today, I'd probably use https://github.com/dehydrated-io/dehydrated.
Plus, it's one of the easier protocols to implement. I implemented it myself, and it didn't take long.
So you're absolutely not dependent on the client software, or indeed anyone else's client software.
There is a plethora of other clients besides certbot or acme.sh.
Let's Encrypt does not write or maintain certbot
ISRG (Let's Encrypt's parent entity) wrote Certbot, initially under the name "letsencrypt" but it was quickly renamed to be less confusing, and re-homed to the EFF rather than ISRG itself.
So, what you've said is true today, but historically Certbot's origin is tied to Let's Encrypt, which makes sense because initially ACME wasn't a standard protocol: it was designed to become one, but it was still under development and the only practical server implementations were both developed by ISRG / Let's Encrypt. RFC 8555 took years.
Yes, it started that way, but complaining about the current auto-update behavior of the software (not the ACME protocol), is completely unrelated to Let's Encrypt and is instead an arbitrary design decision by someone at EFF.
As far as I remember, since the beginning the certbot / Let's Encrypt client was a piece of crap, especially regarding the autodownload and autoupdate (a.k.a. autobreak) behavior.
And at the opposite end, I couldn't praise acme.sh enough: it's simple, dependency-free and reliable!
If you think about it the spirit of the internet is based on collaboration with other parties. If you want no third parties, there's always file: and localhost.
Host an onion website at home using solar energy, and the only third party your website will depend on is your internet provider :)
Onion websites also don't need TLS (they have their own built-in encryption) so that solves the previous commenter's complaint too. Add in decentralized mesh networking and it might actually be possible to eliminate the dependency on an ISP too.
> they have their own built-in encryption
What does this mean? Is that encryption not reliant on any third parties, or is it just relying on different third parties?
What about the Tor directory authorities?
There is no magic do it all yourself. Communicating with people implies dependence.
I gave up trying to build a solar panel.
And an army of volunteers and feds to run relays
What about all the third parties running relays and exit nodes?
CAs are uniquely assertive about their right to cut off your access.
My hosting provider may accidentally fuck up, but they'll apologise and fix it.
My CA fucks up, they e-mail me at 7pm telling me I've got to fix their fuck-up for them by jumping through a bunch of hoops they have erected, and they'll only give me 16 hours to do it.
Of course, you might argue my hosting provider has a much higher chance of fucking up....
So what does "CA fixes the problem" look like in your head? Because they'll give you a new certificate right away. You have to install it, but you can automate that, and it's hard to imagine any way they could help that would be better than automation. What else do you want them to do? Asking them to not revoke incorrect or compromised certificates isn't good for maintaining security.
Imagine if, hypothetically speaking, the CA had given you a certificate based on a DNS-01 challenge, but when generating and validating the challenge record they'd forgotten to prefix it with an underscore. That could have led to a certificate being issued to the wrong person if your website were a service like dyndns that lets users create custom subdomains.
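For concreteness, here's a rough sketch (hypothetical token and thumbprint values) of how a correct DNS-01 validation record is constructed per RFC 8555. The TXT record lives under the _acme-challenge. label, which is exactly the prefix the hypothetical CA forgot.

    import base64
    import hashlib

    def dns01_record(domain: str, token: str, account_jwk_thumbprint: str):
        # Key authorization = token "." thumbprint of the ACME account key.
        key_authorization = token + "." + account_jwk_thumbprint
        digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
        txt_value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
        # The record name carries the underscore prefix; without it, the CA
        # would be validating a name that ordinary subdomain users could control.
        return "_acme-challenge." + domain, txt_value

    name, value = dns01_record("blog.example", "hypothetical-token",
                               "hypothetical-account-key-thumbprint")
    print(name, "IN TXT", value)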
Except (a) your website doesn't let users create custom subdomains; (b) as the certificate is now in use, you the certificate holder have demonstrated control over the web server as surely as a HTTP-01 challenge would; (c) you have accounts and contracts and payment information all confirming you are who you say you are; and (d) there is no suggestion whatsoever that the certificate was issued to the wrong person.
And you could have gotten a certificate for free from Lets Encrypt, if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't.
An organisation with common sense policies might not need to revoke such a certificate at all, let alone revoke it with only hours of notice.
You didn't answer my question. What would the CA fixing it look like? Your hosting example had the company fix problems, not ignore them.
And have you seen how many actual security problems CAs have refused to revoke in the last few years? Holding them to their agreements is important, even if a specific mistake isn't a security problem [for specific clients]. Letting them haggle over the security impact of every mistake is much more hassle than it's worth.
> if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't
Then in this hypothetical I made a mistake and I should fix it for next time.
And I should be pretty mad at my CA for giving me an invalid certificate. Was there an SLA?
CAs have to follow the baseline rules set by Google and Mozilla regarding incident response timelines. If they gave you more time, the browsers would drop them as a supported CA.
The CAs have to follow the baseline rules set by the CA/Browser Forum which CAs are voting members of.
Mark my words, some day soon an enterprising politician will notice the CA system can be drawn into trade sanctions against the enemy of the day....
The BRs already have a deliberate carve out where a CA can notify that their government requires them to break the rules and how they'll do that, and then the browsers, on behalf of relying parties can take whatever action they deem appropriate.
If you're required to (or choose to) not tell us about it, active monitoring means we'll likely notice anyway, and your CA will probably be distrusted for not telling us - which is easier to justify precisely because there's a mechanism for telling us. Same way that there's a way to officially notify the US that you're a spy, so when you don't (because, duh, you're a spy) you're screwed 'cos you didn't follow the rules.
The tech centralization under the US government does mean there's a vulnerability on the browser side, but I wouldn't speculate about how long that would last if there's a big problem.
why not make the point with at least a self signed cert?
99% of visitors wouldn't get the intended point - they'd think he's pro-cert, but forgot to renew it or something.
Do you depend on a DNS root server to map your website name to your IP address? That's a third party.
There are ways to remove that dependency, but it's going to involve a decentralized DNS replacement like Namecoin or Handshake, many of which include their own built-in alternatives to the CA system too so if "no third parties" is something you truly care about you can probably kill two birds with one stone here.
Registrar is the big one, if yours decides to do a Google and randomly ban you and automatically decline your appeal with AI, you're stuffed.
This is generally my biggest concern. Not that I'm doing anything shady, but I've wanted to set up a potentially politically charged site in the past.
I made this to redirect HTTPS to HTTP with whatsapp:
https://multiplayeronlinestandard.com/goto.html (the reason for the domain is I will never waste time on HTTPS but github does it automatically for free up to 100GB/month)
There are dozens of us I guess that care about this kind of thing. I have never really understood the obsession with HTTPS for static content that I don't care if anyone can see I am reading, like a blog post. HTTPS should be for things that matter; everything else can, and I think should, use HTTP when it's not necessary.
Depending on yet another third party to provide what is IMHO a luxury should not be required, and I have been continually confused as to why it is being forced down everyone's throat.
It’s static while you control it. Soon as I MIIT your content it will look to your users like you updated your site with a crypto miner and a credit card form. You can publish your site with a self-signed key if you’d like and only depend on your ISP/web host provider, DNS provider, domain registrar, and the makers of your host OS and web server and a few dozen other things.
> MIIT
Man in in the?
Man In Icy Tundra
Typos happen :)
There are good arguments for it, but it's also not a coincidence that they happen to align with Google's business objectives. E.g. it's hard to issue a TLS cert without notifying Google of it.
I don't get your logic/reasoning here... could you explain?
There are public logs of every TLS cert issued by the major providers. This benefits Google.
Kinda like how Wikipedia benefits Google. Or public roads benefit Uber. Or clean water benefits restaurants
Google also knows about every domain name that gets renewed or registered... How does knowing a website has tls help in any meaningful way that would detract from society as a whole?
> HTTPS should be for things that matter
If that were the universal state, then it would be easy to tell when someone was visiting a site that mattered, and you could probably infer a lot about it by looking at the cleartext of the non-HTTPS side they were viewing right before they went to it.
You can already see what site someone visits with HTTPS. It's in the Client Hello, and is important for things like L4 load balancing (e.g. HAProxy can look at the host to choose what backend to forward the TCP packets for that connection to without terminating TLS). It's also important for network operators (e.g. you at home) to be able to filter unwanted traffic (e.g. Google's).
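To make the "it's in the Client Hello" point concrete, here is a minimal illustrative sketch (no fragmentation handling, no ECH, no real validation) of pulling the SNI hostname out of the first TLS record of a connection, which is the same trick an L4 proxy or a passive observer relies on:

    def sni_from_client_hello(record: bytes):
        """Best-effort extraction of the SNI hostname from one TLS record.

        Illustrative only: assumes the whole ClientHello fits in a single
        record and skips error handling beyond walking the structure.
        """
        if len(record) < 6 or record[0] != 0x16 or record[5] != 0x01:
            return None                                 # not a ClientHello
        hs = record[5:]                                 # handshake message
        p = 4 + 2 + 32                                  # header, version, random
        p += 1 + hs[p]                                  # session_id
        p += 2 + int.from_bytes(hs[p:p + 2], "big")     # cipher_suites
        p += 1 + hs[p]                                  # compression_methods
        end = p + 2 + int.from_bytes(hs[p:p + 2], "big")
        p += 2
        while p + 4 <= end:
            ext_type = int.from_bytes(hs[p:p + 2], "big")
            ext_len = int.from_bytes(hs[p + 2:p + 4], "big")
            if ext_type == 0:                           # server_name extension
                body = hs[p + 4:p + 4 + ext_len]
                name_len = int.from_bytes(body[3:5], "big")
                return body[5:5 + name_len].decode("ascii", "replace")
            p += 4 + ext_len
        return None

ECH moves the real name into an encrypted inner ClientHello, which is why that extension matters so much for the privacy side of this argument.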
I don't think seeing the site is too important, although there are TLS extensions to encrypt that, too[0]. In practice, a huge chunk of sites still have unique IPs, so seeing that someone is connecting to 1.2.3.4 gives you a pretty good idea of what site they're visiting. That's even easier if they're using plaintext DNS (i.e. instead of DoH) so that you can correlate "dig -t a example.com => 1.2.3.4" followed immediately by a TCP connection to 1.2.3.4. CDNs like Cloudflare can mitigate that for a lot of sites, but not millions of others.
However, the page you're fetching from that domain is encrypted, and that's vastly more sensitive. It's no big deal to visit somemedicinewebsite.com in a theocratic region like Iran or Texas. It may be a very big deal to be caught visiting somemedicinewebsite.com/effective-abortion-meds/buy. TLS blocks that bit of information. Today, it still exposes that you're looking at plannedparenthood.com, until if/when TLS_ECH catches on and becomes pervasive. That's a bummer. But you still have plausible deniability to say "I was just looking at it so I could see how evil it was", rather than having to explain why you were checking out "/schedule-an-appointment".
[0]https://developers.cloudflare.com/ssl/edge-certificates/ech/
The CIA’s website was a very early adopter of HTTPS across the board, for this very reason.
Most of the site hosted general information about the agency and its functions, but they also had a section where you could provide information.
Great point, and an excellent illustration. If it was trivial for an adversary to see that some people were visiting http://cia.gov/visitor-center-and-gift-shop-hours, but others were visiting https://cia.gov/[we-can't-see-this-part], they'd know exactly who to concentrate their rubber hose cryptography efforts on.
https://blog.cloudflare.com/announcing-encrypted-client-hell...
Yes that's why I listed a couple reasons why adopting ECH everywhere is not straightforwardly all good. The network operator one in particular is I think quite important. It happens that the same company with the largest pushes for "privacy" (Google) has also been constantly making it more difficult to make traffic transparent to the actual device owner (e.g. making it so you can't just drop a CA onto your phone and have all apps trust it). Things like DoH, ECH, and ubiquitous TLS (with half the web making an opaque connection to the same Cloudflare IPs) then become weaponized against device owners.
AFAIK it's still not that widely adopted or can be easily blocked/disabled on a network though.
That sounds like an Android issue, not a TLS issue. If I need to break TLS I can add my own CA. Not having TLS is not the solution. Google will find other ways to take control from you.
Which blog post? If it's anything remotely political or controversial, people have disappeared for that. You can always spot someone on HN who has never stepped outside their cushy life in a liberal democracy. The difference in mentality — between how "you" and "we" see the world — is crazy.
You just clearly don’t understand that it is important that no one injects anything into your content while I am browsing it.
With HTTP it is trivial.
So you're saying you don’t care if my ISP injects a whole bunch of ads, I don’t even see your content but only the ads, and I blame you for duping me into watching them.
Nowadays VPN providers are popular; what if someone buys VPN service from one of the shitty ones, gets treated like I wrote above, and it's your blog's reputation that gets devastated?
My ISP does not and if yours does, vote with your money or lobby your government to make this illegal.
And while at it, lobby to make corporate MiTM tools illegal as well.
Because if you are bothered about my little blog, you should be bothered that your employer can inspect all your HTTPS traffic.
Or you could do a much simpler thing and support HTTPS and not expect users to change ISPs (which is not always possible, e.g. in rural areas) or change laws (which is even less realistic) to browse your (or any other) blog. Injecting ads has nothing to do with corporate MITM, it's unquestionably bad, but unrelated here.
More to the point: serving your blog with HTTPS via Let's Encrypt does not in any way forbid you from also serving it with HTTP without "depending on third parties to publish content online". It would take away from the drama of the statement though, I suppose.
It's not just your ISP, it's anyone on the entire network path, and on most networks with average security that includes any device on your local network.
Just because you don't care doesn't mean nobody cares. I don't want anyone snooping on what I browse regardless of how "safe" someone thinks it is.
My navigation habits are boring but they are mine, not anyone else's to see.
A server has no way to know whether the user cares or not, so they are not in a position to choose the user's privacy preferences.
Also: a page might be fully static, but I wouldn't want $GOVERNMENT or $ISP or $UNIVERSITY_IT_DEPARTMENT to inject propaganda, censor... Just because it's safe for you doesn't mean it's safe for everyone.
And so we got The Usual Conversation:
"I want my communications to be as secure as practical."
"Ah, but they're not totally secure! Which means they're totally insecure! Which means you might as well write your bank statements on postcards and mail them to the town gossip!"
It amazes me how anti-HTTPS some people can be.
So... do you refuse to use the laptop supplied by your employer?
It does MITM between you and the HTTPS websites you browse.
It doesn't MITM anything. Do you see that as normal? Because I don't. We're adults here and I'm a tech guy, there's zero reason to control anything in my laptop.
In fact it's just a regular laptop that I fully control and installed from scratch, straight out of Apple's store. As all my company laptops have been.
And if it was company policy I would refuse indeed. I would probably not work there in the first place, huge red flag. If I really had to work there for very pressing reasons I would do zero personal browsing (which I don't do anyways).
Not even when I was an intern at a random corpo was my laptop MITMed.
My work laptop has a CA from the organization installed and all HTTP(S) traffic is passed through a proxy server which filters all traffic and signs certificates for all domains with its own CA. It's relatively common for larger organizations. I've seen this in govt and banking.
To provide a European/Dutch perspective: I’m pretty sure that as a small employer myself, I am very much disallowed from using those mechanisms to actually inspect what employees are doing. Automated threat/virus scanning may be a legal gray zone, but monitoring-by-default is very much illegal, and there have been plenty of court cases about this. It is treated similarly to logging and reading all email, Slack messages, constantly screenrecording, or putting security cameras aimed at employees all day long. There may be exceptions for if specific fraud or abuse is suspected, but burden of proof is on the employer and just monitoring everyone is not justifiable even when working with sensitive data or goods.
So to echo a sister comment: while sadly it is common in some jurisdictions, it is definitely not normal.
I know it's common, but I don't think it's "normal" even if it has been "normalized". I wouldn't subject myself to that. If my employer doesn't trust me to act like an adult I don't think it's the place for me.
I could maybe understand it for non-tech people (virus scanning yadda yadda) but for a tech person it's a nuisance at best.
So you want to double the infrastructure surface to provide a different route for access for developers which may be a tiny portion of users in an organization? That's privilege right there.
Edit: I'm not saying I like it this way... but that's what you get when working in a small org in a larger org in a govt office. When I worked in a security team for a bank, we actually were on a separate domain and network. I generally prefer to work untrusted, externally and rely on another team for production deployment workflows, data, etc.
Indeed I'm quite privileged.
I'm lucky to be a dev both by trade and passion. I like my job, it's cozy, and we're still scarce enough that my employer and I are in a business relationship as equals: I'm just a business selling my services to another business under common terms (which in my case include trusting each other).
Using an employer-issued computer for anything but work for that employer is foolish, for a multitude of legal and other reasons. Privacy is just one of them.
This is still not that common but I used to work on a commercial web proxy that did exactly this. The only way it works is if the company pushes out a new root certificate via group policy (or something similar) so that the proxy can re-encrypt the data. Users can tell that this is being done by examining the certificate.
But this is mostly a waste of time; these days companies just install agents on each laptop to monitor activity. If you do not own the machine/network you are using then don’t visit sites that you don’t want them to see.
>There are dozens of us I guess...
Shine on you crazy diamond, and all that, but...
> I have been continually confused as to why it is being forced down everyone's throat.
Have you never sat on public wifi and tried to open an http site? These days it is highly likely to be MITM'd by the wifi provider to inject ads (or worse). Even residential ISPs that one pays for cannot be trusted not to inject content, if given the opportunity, because they noticed that they are monopolies and most users cannot do anything about it.
You don't get to choose the threat model of those who visit your site.
Have you ever opened your work laptop? It is likely MITM'd so that your employer can see everything you read and post on the internet and HTTPS won't help you.
> Have you never sat on public wifi and tried to open an http site? These days it is highly likely to be MITM'd by the wifi provider to inject ads (or worse).
I honestly don't remember a single case where that happened to me. Internet user since 1997.
Agreed. I think that the push to make everything HTTPS is completely unnecessary, and in fact counterproductive to security. By throwing scary warnings in front of users when there is no actual security threat, we teach users that the scary warnings don't matter and they just should click past them. Warning when a site doesn't use TLS is a clear cut case of crying wolf.
What would the alternative be? Not warn users when they're about to login to a website that's pretending to be their bank?
Doesn't that mean that technically, any node in the network between you and your reader can mutate the contents of the blog in-transit without anyone being the wiser (up to and including arbitrary JavaScript inline injection)?
Probably a low-threat security risk for a blog.
Yes, hotels were injecting ads on their free WiFi - https://news.ycombinator.com/item?id=3804608
ISPs have been known to do the same thing.
Devil's advocate, but maybe ISPs should all inject ads to make a point. They make money, and anyone using HTTP gets taught a free lesson on what MITM means
Before turning on the dude who strives to keep the internet free, fix your corporate laptop that does MITM even for HTTPS connections.
I own a personal laptop?
I'd be happy if EU outlawed this instead of outlawing encryption.
But indeed, the ability to publish on my own outweighs the risk of someone modding my content.
Most of us here read their news from work laptops, where the employer and their MiTM supplier are a much bigger threat even for HTTPS websites.
This puts the question into my brain, which I have never thought to pursue, of whether you could offer a self-signed cert that the user has to install for HTTPS.
Their client will complain loudly until and unless they install it, but then for those who care you could offer the best of both worlds.
Almost certainly more trouble than it's worth. G'ah, and me without any free time to pursue a weekend hobby project!
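If anyone does want to poke at that weekend project: generating the self-signed certificate itself is the easy part. A rough sketch using the pyca/cryptography package (the hostname, file names, and validity period are placeholders):

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    HOSTNAME = "blog.example"   # placeholder
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, HOSTNAME)])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                    # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(HOSTNAME)]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )

    with open("blog.key", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.PKCS8,
                                  serialization.NoEncryption()))
    with open("blog.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

Distribution is the hard part: every visitor has to install and trust that certificate out of band before the warnings go away.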
> for those who care you could offer the best of both worlds.
You're not really offering that, because the first connection could've been intercepted.
For a blog, I think the bigger risk is pervasive surveillance - the gov reads all the connections and puts you on a list if the thing you are reading has the wrong keyword in it.
That's a good point to make, IMHO
What is funny about HTTPS is that early arguments for its existence IIRC were often along the lines of protecting credit card numbers and personal information that needed to be sent during e-commerce
HTTPS may have delivered on this promise. Of course HTTPS is needed for e-commerce. But not all web use is commercial transactions
Today, it's unclear who or what^2 HTTPS is really protecting anymore
For example,
- web users' credit card numbers are widely available, sold on black markets to anyone; "data breaches" have become so common that few people ask why the information was being collected and stored in the first place nor do they seek recourse
- web users' personal information is routinely exfiltrated during web use that is not e-commerce, often to be used in association with advertising services; perhaps the third parties conducting this data collection do not want the traffic to be optionally inspected by web users or competitors in the ad services business
- web users' personal information is shared from one third party to another, e.g., to "data brokers", who operate in relative obscurity, working against the interests of the web users
All this despite "widespread use of encryption", at least for data in transit, where the encryption is generally managed by third parties
When the primary use of third-party mediated HTTPS is to protect data collection, telemetry, surveillance and ad services delivery,^1 it is difficult for me to accept that HTTPS as implemented is primarily for protecting web users. It may benefit some third parties financially, e.g., CA and domainname profiteers, and it may protect the operations of so-called "tech" companies though
Personal information and behavioral data are surreptitiously exfiltrated by so-called "tech" companies whilst the so-called "tech" company's "secrets", e.g., what data they collect, generally remain protected. The companies deal in information they do not own yet operate in secrecy from its owners, relentlessly defending against any requests for transparency
1. One frequent argument for the use of HTTPS put forth by HN commenters has been that it prevents injection of ads into web pages by ISPs. Yet the so-called "tech" companies are making a "business" out of essentially the same thing: injecting ads, e.g., via real-time auctions, into web pages. It appears to this reader that in this context HTTPS is protecting the "business" of the so-called "tech" companies from competition by ISPs. Some web users do not want _any_ ads, whether from ISPs or so-called "tech" companies
2. I monitor all HTTPS traffic over the networks I own using a local forward proxy. There is no plaintext HTTP traffic leaving the network unless I permit it for a specific website in the proxy config. The proxy forces all traffic over HTTPS
If HTTPS were optionally under user control, certainly I would be monitoring HTTPS traffic being automatically sent from own computers on own network to Google by Chrome, Android, YouTube and so on. As I would for all so-called "tech" companies doing data collection, surveillance and/or ad services as a "business"
Ideally one would be able to make an informed decision whether they want to send certain information to companies like Google. But as it stands, with the traffic sometimes being protected from inspection _by the computer owner_, through use of third party-mediated certificates, the computer owner is prevented from knowing what information is being sent
In own case, that traffic just gets blocked
While Google and friends are happy to push for https, it’s dramatically easier to scam people via ads or AI generated content. Claiming plain HTTP is scary seems like a straw man tbh
The threat model of HTTP isn't site owners, it's that anyone else can change the content and you can't tell that it didn't come from the original site.
It's not a strawman, it's a real attack that we've seen for decades.
The entire guidance of "don't connect to an open wireless AP"? That's because a malicious actor who controlled the AP could read and modify your HTTP traffic - inject ads, read your passwords, update the account number you requested your money be transferred to. The vast majority of that threat is gone if you're using HTTPS instead of HTTP.
I call this the quicksand theory of network security. The threat is real but the risk overstated by orders of magnitude.
Remember firesheep? [1] [2] Go to a cafe with open WiFi, fire up extension and click on user who you want to impersonate.
Now imagine if we still lived in a world like that. Someone visits UN meeting and the rest is your imagination.
[1] https://nordvpn.com/cybersecurity/glossary/firesheep/?srslti...
[2] https://en.wikipedia.org/wiki/Firesheep
Then perhaps the problem is open APs? There are still legitimate uses for HTTP including reading static content.
Say we all move to HTTPS but then let’s encrypt goes away, certificate authority corps merge, and then google decides they also want remote attestation for two way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad at the hands of unregulated corps (and this is not a hypothetical)
> There are still legitimate uses for HTTP including reading static content.
This can still be MITM'd. Maybe they can't drain your bank account by the nature of the content, but they can still lie or something. And that's not good.
Or more problematically, inject a bunch of ads that lead users on to scams.
It would be ideal if people only browsed from trusted networks, but telling people "don't do the convenient, useful, obvious thing" only goes so far. Hence the desire to secure connections from another angle.
> Then perhaps the problem is open APs?
The problem in the above was not actually caused by the AP being open, nor is it just limited to APs in the path between you and whatever you're trying to connect to on the internet. Another common example is ISPs which inject content banners into unencrypted pages (sometimes for billing/usage alerts, other times for ads). Again, this is just another example - you aren't going to whack-a-mole an answer to trusting everything a user might transit on the internet, that's how we came to HTTPS instead.
> There are still legitimate uses for HTTP including reading static content.
There are valid reasons to do a lot of things which don't end up making sense to support in the overall view.
> Say we all move to HTTPS but then let’s encrypt goes away, certificate authority corps merge, and then google decides they also want remote attestation for two way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad at the hands of unregulated corps (and this is not a hypothetical)
There are at least 2 other decent-sized independent ACME operators at this point, but say all of the certificate authority corps merge and we planned ahead and kept HTTP support: our banking/payments, sites with passwords, sites with PII, medical sites, etc. are in a stranglehold, but someone's plain-text blog post about it will be accessible without a warning message. Not exactly a great victory; we'll still need to solve the actual problem just as desperately at that point.
The biggest gripe I have with the way browsers go about this is they only half consider the private use cases, and you get stuck with the rough edges. E.g. here they call private addresses out to not get a warning, but my (fully in browser, single page) tech support dump reader can't work when opened as a file:/// because the browser built-in for calculating an HMAC (part of WebCrypto) is for secure contexts only, and file:/// doesn't qualify. Apart from being stupid because they aren't getting rid of JavaScript support on file:/// origins until they just get rid of file:/// completely and it just means I need a shim, it's also stupid because file:/// is no less a secure origin than localhost.
I'd like every possible "unsecure" private use case to work, but I (and the majority of those who use a browser) also have a conflicting want: to connect to public websites securely. The options and impacts of these conflicting desires have to be weighed and thought through.
I understand why file:/// is limited in the files it can load, but yeah I have no idea why so many functions are gated off.
At least mongoose will serve stuff in 100KB.
The only challenge to https, as compared to http, is certificates. If not for certificates I could roll out a server with https absolutely anywhere in seconds including localhost and internal intranets.
On another note I would much prefer to skip https, as the default, and go straight to WSS (TLS WebSockets). WebSockets are superior to HTTP in absolutely every regard except that HTTP is session-less.
I distinctly remember trying to sign up for Pandora’s premium plan back in 2012 and their credit card form being served and processed over HTTP. I emailed them telling them that I wanted to give them my money if they would just fix the form. They never got back to me or fixed it for several more years while I gave my money to Spotify. Back then HTTPS was NOT the norm and it was a battle to switch people to it. Yes it is annoying for internal networks and a few other things but it is necessary.
I remember even back in the early 2000s HTTPS for credit card forms was pretty common. Surprised a company like Pandora wasn't with it by the 2010s.
There is likely zero chance the OP's recollection is remotely correct. Pandora went public in 2011 with 80 million users, the chances of a publicly listed company of this size taking payments over HTTP in 2012 are about as close to zero as can be. If nothing else, their payment processor would drop them as a customer.
I found this: https://textslashplain.com/2016/03/06/using-https-properly/ Seems like it at least partially corroborates OP's recollection!
In the early to mid 2000s I would believe this. But for a major e-commerce provider in 2012? That seems vanishingly improbable.
PCI DSS is the data security standard required by credit card processors for you to be able to accept credit card payments online. Since version 1.0 came out in 2004, Requirement 4.1 has been there, requiring encrypted connections when transmitting card holder information.
There certainly was a time when you had two parts of a commerce website: one side had all of the product stuff - catalogs, categories and descriptions - served over HTTP (www.shop.com), and then usually an entirely separate domain (secure.shop.com) where the actual checkout process started that used SSL/TLS. This was due to the overhead of SSL in the early 2000s and the cost of certificates. This largely went away once Intel processors got hardware-accelerated instructions for things like AES, certificates became more cost-effective, and then Let's Encrypt made it simple.
Occasionally during the 2000s and 2010s you might see HTML forms that were served over HTTP whose target was an HTTPS URL, but even that was rare, simply because it was a lot of work to make it that complex instead of having the checkout button just take you to an entirely different site.
> What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings after the risk has occurred, and no opportunity to keep themselves safe in the first place.
What is the risk exactly? A man-in-the-middle redirect to a malicious https site?
Yes, that. Could be to a lookalike domain name, for example.
HTTPS sadly offers no protection from that at all these days. At least in the past when something was HTTPS you knew that someone had to jump through some hoops and pay some real money to earn that padlock, but now any script kiddie can automate certificates for free for as many lookalike domains they want to.
It would be nice to see some way for browsers to indicate when a site has some extra validation so you could immediately see that your bank has a real certificate as is appropriate for a bank and not just Let's Encrypt. Yes, I can click the padlock icon to get that information, but it would be nice if there was some light warning for free certificates to make it more immediately obvious.
A MITM could replace the redirect with malicious content, as described in the blog.
This is, to be honest, a little unfortunate. While HTTPS is very important, do we really need to verify that Blog X that I may read once a year is really who they say they are? For many sites it doesn't make a lot of sense, but we are here due to human nature.
The problem is not the site, but the network in the middle. On-path attackers typically don't care about which site they MITM in order to inject javascript e.g. to show ads, insert tracking tokens or hijack the browser for other purposes. The site is the vector, not the target.
Sounds like a great argument for keeping js disabled in my browser. Because "httpS://" does nothing whatever to sanitize the js that it delivers. And one perfectly legit site may pull in js from two dozen or more different servers. Zero of which are magically guaranteed to only deliver benevolent code.
Vs. `traceroute` suggests that would-be on-path attackers are up against a vastly smaller attack surface.
Ironically, this Google page itself fails to work without Javascript enabled!
Doesn't it already do this? I keep a domain or two on HTTP to force network-level auth flows (which don't always fire correctly when hitting HTTPS) and I've gotten warnings from Chrome about those sites every time for years... Only if I've been to the site recently does the warning not show up.
Right now it only shows a little bubble in the URL bar saying "Not Secure", I think. (So, that is a "warning", in a sense.) TFA is saying there will now be an interstitial if you attempt an HTTP connection.
HSTS might also interact with this, but I'd expect an HSTS site to just cause Chrome to go for HTTPS (and then that connection would either succeed or fail).
> to force network-level auth flows (which don't always fire correctly when hitting HTTPS)
The whole point of HTTPS is basically that these shouldn't work, essentially. Vendors need to stop implementing weird network-level auths by MitM'ing the connection, and DHCP has an option to signal to someone joining a network that they need to go to a URL to do authentication. These MitM-ers are a scourge, and often cause a litany of poor behavior in applications…
I don’t believe Android IPv6 stack supports dhcp, so won’t be much use there.
HTTPS url?
Chrome has shown the HTTP warning in Incognito mode for about a year, and has shown the warning if you're in Advanced Protection mode for about 2-3 years.
http://http.rip/ is useful for testing this sort of thing. I used to test with http://neverssl.com/ until they added HTTPS for some reason.
> I used to test with http://neverssl.com/ until they added HTTPS for some reason.
My first reaction was along the lines of "What? That can't possibly be right..."
After testing a bit, it looks like you can load https://neverssl.com but it'll just redirect you to a non-https subdomain. OTOH, if the initial load before redirecting is HTTPS then it shouldn't work on hotel wifi or whatever, so still seems like it defeats the purpose.
Huh.
neverssl added an HTTPS version for browsers that automatically connect to HTTPS when entering a domain name (like Chrome probably will after this change, eventually). The HTTPS version of the site uses Javascript to load a random http:// subdomain of neverssl.com so automatic HTTPS redirects are still defeated.
http.rip will probably show a "website unavailable" error at some point unless you manually type in the http:// prefix.
IANA's http://example.com still has a plain http version.
I use http://perdu.com
Prediction: Wifi captive portal vendors will not react to this until after 90% of their customer base has had their funding dry up.
It is incredibly common for public wifi captive portals to be built on a stack of hacks, some of which require the inspection of HTTP and DNS requests to function.
*Yes, better tools exist, but they aren't commonly used, and require portal, WAP and client support. Most vendors just tell people to turn the new fancy shit off, disable HTTPS and proceed with HTTP.
To be fair, most people connecting to captive portal networks are more likely to be doing so on their phones, and I don't think iOS even allows non-Safari browsers for captive Wi-Fi login. I'm unsure how they'll fix this for Android though.
Apple does, but you have to go out of your way to do it. Every time.
What are you talking about? You can easily build captive portals by setting up a custom DNS server, and HTTPS has nothing to do with it! In fact, local networks have been doing this very thing for years now. Apple even supports detecting this interception so the operating system can show a captive portal to the user. The OS maker gives network admins an official way to enforce captive portals, and it’s not going away with HTTPS.
You can but many vendors have not yet adopted that
http://www.slackware.com/ is probably the biggest website I'm aware of that does not serve encrypted traffic[1]. but there are a few other legitimately useful resources that don't encrypt.
[1] (Except on the arm subdomain for some reason)
My first distro was Slackware. Good memories. The ARM subdomain looks drastically more maintained, posts from 2025.
Don't ever view source on slackware.com
> Don't ever view source on slackware.com
Awwww, that's the stuff right there.
It's a much nicer source to read than those javascript generated doms with 1,200 classes per tag.
> Don't ever view source on slackware.com
That's very 90s looking HTML. Large swathes of blank spaces may also indicate they're rendered somehow. PHP? CGI?
Confusingly it also sets an akamai cookie, `ak_bmsc`. Seems a bit out of place.
> Don't ever view source on slackware.com
why ?
While this is great for end users, this just shows again what kind of monopoly Google has over the web by owning Chrome.
I work at a company that also happens to run a CDN and the sheer amount of layers Google forces everyone to put onto their stack, which was a very simple text based protocol, is mind boggling.
First there was simple TCP+HTTP. Then HTTPS came around, adding a lot of CPU load onto servers. Then they invented SPDY which became HTTP2, because websites exploded in asset use (mostly JS). Then they reinvented layer 4 with QUIC (in-house first), which resulted in HTTP3. Now this.
Each of them adding more complexity and data framing onto, what used to be a simple message/file exchange protocol.
And you can not opt out, because customers put their websites into a website checker and want to see all green traffic lights.
Mmmm, great - that, and mandatory key rotation every 90 days, plus needing to get a cert from an approved CA, means just that much more busy work to have an independent web presence.
I don't like people externalizing their security policy preferences. Yes this might be more secure for a class of use-cases, but I as a user should be allowed to decide my threat model. It's not like these initiatives really solve the risks posed by bad actors. We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.
You understand that key rotation can and should be automated, right?
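It can, and the automation usually includes a watchdog. As a small illustration (not any particular tool), a stdlib-only sketch of the kind of expiry check people wire into cron or alerting alongside their ACME client; the hostname is a placeholder:

    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> float:
        """Connect, verify the chain, and report days until the leaf expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # e.g. 'Jan  2 03:04:05 2026 GMT'
                not_after = tls.getpeercert()["notAfter"]
                return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

    if __name__ == "__main__":
        remaining = days_until_expiry("example.org")   # placeholder host
        if remaining < 14:
            print(f"certificate expires in {remaining:.1f} days - check the ACME client")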
HTTPS doesn't have mandatory key rotation every 90 days. LetsEncrypt does for reasons that they document, but you can go elsewhere if you'd prefer.
> I as a user should be allowed to decide my threat model
Asking you if you want to proceed is allowing you to decide your threat model.
> We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.
...and yet we have largely eliminated entire classes of issue on the web with the shift to HTTPS, to the point where asking users to opt-in to HTTP traffic is actually a practical option, raising the default security posture with minimal downside.
> HTTPS doesn't have mandatory key rotation every 90 days. LetsEncrypt does for reasons that they document, but you can go elsewhere if you'd prefer.
A lot of this discussion is about how the browsers define their security requirements on top of HTTPS/TLS/etc.
Such as what CAs they trust by default, and what’s the maximum lifetime of a certificate before they won’t trust it. I believe it is now 2 years? Going even lower soon.
Impressive. I don't need to post my opinion on this anymore - you did it so much better than I ever could.
I see this as pretty much only a positive thing.
This is what things should be like, same as with chat programs and end-to-end encryption.
But in every case by the way, we kinda trust the makers of this software. They can easily ship backdoors to specific users. Same with crypto wallets etc.
> What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings after the risk has occurred, and no opportunity to keep themselves safe in the first place.
Two hosting providers I use only offer HTTP redirects (one being so bad it serves up a self signed cert on the redirect if you attempt HTTPS) so hopefully this kicks them into gear to offer proper secure redirects.
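For reference, the "proper" shape is roughly: the HTTP side does nothing but redirect, and the HTTPS side (with a valid cert) serves the content plus an HSTS header so returning browsers skip the insecure hop entirely. A minimal stdlib sketch of the HTTP side (the domain is a placeholder; real deployments would do this in the web server or CDN config):

    import http.server

    DOMAIN = "blog.example"   # placeholder

    class RedirectToHTTPS(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Permanent redirect to the HTTPS origin; serve no content over HTTP.
            self.send_response(301)
            self.send_header("Location", f"https://{DOMAIN}{self.path}")
            self.end_headers()

        do_HEAD = do_GET

    # The HTTPS side should answer with the real content and something like:
    #   Strict-Transport-Security: max-age=31536000
    # (HSTS is only honored when received over a valid HTTPS connection.)
    http.server.HTTPServer(("", 80), RedirectToHTTPS).serve_forever()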
What defines private sites, I wonder – beyond "such as local IP addresses like 192.168.0.1, single-label hostnames, and shortlinks like intranet/"?
Non-unique hostnames, which are RFC 1918 space, single-label hostnames, and addresses assigned to mDNS (.local).
Single-label hostnames have an issue where it’s hard to type them into a browser.
How to fix this?
Usually, completing the domain name by adding the final period will do the job. Instead of entering myprinter into the address bar, try myprinter. so your DNS server doesn't try to resolve myprinter, myprinter.domain, myprinter.domain.tld, and whatever other search domains have been configured. A real, fully-qualified domain ends in a period, though most tools will happily let you avoid that final period.
Alternatively, .local domains will work for mDNS-capable devices (and non-mDNS-capable devices if you like to risk things breaking randomly), and the .internal TLD has been reserved so .internal domains should also work for local addresses.
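The difference is easy to see from the resolver's point of view. A tiny sketch, where "myprinter" is a placeholder name that only resolves via your local search domain:

    import socket

    # "myprinter" may get the configured search domain(s) appended before lookup;
    # the trailing dot marks the name as fully qualified and suppresses that.
    for name in ("myprinter", "myprinter."):
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(name, 80)}
            print(name, "->", ", ".join(sorted(addrs)))
        except socket.gaierror as exc:
            print(name, "->", exc)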
Add a /, e.g. `shortname/`
As good an idea as this is... I do hope that localhost/127.0.0.1 will be excluded for devs/testers.
> One year from now, with the release of Chrome 154 in October 2026...
Wait a minute, how do they know what version Chrome will be at a year from now?
https://chromestatus.com/roadmap
>Chrome 154 Stable next year (Oct 7, 2026)
Chrome has a set release schedule, shipping a new major release every four weeks.
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/p...
Google has a time machine
Chrome moved to a time based release cadence.
HTTPS really sucks for our intranet. Every little web app and service needs certificates and you can't use Let's Encrypt.
I'm sure there will be a setting flag to stop blocking http sites, or maybe even a domain exclusion which will let you set up your intranet to work on http.
Maybe everything .local will already be allowed.
You may not want to, but you can use public certs and URLs on your intranet. You can't necessarily do http-01 challenges, but DNS based challenges are feasible. There are also other ACME providers which will let you skip challenges for DCVd domains.
HTTPS is great. HTTPS without HTTP is terrible for many human person use cases. Pretending those use cases don't exist is anti-human. But for corporate person use cases HTTPS-only is more than fine, it's required. So they'll force it on us all in all contexts. But in our own personal setups we can choose to be the change we want to see in the world and run HTTP+HTTPS. Even if most of the web becomes an HTTPS-only ID-centric corporate wasteland it doesn't take that many people to make a real web. It existed before them and still does. There are more humans' websites out there now than ever. It's just getting harder and harder to find and see them using their search and browser defaults. It's not okay, but maybe this is finally a solution to eternal september and we can all just live peacefully on TCP/IP HTTP/1.1 HTTP+HTTPS with HTML while corporate persons diverge off into UDP-land with HTTP/3 HTTPS-only CA TLS only QUIC for delivering javascript applications.
Does this apply to requests made by JS or just page loads?
Good stuff.
Anyone have a good recipe for setting up an HTTPS for one-off experiments in localhost? I generally don't because there isn't much of a compromise story there, but it's always been a security weakness in how I do tests and if Chrome is going to start reminding me stridently I should probably bother to fix it.
How exactly are unencrypted localhost connections a security weakness? To intercept the data on a loopback connection you'd need a level of access where encryption wouldn't really add much privacy.
Chrome treats localhost as a secure origin (regardless of HTTPS) by default - don't overthink it.
Oh, groovy; if they keep doing that I'm all good, since I usually do one-off remote stuff by SSH tunnels anyway.
I haven't used it, but I think `mkcert` is the go to solution for this. [0]
[0]: https://github.com/FiloSottile/mkcert
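Assuming you've run mkcert (or any other tool) and have a cert/key pair for localhost on disk, wiring it into a throwaway HTTPS server is a few lines of stdlib Python; the file names below are assumptions, use whatever your setup actually produced:

    import http.server
    import ssl

    # Assumed file names for a locally trusted localhost certificate.
    CERT_FILE = "localhost.pem"
    KEY_FILE = "localhost-key.pem"

    server = http.server.HTTPServer(("127.0.0.1", 8443),
                                    http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(CERT_FILE, KEY_FILE)
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    print("serving https://localhost:8443/ from the current directory")
    server.serve_forever()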
> HTTPS adoption expressed as a percentage of main frame page loads
Why is Linux adoption at 80% when MacOS/Android/Windows are at 95%? Quite unexpected.
They mention it later in the article; if they drop connections to internal networks from the graph, Linux shoots up all the way to 97%.
The answer is probably that people that run Linux are far more likely to run a homelab intranet that isn't secured by HTTPS, because internal IP addresses and hostnames are a hassle to get certificates for. (Not to mention that it's slightly pointless on most intranets to use HTTPS.)
This is addressed in the article.
> If you exclude navigations to private sites, then the distribution becomes much tighter across platforms. In particular, Linux jumps from 84% HTTPS to nearly 97% HTTPS when limiting the analysis to public sites only.
Sounds like it's just because a large chunk of Linux usage is for web interfaces on the local machine or network, rather than everyday web browsing.
Speculation, but: there are probably quite a few Linux systems displaying internal dashboards over HTTP, with the page set to auto-refresh.
Tendency of Linux users to have local resources that lack TLS? phpmyadmin, netdata, duckdb ui, git-webui, whatever.
Silly question and one I should probably already know the answer to but never really got around to thinking through: are there practical concerns for not doing TLS in your home intranet?
It means that if someone has patched into your local network they can access anything in there, but they have to get in first, right? So how concerned should one be in these scenarios
(a) one has wifi with WPA2 enabled
(b) there's a Verizon-style router to the outside world but everything is wired on the house side?
Main reason is that it's hard to get certificates for intranets that all devices will properly trust.
Public CAs don't issue (free) certificates for internal hostnames and running your own CA has the drawback that Android doesn't allow you to "properly" use a personal CA without root, splitting its CA list between the automatically trusted system CA list and the per-application opt-in user CA list. (It ought to be noted that Apple's personal CA installation method uses MDM, which is treated like a system CA list). There's also random/weird one-offs like how Firefox doesn't respect the system certificate store, so you need to import your CA certificate separately in Firefox.
The only real option without running into all those problems is to get a regular (sub)domain name and issue certificates for that, but that usually isn't free or easy. Not to mention that if you do the SSL flow "properly", you need to issue one certificate for each device, which leaks your entire intranet to the certificate transparency log (this is the problem with Tailscale's MagicDNS as a solution). Alternatively you need to issue a wildcard certificate for your domains, but that means that every device in your intranet can have a valid SSL certificate for any other domain name on your certificate.
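If you want to see what that leak looks like in practice: crt.sh exposes, as far as I know, a JSON view over the CT logs that you can query per domain. A rough sketch (the endpoint and response shape are my assumptions, and the real site rate-limits):

    import json
    import urllib.parse
    import urllib.request

    DOMAIN = "example.com"   # placeholder
    url = "https://crt.sh/?q=" + urllib.parse.quote("%." + DOMAIN) + "&output=json"

    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)

    # Each logged certificate lists the names it covers; per-host issuance for
    # an intranet therefore publishes the intranet's layout to anyone who looks.
    names = sorted({n for e in entries for n in e.get("name_value", "").splitlines()})
    print("\n".join(names))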
If someone is in your LAN then you have bigger problems than them snooping on you while you talk to your fridge.
Like eBay? Slightly different https://nullsweep.com/why-is-this-website-port-scanning-me/
> get a regular (sub)domain name
You can get $2/yr domain names on weird TLDs like .site, .cam, .link, ...
> which leaks your entire intranet to the certificate transparency log
Not necessarily, you don't route the domain externally, and use offline DNS challenge/request to renew the certificate.
> You can get $2/yr domain names on weird TLDs like .site, .cam, .link, ...
You can, but as stated - that's not free (or easy). That's still yet another fee you have to pay for... which hurts adoption of HTTPS for intranets (not to mention it's not really an intranet if it's reliant on something entirely outside of that intranet.)
If LetsEncrypt charged $1 to issue/renew a certificate, they wouldn't have made a dent in the public adoption of HTTPS certificates.
> Not necessarily, you don't route the domain externally, and use offline DNS challenge/request to renew the certificate.
I already mentioned that one, that's the wildcard method.
Perhaps you might worry about hostile IoT doodads snooping on things that aren't their business or making insecure public webpages with UPnP. If it is just devices you truly control and you never expose an unhardened device, then a walled garden can be fine.
Also, if WPA2 ever becomes extremely broken. There was a period of 3-5 yrs where WEP was taking forever to die at the same time that https was taking forever to become commonplace and you could easily join networks and steal facebook credentials out of the air. If you lived in an apartment building and had an account get hacked between maybe 2008-2011, you were probably affected by this.
Everything that matters in your home intranet should already be password protected and firewalled.
Thank God, I am not using Google
I for one hate HTTPS. Some HTML5 APIs like geolocation do not work without it, and you get big fat warnings if you don't use it.
From having to pay for it in the past to now having to set up lets-encrypt, certbot, https-ingresses!
God, half my hobbyist and raw non-helm kubernetes config is https related. https-ingress.yaml is gigantic!
Is this really the best devex we could come up with?
Security theatre is all it is. Protect us from petty thieves but let our employers and the gov MITM our comms.
> Security theatre is all it is. Protect us from petty thieves
Even picking the most dismissive wording you can, you contradict yourself.
We need to replace the DNS system with a blockchain-based alternative where people can own domains on-chain without renewal fees. The public key used to encrypt data would be shown alongside the IP addresses registered for that domain name (same record). The owner of the domain (an NFT) would be able to change their public encryption key at will on-chain. They would only pay a fee when they want to perform some write action; holding the domain and lookups would be free forever. No need to separate DNS from the certification and encryption layer. You know the private key, you own the domain. So much cleaner than the mess we have now.
If someone doesn't like it, they can stay behind on the old DNS system or they can launch a new blockchain with their own version of reality... It's retarded that we need to have one version of reality for the entire planet. If someone in China wants to own facebook.com, they should be allowed. Heck, it could be a separate silo per city. The age of copyright and trademark is over. I don't see AI companies distributing royalties to people who wrote its training set...
"cannot connect" is next ?