How much safer or more at risk is the average internet user now compared to 10 years ago?
If you read the news headlines, you might think we have gone from bad to worse, when we have in fact never been safer. Clearly, we haven’t “solved” security, yet it feels like we’ve checked a lot of items off the list.
To find out, Chester Wisniewski breaks down the advancements we’ve made to see whether they are making a difference.
Chester Wisniewski is a principal research scientist at next-generation security leader, Sophos.
Security in the early days
Today’s world wide web is a very different place from when it sprang from the mind of Sir Tim Berners-Lee in 1990. While the early web was free and open, it was a little too open: there was no privacy or encryption to protect information moving between the many servers and routers involved in connecting the world.
Netscape helped solve this by introducing encryption through Secure Sockets Layer (SSL), later updated to a formal specification, Transport Layer Security (TLS). At the time, TLS was intended to secure your shopping cart, credit card information and occasionally your login ID and password.
Security by default
Strangely, this remained true right up until 2013, when an NSA contractor, Edward Snowden, decided to tell the world how much online information the United States was gathering, and was able to gather, on almost everyone in the world.
Despite this, as late as October 2013, a few months after Snowden’s leaks, only 27.5 per cent of web pages loaded by Mozilla Firefox were using some form of encryption.
This prompted people in the security industry to take an interest and work to improve the security and privacy of internet users globally.
The thinking was: the only way to solve this problem is to encrypt everything and make it a requirement, not an afterthought. This spurred on the introduction of new technologies and standards to ensure that things were secure by default and to prevent things from being downgraded to use old insecure methods.
New technology and standards didn’t eliminate the risk though. If someone can meddle with your network connections, they can simply redirect to an imposter site to steal your private information. This is known as a machine-in-the-middle attack (MitM), which could be conducted by providing false DNS (Domain Name System) responses, operating an evil twin Wi-Fi access point, or directly by ISPs (Internet Service Providers), governments, law enforcement, and others. Companies can even intercept TLS traffic using middleboxes designed to inspect protected traffic.
Fixing the problem
Even if the site you’re visiting is using HTTPS, it likely must listen on insecure HTTP (HyperText Transfer Protocol) and redirect users to the secure site, as web browsers typically default to trying HTTP first.
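The HTTP-to-HTTPS redirect described above can be sketched as a small helper that builds the server’s response. This is a minimal illustration, not any real server’s code; the function name and example URL are my own.

```python
# Minimal sketch of the HTTP -> HTTPS redirect a web server performs.
# Function name and example URL are illustrative, not from a real server.

from urllib.parse import urlsplit, urlunsplit

def build_https_redirect(url: str) -> tuple[int, dict]:
    """Return the status code and headers redirecting an insecure
    HTTP request to the equivalent HTTPS URL."""
    parts = urlsplit(url)
    # Rebuild the URL with the scheme swapped to https; everything
    # else (host, path, query) is preserved.
    secure = urlunsplit(("https", parts.netloc, parts.path,
                         parts.query, parts.fragment))
    # 301 tells the browser the move is permanent.
    return 301, {"Location": secure}

status, headers = build_https_redirect("http://shop.example.com/cart?id=7")
print(status, headers["Location"])
# 301 https://shop.example.com/cart?id=7
```

Note that this first plain-HTTP round trip is exactly the window an attacker on the network can exploit, which is what the next step addresses.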
To tell the browser to make the initial connection over HTTPS, in 2012 Google proposed a new HTTP header: HSTS (HTTP Strict Transport Security). This header lets website administrators declare that their site should only ever be loaded over HTTPS and that browsers should never attempt connections over plain HTTP on port 80.
Of course, this still means you could be at risk of a downgrade to HTTP the very first time you visit a site, before your browser has observed the HSTS header. This is known as SSL stripping, which is the type of MitM attack HSTS was designed to cure in the first place.
To solve this problem, HSTS has been extended with a “preload” option.
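As a sketch of what the Strict-Transport-Security header looks like and how a browser might read it, here is a small parser for its directives, including the “preload” option. The header value shown is a typical example; the parser is my own illustration, not any browser’s actual code, and real browsers validate far more strictly.

```python
# Sketch of parsing a Strict-Transport-Security header value into its
# directives. Illustrative only; real browsers do stricter validation.

def parse_hsts(value: str) -> dict:
    policy = {"max_age": 0, "include_subdomains": False, "preload": False}
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # How long (in seconds) the browser must remember the policy.
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            # Signals eligibility for browsers' built-in preload lists,
            # closing the first-visit SSL-stripping window.
            policy["preload"] = True
    return policy

# A two-year policy covering subdomains, eligible for preloading:
print(parse_hsts("max-age=63072000; includeSubDomains; preload"))
# {'max_age': 63072000, 'include_subdomains': True, 'preload': True}
```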
Late in 2013, to encourage all sites to deploy TLS encryption, Google announced its Chrome browser would begin to warn people when accessing insecure web pages and it would rank unencrypted sites lower in its search results.
Because of Google’s policy, and hard pushing by the security community as a whole, the number of sites supporting secure connections doubled in just three years. Google statistics now show that in most countries, the pages Chrome users visit are encrypted 95 per cent of the time.
The most recent move by browser vendors to push us into an always encrypted world began in November of 2020 when Mozilla introduced an HTTPS-only option to Firefox. When enabled, this feature attempts all connections securely over HTTPS and falls back to a warning if HTTPS is not available. Chrome followed by adding a similar option and turned it on by default in April 2021.
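The HTTPS-only behaviour boils down to: upgrade every navigation to HTTPS, and show a warning rather than silently falling back to HTTP if the secure connection fails. The function below is a hypothetical sketch of that logic, not Firefox or Chrome code.

```python
# Sketch of HTTPS-only mode: upgrade the URL, and surface a warning
# instead of silently retrying over plain HTTP. Hypothetical logic.

def https_only_navigate(url: str, https_reachable: bool) -> str:
    # Upgrade any http:// navigation to https:// before connecting.
    if url.startswith("http://"):
        url = "https://" + url[len("http://"):]
    if https_reachable:
        return f"loaded {url}"
    # No silent downgrade: the user must explicitly accept the risk.
    return f"warning: {url} does not support a secure connection"

print(https_only_navigate("http://example.com/", https_reachable=True))
# loaded https://example.com/
```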
That’s fantastic progress, but beyond the high rates of encryption, are site operators deploying technologies like HSTS, and are they used widely enough to help protect users on untrusted networks?
Bottom line
The web has never been safer.
With 95 per cent of web pages encrypted, and most of those that aren’t presenting little risk, this is great news, especially during the busy online shopping seasons.
Bit by bit, the security community has worked together to improve standards, apply pressure on laggards and lower the costs of communicating securely over the internet. The amount of progress that has been made is impressive considering what the scale of the problem once was.
However, the job is by no means complete. Only 31.6 per cent of sites use HSTS, which shows that even features that are free and provide significant security improvements are not as widely deployed as they should be.
Securing the application layer has massive implications for users and their safety. There’s still a risk that the providers of the networks we use will spy on us, sell our browsing data to advertising networks, or be compromised by cybercriminals.
But, because of HSTS and TLS, you can pretty much browse and communicate as freely as you please with negligible risk of a bad outcome, even over untrusted Wi-Fi and mobile networks.