Thomas Bandt

Navigating The Complexities Of Browser-Based End-to-End Encryption: An Overview

This post aims to provide an overview of the complexities of implementing end-to-end encryption (E2EE) in modern web-based applications.

Published on Thursday, 29 February 2024

Technical Prerequisites

Although web-based software has been gaining importance since at least the end of the 1990s, web browsers, as the underlying "execution environments," long lacked elementary capabilities for implementing and executing cryptographic functions. This inevitably led to the emergence of independent implementations written in plain JavaScript, a language all common browsers can execute, such as the Stanford JavaScript Crypto Library (Cairns et al., 2016, p. 6; Halpin, 2014, p. 1; Stark et al., 2009, p. 1).

This development did not go without criticism, for example in the essay "Javascript Cryptography Considered Harmful" (Ptacek, 2011). Even the best cryptographic JavaScript libraries suffered, and still suffer, from fundamental problems whose causes lie outside their sphere of influence. For instance, there can be no guarantee that the libraries were not manipulated in transit before being executed in the browser (Cairns et al., 2016, p. 6; Halpin, 2014, p. 2). Additionally, some cryptographic functions, such as random number generation, execute too slowly in JavaScript for secure use (Cairns et al., 2016, p. 6).

In 2012, the World Wide Web Consortium (W3C) decided to establish a working group to develop a standard for integrating cryptographic functions into web browsers, which included all the relevant browser manufacturers at the time (including Apple, Google, Microsoft, Mozilla, and Opera) (Halpin, 2014, p. 1). The underlying idea was that essential cryptographic functions were already available through many browsers and the underlying operating systems and should be made accessible to web applications through a unified interface. The advantage: considerable effort and care had already gone into developing these existing functions and continually reviewing their security, an effort from which web applications could benefit directly (Halpin, 2014, p. 2).

The result of this work was the Web Cryptography API, or WebCrypto API, which was available as of January 2017 and implemented by all relevant browsers (Can I use; W3C, 2017). With this standardized interface, it is possible to implement complex cryptographic protocols in web applications and rely on the browser's or operating system's own cryptographic functions without having to develop them independently. These include key generation, encryption and decryption, creating and verifying digital signatures, hashing, and random number generation (Halpin, 2014, p. 3).

Despite the assumption that the use of existing and already "hardened" functions from operating systems and browsers ensures a high level of security, accompanying security analyses were also conducted during the standard's development to uncover potential vulnerabilities early on. For instance, Cairns et al. were able to identify three possible attack vectors, two of which were immediately eliminated (Cairns et al., 2016, p. 26).

However, the Web Cryptography API only provides the basic cryptographic functions; it offers no protection against the problems inherent to the web platform itself that endanger web applications (Cairns et al., 2016, p. 6; Halpin, 2014, p. 2).

Trust in Browser Environments

The issues surrounding E2EE in the browser fundamentally stem from a lack of trust, rooted in operational characteristics of web applications and browsers that differ significantly from those of traditional desktop applications and native mobile apps.

Desktop and mobile applications offer a range of features that can increase trust. For example, they must be signed by developers before release in app stores, assuring users that only entities authenticated to the platform operators have created a particular version of an application (Karthick & Binu, 2017, p. 688; C. Miller, 2011). Distribution through app stores also ensures that all users receive and run the same version of an application. Users may then decide whether to install it based on whether others consider it safe (Meier, 2021).

A web application, however, is subject to different conditions. With each page visit, individual components of that page, such as the HTML document, script files, and stylesheets, are loaded from a web server. The exact content of these application components can potentially change with each use. It's common practice in A/B testing to provide different versions of software to individual users or groups (Meier, 2021).

What may be advantageous from a product manager's perspective, however, has disadvantages in the context of an E2EE implementation. Currently, it is virtually impossible for users of a standard browser to verify the code of such a web application. The browser only checks the validity of the SSL/TLS certificate securing the connection between client and server. This merely ensures that the data and application components have been delivered unaltered from the server they were requested from. What they actually contain, and what is ultimately executed, remains opaque to the users.

For instance, if a targeted attack were carried out against certain users, in which a manipulated variant of the web application is delivered to them, those users would have no way to detect it. Such a manipulated application could, for example, log user inputs before encryption and exfiltrate them unnoticed (Meier, 2021). This could be implemented directly by the developers and operators of the web application, or introduced by third parties through targeted attacks.

While desktop and mobile apps can also execute malicious code despite all security measures by platform operators, for instance through supply-chain attacks (Heinbockel et al., 2017), the web servers, content delivery networks, and other infrastructure involved in delivering web applications add further potential attack vectors. Ultimately, a web application, retrieved from a web server and executed in a web browser, faces a dilemma: the "leap of trust" users must grant it is inherently greater than that for a comparable desktop or mobile application:

“You can't tell your web server, as it controls what you see in your web browser, won't just make the web page transmit an unencrypted version of whatever message you are reading or authoring, somewhere you wouldn't want it to go. So the browser silently allows the server administrator to watch over your messaging. You MUST trust your server. It's inevitable.”

(SecuShare, 2013)

Security Objectives, Threat Modeling, and Best Practices

Even granting the trust users place in an application, it remains a significant challenge to actually provide them with a secure one. Articulating one's own security objectives, serious threat modeling, and identifying and adhering to best practices can all help.

The articulation of security objectives for an E2EE implementation must be done realistically and humbly, especially given the limited trust model in the browser environment, to ensure the goals are indeed attainable.

For example, Kobeissi demonstrated that the end-to-end encryption of the ProtonMail web client is de facto not present, as the web server, which ProtonMail itself does not trust, delivered the cryptographic libraries used on the client side (Kobeissi, 2021, p. 9). The goals must be adapted to existing conditions and communicated transparently. Hale and Komlo note, "[…] we define an application achieving any variant of end-to-endness as one that is A) clear and B) honest in its 1) description of the ends-set to users and 2) potential changes in the ends set over time […]" (Hale & Komlo, 2022, p. 14).

Besides defining the (non-)objectives, the creation and validation of a threat model are also important:

"Threat Modeling as a heuristic method allows the methodical examination of a system design or software architecture to identify, limit, and rectify security vulnerabilities cost-effectively and early - ideally in the design phase - in the software development process. However, Threat Modeling can also be successfully used in the verification phase or even later - after the release - for auditing the software. By early detection of security vulnerabilities, the cost of remediation can be reduced to one hundredth."

(Schwab et al., 2010)

For conducting threat modeling, there are a number of established methods available such as STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege), PASTA (Process for Attack Simulation and Threat Analysis) (Shevchenko et al., 2018), or attack trees (Schneier, 1999), as well as a selection of helpful software applications (Shi et al., 2022, p. 38).

In addition to the methodical-analytical approach of threat modeling, it is also advisable to look at problems that are repeatedly encountered in web-based software and for which best practices have been established. For this purpose, the Open Web Application Security Project (OWASP) is a suitable resource. OWASP, for example, offers the Application Security Verification Standard, a tool for designing and auditing web applications in terms of their security properties (van der Stock et al., 2021).


What do you think? Drop me a line and let me know!