Pop-up from Hell: On the growing opacity of web programs
In the 1990s, the web had a fair amount of quirky web pages, often created just for fun. The GeoCities hosting service, which has partly been archived, is a witness to this, and there are even academic books, such as Dot-Com Design, documenting this history.
This blog post is based on a talk Popup from hell: Reflections on the most annoying 1990s program that I did recently at an (in person!) meeting of the PROGRAMme project. Thanks to everyone who attended for lively discussion and useful feedback!
Pop-ups and the Internet's original sin
As far as I can tell, the necessary technical components for the most annoying pop-up that I will get to below, window.open and window.onbeforeunload, have been supported by web browsers from the very start. There was nothing evil
about opening new windows in your web applications. After all, many regular desktop applications
still open new windows when you open a detailed view or start writing a new email.
Figure 1. Pop-up advertisement from the 1990s
Figure 2. "Pop-up" advertisement from the 2020s
As told in The Internet's Original Sin,
the first use of
window.open for advertising was probably at the Tripod.com
web hosting site, which used it to disassociate the advertisements from the (potentially
not-safe-for-work) content hosted on the site by its users.
Since then, pop-ups have become "one of the most hated tools in the advertiser's toolkit."
The early web allowed opening pop-ups whenever a web page wanted. You could successfully call window.open while a web page was loading. By changing the focus, the web page could also make the pop-up appear underneath the main browser window, so that the user would only see it once they closed the browser. This technique is apparently known as pop-under ads. Another annoying trick was to open the advertisement when the user left a page, by handling the window.onbeforeunload event that was triggered when the user closed the window or navigated somewhere else.
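The two tricks described above can be sketched in a few lines of today's JavaScript. This is a minimal illustration of the technique, not code from any actual ad network; the URLs, window names and sizes are made up:

```javascript
// Pop-under: open the ad window, then immediately push it behind the
// main window by taking focus away from it and re-focusing the opener.
function openPopUnder(url) {
  var ad = window.open(url, "ad", "width=400,height=300");
  if (ad) {
    ad.blur();
    window.focus();
  }
  return ad;
}

// Exit ad: open one more advertisement when the user closes the page
// or navigates somewhere else.
function installExitAd(url) {
  window.onbeforeunload = function () {
    window.open(url, "exit", "width=400,height=300");
  };
}
```

Modern browsers defeat both tricks: window.open calls outside a user gesture are blocked, and handlers for the unload events are no longer allowed to open windows.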
In the early 2000s, web browsers started fighting back by blocking pop-ups. A simple option
was to block all
window.open calls and notify the user. More sophisticated blockers blocked pop-ups only during page loading or unloading, but allowed them, for example, in response to a button click.
Pop-up blockers did not stop the pop-up ad war for long. So-called hover ads, or in-page pop-ups, recreate the annoying pop-up experience by simulating a pop-up window inside the page itself. A pop-up blocker cannot detect this, because the pop-up is not created using some easily identifiable system functionality like window.open, but using ordinary HTML elements and JavaScript.
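An in-page pop-up of this kind needs nothing more than ordinary DOM manipulation. A minimal sketch (the styling and wording here are made up):

```javascript
// Simulate a pop-up with an ordinary positioned <div>. Because this is
// just page content, a pop-up blocker has no reliable way to detect it.
function showInPagePopup(doc, message) {
  var popup = doc.createElement("div");
  popup.style.cssText =
    "position:fixed;top:20%;left:30%;width:40%;padding:1em;" +
    "background:white;border:2px solid black;z-index:9999";
  popup.textContent = message;

  var close = doc.createElement("button");
  close.textContent = "Close";
  close.onclick = function () { popup.remove(); };
  popup.appendChild(close);

  doc.body.appendChild(popup);
  return popup;
}
```

In a browser, this would be called as showInPagePopup(document, "Special offer!").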
In other words, we got from annoying pop-up ads in the 1990s, like the one in Figure 1, to equally annoying in-page pop-up ads in the 2020s, like the one in Figure 2.
Pop-ups and the quirky social web
If you were trying to outsmart your friends, the most obvious thing to try was to create a pop-up window that cannot be closed, by opening exactly the same pop-up again in the window's onbeforeunload handler. Or, even better, opening two windows!
Figure 3. Popup from Hell running in IE5 on Windows 98 virtual machine
The above is an actual implementation of the idea, running in Internet Explorer 5 on Windows 98 in VirtualBox (which you can get from the Web Archive).
The code fits in one page. All you need to do is to write a function that opens a pop-up using
window.open, adds some
<marquee> to the newly opened window and then registers an event
handler for the
body.onbeforeunload event to open two new such windows. As a bonus, I also
created a fancy background using the all-time favourite spray tool in Microsoft Paint!
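In today's JavaScript, the core of the scheme looks roughly as follows. This is a reconstruction of the idea described above, not the exact code from the demo, and the window parameters and marquee text are made up:

```javascript
// Every pop-up registers a handler so that, when the pop-up is closed,
// two more pop-ups just like it are opened.
function popupFromHell() {
  var win = window.open("", "_blank", "width=300,height=150");
  if (!win) return; // a modern pop-up blocker stops the scheme entirely
  win.document.write("<marquee>You cannot close me!</marquee>");
  win.onbeforeunload = function () {
    popupFromHell();
    popupFromHell();
  };
}
```

Closing one window now opens two, so the number of windows grows exponentially until the browser (or the machine) gives up.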
Growing opacity of web programs
However, the way scripts and applications that run on the web are written has also changed:
From windows to in-page elements. As I wrote above, web pages nowadays rarely use system windows and instead re-create a similar experience inside the web page. Pop-up ads are one obvious example, but the same thing happens with standard dialog windows; dialogs such as window.alert have all been replaced by Bootstrap's modal and similar in-page components. The <canvas> element makes it possible to avoid using not just system windows, but also browser DOM elements. One example of this is Google Docs, which is switching to canvas-based rendering.
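The difference is easy to see in code. The two sketches below put the same text on screen, but only the first leaves it where the browser, and tools built on top of it, can see it (a minimal illustration; in a browser, you would pass in document):

```javascript
// With the DOM, the text remains part of the page structure that
// screen readers, extensions and scripts can inspect.
function renderWithDom(doc, text) {
  var p = doc.createElement("p");
  p.textContent = text;
  doc.body.appendChild(p);
  return p;
}

// With <canvas>, the same text becomes pixels, opaque to everything
// except the program that drew them.
function renderWithCanvas(doc, text) {
  var canvas = doc.createElement("canvas");
  var ctx = canvas.getContext("2d");
  ctx.font = "16px sans-serif";
  ctx.fillText(text, 10, 20);
  doc.body.appendChild(canvas);
  return canvas;
}
```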
What do we gain?
Many of the above developments are motivated by performance. Bundlers and minifiers make resulting files smaller to download; WebAssembly makes it possible to produce efficient compiled code for high-performance calculations; even the Google Docs move to canvas-based rendering is motivated by performance.
What do we lose?
A more interesting question is what do we lose by the growing opacity of programs on the web. One interesting issue is accessibility, because assistive technologies like screen readers rely on being able to analyse the structure (DOM) of the web page. In the Google Docs announcement, the authors later added an "update" explaining that they will ensure such tools will continue to work, but this requires writing extra code that instructs the tools, rather than just relying on the fact that they can see the structure of the page.
More generally, the more the structure of the web page is hidden from the system, the more we
lose the ability to build tools that somehow leverage the structure. If you view the source of
a web page in modern developer tools (Figure 4), you can see the structure and manually modify
it (a useful trick to read some paywalled content!) You can also write a quick script to extract
some data from the page (I did this when dealing with some of our student records!)
Browser extensions like Greasemonkey rely on
this and there is also active research on how the openness of the web could enable new
user experience. For an impressive example, see Geoffrey Litt's Wildcard project.
None of this will be possible when all web rendering moves to <canvas>.
Figure 4. Viewing an HTML table in Firefox Developer Tools
Editor war for the 21st century
Yet, there is one interesting difference between Atom and Visual Studio Code, often attributed (maybe not surprisingly, if you read the above section) to performance and extensibility. When you create an extension for the Atom editor, it gets full access to the structure of the document. You can modify this structure manually (Figure 5) and you can create extensions that modify it in whatever way you want. This may go wrong (of course), but it also means that you can create quite powerful Atom-based tools. (I used this several years ago to create an interactive F# environment for data science.)
The side-effect of this design is that Atom extensions need to run in the same process as the main editor user interface, in order to be able to access the DOM. This is one of the reasons that make the Atom editor slower. (Reminding me of the "Eight megabytes and constantly swapping" joke about Emacs.)

In Visual Studio Code, you can also look at the structure manually (as in Figure 5), but this cannot be done programmatically from an extension. Extensions are loaded as separate processes and do not have full access to the document structure. This improves performance, but it limits what extensions can do. They can only access the editor through the extension API. This includes all the expected extensibility points, such as adding support for new programming languages or creating new views, but it does not let you extend the editor in ways that the designers did not already think of. (At least, not without forking the editor itself.)
Figure 5. Manually editing the DOM in the Atom editor (and breaking things)
What is good for the User?
Many of the developments that make programs more opaque are presented as being good for the user. After all, who would not want a more efficient developer tooling or document processor? Yet, many of the developments that make programs opaque also take something from the users. In many of the discussions in the PROGRAMme project that I'm happy to be a part of, Liesbeth De Mol started using a distinction between a "user" and a "User" that I will follow here.
Figure 6. Disadvantages of disabling ad personalization
The User with an upper-case U is an abstract persona constructed by corporations like Apple or Facebook that only needs what the companies provide. In contrast, a lower-case user is a real human with interests not determined by the software they use. The transition from user to User follows an earlier transition from hackers to users: hackers not only had their own interests, but also often changed the software they used to suit their needs.
The growth of opacity is only a good thing for the User. It offers more polished and efficient programs that perfectly address the needs of the User. In doing so, it leaves aside the needs of the lower-case u user and, even more, the needs of hackers who may actually want to modify the software they use to suit their needs. (In the case of pop-up blocking, the User defined by the software producers presumably wants to "engage with relevant advertisement opportunities" and see "more useful ads", Figure 6.)
Embedding of programs in a system
To make the idea of program opacity more precise, we can talk about the way in which a program is embedded in the system that hosts it. In the case of the web, this is the relationship between the web application and the browser; in the editor war, it is the relationship between an extension and the editor. In some cases, the distinction is less exact. A Smalltalk program lives in the same image as the rest of the system, but we can still draw a line between the two.
The way a program utilizes the system in which it runs is a scale with two extreme ends that I'm going to call shallow and deep embedding (borrowing terms used in the functional programming language community for talking about domain-specific languages):
Figure 7. Smalltalk user interface
Shallow embedding. In this case, the program reuses as many features of the system as possible. Many aspects of the program are delegated to the system, meaning that the program may not have full control over them (for example, the look of its user interface). Because of this, the system can see (to an extent) what the program works with and what it is trying to do. It can use this to implement assistive technologies or block undesirable program behaviour. (Stephen Kell's reflective Unix project can be seen as making Unix programs more shallowly embedded in the operating system.)
Deep embedding. Here, the program leverages only the minimum provided by the system and uses this to re-implement features (such as user interface elements) that the system already provides. One example is custom text rendering using <canvas> when the browser already supports text rendering through HTML. Another example is on the boundary between the Smalltalk runtime and the host operating system (rather than a Smalltalk program running inside the Smalltalk runtime), where the Smalltalk runtime does not leverage many of the operating system's features.
Laws of program opacity evolution
In the case of the web, we can see an evolution from shallow embedding in the early days to deep embedding in contemporary web technologies. In the case of the Atom vs. Visual Studio Code war, the latter forces extensions into shallow embedding by restricting what they can do (offering only a limited API), so that they have to rely on system features for most of what they do.
We can use this perspective to look at a number of other programs and systems. For example, the Apple AppStore attempts to prevent one particular kind of deep embedding through its terms and conditions when it forbids applications from downloading and running interpreted code. (Apple does not go as far as it possibly could. In particular, it does let applications control the look of their user interface, but you can easily imagine how that could become a requirement for "consistency reasons".)
Is growth of opacity a law?
Lack of resources. Simply not having resources to re-invent the wheel means that programmers will likely reuse more of the features offered by the system, rather than trying to recreate functionality on their own.
Regulation. As already mentioned, the Apple AppStore prevents certain forms of deep embedding through its terms and conditions; regulating programs in this way is another means of restricting opacity.
When is opacity bad?
My second closing question is whether (the possibility of) the growth of opacity is a good or a bad thing. It is certainly a characteristic of software that has a significant impact on the user (or the User). A more opaque system may be more efficient or may achieve functionality that is otherwise impossible, but it is less open to modification and extension. I guess the question may be whether you, as a user, are more on the "side of the system" or on the "side of the program".
On the side of the system. If I use a web browser to view a page that has some annoying pop-ups or a poorly sorted data table, I'm on the side of the system - I want to be able to use the system to control the web page, block pop-ups and extract data. In this case, shallow embedding allows me to do more.
On the side of the program. If I'm writing an application for iOS, I do not want to be told by Apple how to do things. If I want to download and interpret code (or create quirky user interface), I should be allowed to do that! In this case, deep embedding is the more desirable end of the spectrum.
As with my previous work looking at Commodore 64 BASIC, I think
there are interesting things to be learned from the history of the web. The growth of opacity
on the modern web has definitely allowed us to build more complex web applications, but it also
has its costs. Learnability and extensibility (if we all move to
<canvas> and WASM) are
two such things. Now, you could argue that this is not really true, because many components that
are involved in modern web page construction are open-source (or even hosted on NPM) and are
in fact easier to reuse. I do not think this is the case, because it widens the gap between
the programmer and the (upper-case) User.
Published: Friday, 8 October 2021, 1:14 PM
Author: Tomas Petricek
Typos: Send me a pull request!
Tags: academic, research, web, philosophy, talks