This post is based upon a Twitter thread that was originally published on December 2, 2018.
There is a story behind how Tektronix Smalltalk became branded as an AI language in 1984.
In the 1960s-70s, Tektronix Inc had grown to become an industry-leading electronics company competing head-to-head with Hewlett-Packard. In the early ’80s Tektronix was rapidly going digital and money was being poured into establishing a Computer Research Lab (CRL) within Tek Labs. Two early successful CRL projects were my effort to create a viable performance Smalltalk virtual machine that ran on Motorola 680xx family processors and Roger Bates/Tom Merrow’s effort to develop an Alto-like 680xx based workstation for use in the lab.
The workstation was called the Magnolia and eventually over 50 of them were built. One for everybody in the fully staffed CRL. Tom’s team ported Unix to Magnolia and started working on Unix window managers. I got Smalltalk-80 up on it using my virtual machine implementation.
CRL was rapidly staffing up with newly hired PhD-level CS researchers and each of them got a Magnolia. They were confronted with the choice of programming in a multi-window but basically shell-level Unix environment or a graphically rich Smalltalk live dev environment. Most of them, including most of the AI group, chose to build their research prototypes using Smalltalk—particularly after a little evangelism from Ward Cunningham. Many cool projects were built and demonstrated to Tek executives at the annual Tek Labs research forums (internal “science fairs”) in ’81-’83.
During that time there was a lot of (well deserved) angst within Tek about its seeming inability to ship new products incorporating new technologies and addressing new markets in a timely manner. At the fall 1982 research forum Tom, myself, and Rick LeFaive, CRL’s director, (and perhaps Ward) sat down with some very senior Tek execs in front of a couple of Magnolias and ran through the latest demos. The parting words from the execs were: “We have to do something with this!”
Over the next couple months Tom Merrow and I developed the concept for a “low-cost” ($10k) Smalltalk workstation. Rebecca Wirfs-Brock had been the software lead of the recently successful 410x “low cost” graphics terminals and we thought we could leverage its mechanicals for our workstation. Over the first half of ’83 Roger Bates and Chip Schnarel prototyped a 68010-based processor and display that would fit inside a 4105 enclosure. It was code named “Pegasus”.
After much internal politics, in late summer of 1983 we got the go ahead to turn Pegasus into a product. An intrapreneurial “special products unit” (SPU) was formed to take Pegasus to market. The SPU management was largely the team that had initially done the 410x terminals.
So, finally we get to the AI part of the story. Mike Taylor was the marketing manager of the Pegasus SPU. One day in late August of ’83 I was chatting with Mike in a CRL corridor. He said something like: Smalltalk is very cool, but to market it we have to tell people what they can use it for.
I initially muttered some words about exploratory programming, objects, software reuse, etc. Then I paused, as wheels turned in my mind. AI was in the news because of Japan’s Fifth Generation Computing Initiative and I had just seen an issue of Time magazine that included coverage of it. I thought: objects, symbolic computing, garbage collection, LISP. And I responded to Mike: Smalltalk is an AI language.
Mike said: What!?? You mean Pegasus is a $10K AI machine? That’s something I can sell!
Before I knew what happened the Pegasus SPU was rechristened as AIM (AI Machines) and we were trying to figure out how we were going to support Common Lisp and Prolog in addition to Smalltalk.
The Pegasus was announced as the Tektronix 4404 in August 1984 at that year’s AAAI conference. The first production units shipped in January 1985 at a list price of $14,950. Even at that price it was considered a bargain.
In my previous post, I introduced the concept of a Personal Digital Habitat (PDH) which I defined as: a federated multi-device information environment within which a person routinely dwells. If you haven’t read that post, you should do so before continuing.
That previous post focused on the experience of using a PDH. It established a vision of a new way to use and interact with our personal collections of computing devices. Hopefully it is an attractive vision. But, how can we get from where we are today to a world where we all have our own comfortable digital habitat?
A PDH provides a new computing experience for its inhabitant.[1] Historically, a new computing experience has resulted in the invention of new operating systems to support that experience—timesharing, GUI-based personal computing, touch-based mobile computing, and cloud computing all required fundamental operating system reinvention. To fully support the PDH vision we will ultimately need to reinvent again and create operating systems that manage a federated multi-device PDH rather than a single computing device.
An OS is a complex layered collection of resource managers that control the use of the underlying hardware and services that provide common capabilities to application programs. Operating systems were originally developed to minimize waste of scarce expensive “computer time.” Generally, that is no longer a problem. Today it is more important to protect our digital assets and to minimize wasting scarce human attention.
Modern operating systems are seldom built up from scratch. More typically, new operating systems evolve from existing ones[2] through the addition (and occasional removal) of resource managers and application service layers in support of new usage models. A PDH OS will likely be built by adding new layers upon an existing operating system.
You might imagine a group of developers starting a project today to create a PDH OS. Such an effort would almost certainly fail. The problem is that we don’t yet understand the functionality and inhabitant experience of a PDH and hence we don’t really know which OS resource managers and service layers need to be implemented.
Before we will know enough to build a PDH OS we need to experience building PDH applications. Is this a chicken or egg problem? Not really. A habitat-like experience can be defined and implemented by an individual application that supports multiple devices—but the application will need to provide its own support for the managers and services that it needs. It is by building such applications that we will begin to understand the requirements for a PDH OS.
Some developers are already doing something like this today as they build applications that are designed to be local-first or peer-to-peer based or that support collaboration/multi-user sync. Much of the technology applicable to those initiatives is also useful for building self-contained PDH applications.
If you are an application developer who finds the PDH concept intriguing, here is my recommendation. Don’t wait! Start designing your apps in a habitat-first manner and thinking of your users as app inhabitants. For your next application don’t just build another single device application that will be ported or reimplemented on various phone, tablet, desktop, and web platforms. Instead, start from the assumption that your application’s inhabitant will be simultaneously running it on multiple devices and that they deserve a habitat-like experience as they rapidly switch their attention among devices. Design that application experience, explore what technologies are available that you can leverage to provide it, and then implement it for the various types of platforms. Make the habitat-first approach your competitive advantage.
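To make the habitat-first recommendation a little more concrete, here is a minimal, purely illustrative sketch in JavaScript. Every name in it (HabitatDocument, change, onChange) is hypothetical; in a real application the replication machinery would come from whatever local-first or sync technology you adopt. The point is simply that the application edits one logical, inhabitant-scoped artifact that every federated device observes, rather than device-local state.

```javascript
// Hypothetical sketch only: an inhabitant-scoped artifact that every
// federated device opens by identity and observes for changes.
class HabitatDocument {
  constructor(id) {
    this.id = id;          // stable identity of the artifact, not a device path
    this.state = { body: "" };
    this.listeners = [];
  }
  // Apply a local edit. A real implementation would also record the change
  // (e.g., as a CRDT operation) and replicate it to the inhabitant's other devices.
  change(mutator) {
    mutator(this.state);
    this.listeners.forEach(listener => listener(this.state));
  }
  // Each device subscribes; an edit made on the tablet shows up on the
  // desktop moments later through the same callback.
  onChange(listener) {
    this.listeners.push(listener);
  }
}

// Usage: the same draft essay, opened on whichever device currently has
// the inhabitant's attention.
const essay = new HabitatDocument("drafts/pdh-essay");
essay.onChange(state => console.log("re-render on this device:", state.body));
essay.change(state => { state.body = "A Personal Digital Habitat is…"; });
```

The application-facing shape (open an artifact by identity, mutate it, observe changes) can stay the same on every device; only the presentation layer needs to vary.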
If you have comments or questions, tweet them mentioning @awbjs. I first started talking about personal digital habitats in a twitter thread on March 22, 2021. That and subsequent twitter threads in March/April 2021 include interesting discussions of technical approaches to PDHs.
[1] I intend to generally use “inhabitant” rather than “user” to refer to the owner/operator of a PDH.
[2] For example, Android was built upon Linux and iOS was built starting from the core of MacOS X.
I simply want to think about all my “digital stuff” as things that are always there and always available. No matter where I am or which device I’m using.
… My attention should always be on “my stuff.” Different devices and different services should fade into the background.
In the eight years since I wrote that blog post not much has changed in how we think about and use our various devices. Each device is still a world unto itself. Sure, there are cloud applications and services that provide support for coordinating some of “my stuff” among devices. Collaborative applications and sync services are more common and more powerful—particularly if you restrict yourself to using devices from a single company’s ecosystem. But my various devices and their idiosyncratic differences have not “faded into the background.”
Why haven’t we done better? A big reason is conceptual inertia. It’s relatively easy for software developers to imagine and implement incremental improvements to the status quo. But before developers can create a new innovative system (or users can ask for one) they have to be able to envision it and have a vocabulary for talking about it. So, I’m going to coin a term, Personal Digital Habitat, for an alternative conceptual model for how we could integrate our personal digital devices. For now, I’ll abbreviate it as PDH because each of the individual words is important. However, if it catches on I suspect we will just say habitat, digihab, or just hab.
A Personal Digital Habitat is a federated multi-device information environment within which a person routinely dwells. It is associated with a personal identity and encompasses all the digital artifacts (information, data, applications, etc.) that the person owns or routinely accesses. A PDH overlays all of a person’s devices[1] and they will generally think about their digital artifacts in terms of common abstractions supported by the PDH rather than device- or silo-specific abstractions. But presentation and interaction techniques may vary to accommodate the physical characteristics of individual devices.
People will think of their PDH as the storage location of their data and other digital artifacts. They should not have to think about where among their devices the artifacts are physically stored. A PDH is responsible for making sure that artifacts are available from each of its federated devices when needed. As a digital repository, a PDH should be a “local-first software” system, meaning that it conforms to:
… a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.
There is one important difference between my concept of a PDH and what Kleppmann, et al. describe. They talk about the need to “synchronize” a user’s data that may be stored on multiple devices and this will certainly be a fundamental (and hopefully transparent) service provided by a PDH. But they also talk at length about multi-user collaboration where simultaneous editing may occur. With every person having their own PDH, support for inter-PDH collaborative editing will certainly be important. But I think focusing on multi-user collaboration is a distraction from the personal nature of a PDH. The design of a PDH should optimize for intra-PDH activities. At any point in time, a person’s attention is generally focused on one thing. We will rarely make simultaneous edits to a single digital artifact from multiple devices federated into our PDHs. But we will rapidly shift our attention (and our editing behaviors) among our devices. Tracking intra-PDH attention shifts is likely to be useful for intra-PDH data synchronization. I’m intrigued by the possibility of using explicit attention shifts such as touching a keyboard or picking up a tablet as clues about user intent.
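As a thought experiment, here is a small sketch of how explicit attention signals might be fed into a PDH’s synchronization priorities. Everything in it is invented for illustration (the device names, the signal strings, and the prioritizeSync function); it describes no existing system, just the general idea that the device about to receive the inhabitant’s attention should be brought up to date first.

```javascript
// Illustration only: attention shifts as synchronization hints.
const habitat = {
  activeDevice: null,
  pendingChanges: ["drafts/pdh-essay", "diagram-1"], // artifacts edited recently
};

// Devices report coarse attention signals: a keyboard was touched,
// a tablet was picked up, a phone was unlocked, etc.
function reportAttention(deviceId, signal) {
  console.log(`${deviceId}: ${signal}`);
  habitat.activeDevice = deviceId;
  prioritizeSync(deviceId);
}

// Push recently edited artifacts to the newly active device ahead of
// the lazy background replication to everything else.
function prioritizeSync(deviceId) {
  for (const artifact of habitat.pendingChanges) {
    console.log(`flushing ${artifact} to ${deviceId} first`);
  }
}

// e.g., the tablet's sensors notice it was just picked up:
reportAttention("tablet", "picked-up");
```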
So let’s make this all a little more concrete by stepping through a usage scenario. Assume that I’m sitting at my desk, in front of a large display (possibly hardwired to a “desktop” computer). Also sitting on the desk are a laptop and a tablet with a stylus, and a “phone” is in my pocket. These devices are all federated parts of my PDH. Among other things, this means that I have access to the same artifacts from all the devices and that I can frictionlessly switch among them.
Initially, I’m typing formatted text into an essay that is visible on the desktop display.
I pick up the tablet. The draft essay with the text I just typed is visible.
I use the stylus to drag apart two paragraphs creating a drawing area and then make a quick block diagram. As I draw on the tablet, the desktop display also updates with the diagram.
As I look at my drawing in context, I notice a repeated word in the preceding paragraph so I use a scratch-out gesture to delete the extra word. That word disappears from the desktop display.
I put down the tablet, shift my attention back to the large display, and use a mouse and keyboard to select items in the diagram and type labels for them. If I glance at the tablet, I will see the labels.
Shifting back to the tablet, I use the stylus to add a couple of missing lines to the diagram.
Suddenly, the desktop speaker announces, “Time to walk to the bus stop, meeting with Erik at Starbucks in 30 minutes.” The announcement might have come from any of the devices; the PDH software is aware of the proximity of my devices and makes sure only one speaks the alert.
I put the laptop into my bag, check that I have my phone, and head to the bus stop.
I walk fast and when I get to the bus stop I see on my phone that the bus won’t arrive for 5 minutes.
From the PDH recent activities list on the phone I open the draft essay and reread what I recently wrote and attach a voice note to one of the paragraphs.
Later, after the meeting, I stay at Starbucks for a while and use the laptop to continue working on my essay, following the suggestions in the voice notes I made at the bus stop.
…and life goes on in my Personal Digital Habitat…
Personal Digital Habitat is an aspirational metaphor. Such metaphors have had an important role in the evolution of our computing systems. In the 1980s and 90s it was the metaphor of a virtual desktop with direct manipulation of icons corresponding to metaphorical digital artifacts that made personal computers usable by a majority of humanity. Since then we added additional metaphors such as clouds, webs, and stores that sell apps. But our systems still primarily work in terms of one person using one physical “computer” at a time—even though many of us, in a given day, frequently switch our attention among multiple computers. Personal Digital Habitat is a metaphor that can help us imagine how to unify all our computer-based personal devices and simplify our digital lives.
This essay was inspired by a twitter thread from March 22, 2021. The thread includes discussion of some technical aspects of PDHs. Future blog posts may talk about the technology and how we might go about creating them.
[1] The person’s general purpose computer-like devices, not the hundreds of special purpose devices such as “smart” light switches or appliances in the surrounding ambient computing environment. A PDH may mediate a person’s interaction with such ambient devices but such devices are not a federated part of the PDH. Should a smart watch be federated into a PDH? Yes, usually. How about a heart pacemaker? NO!
Tudor is the driving force behind gtoolkit.com, a modern and innovative programming environment that builds upon and extends classic Smalltalk development concepts. Tudor is someone worth listening to, but his tweet was something that I really couldn’t agree with. I found myself more in agreement with Andrei Pacurariu’s reply.
I started to tweet a response but quickly found that my thoughts were too complex to make a good tweet stream. Instead, I wrote the following mini essay.
Concretely, software is just bits in electronic storage that control and/or are manipulated by processors. Abstractions are the building blocks that enable humans to design and build complex software systems out of bits. Abstractions are products of our minds—they allow us to assign meaning to clusters (some large, some small) of bits. They allow us to build software systems without thinking about billions of bits or how processors work.
We manifest some useful and generally simple abstractions (instructions, statements, functions, classes, modules, etc.) as “code” using other abstractions we call “languages.” Languages give us a common vocabulary for communicating about those abstract building blocks and for producing the corresponding bits. There are many useful tools that can and should be created to help us understand the code-level operation of a system.
But most systems we build today are too complex to be fully understood at the level of code. In designing them we must use higher-level abstractions to conceptualize, compose, and organize code. Abstract machines, frameworks, patterns, roles, stereotypes, heuristics, constraints, etc. are examples of such higher-level abstractions.
The languages we commonly use provide few, if any, mechanisms for directly identifying such higher-level abstractions. These abstractions may manifest as naming or other coding conventions but recognizing them as such depends upon a pre-existing shared understanding between the writer and readers of the code.
If such conventions have been adequately formalized and followed, a tool may be able to help a human identify and display the usage of higher-level abstractions within a code base. But traditionally, such tools are rare and often unreliable. One problem is that abstractions can overlap. So we (or the tools) not only need to identify and classify abstractions but also identify the clustering and organization of abstractions. Sometimes this results in multiple views that can provide totally different conceptions of the operation of the code we are looking at.
Software designers/architects often use informal diagrams to capture their conceptualization of the structure and interactions of the higher-level abstractions of their designs. Such diagrams can serve as maps that help developers understand the territory of the code.
But developers are often skeptical of such diagrams. Experience (and folklore) has taught them that design diagrams likely do not accurately reflect the actual state of a code base. So, how do we familiarize ourselves with a new software code base that we wish to modify or extend? Most commonly we just try to reverse engineer its architecture/design by examining the code. We tell ourselves that the code provides the ground truth and that we or our tools can generate truthful design docs from the code because the code doesn’t lie. But when humans read code we can easily miss or misinterpret the higher-level abstractions that are lurking among and above the code. It’s a real challenge to imagine a tool that can do better.
Lacking design documents or even a good sense of a system’s overall design we then often make local code changes without correctly understanding the design context within which that code operates. Such local changes may “work” to solve some immediate problem. But as they accumulate over long periods such changes erode the design integrity of the overall system. And they contribute to the invalidation of any pre-existing design documents and diagrams that may have at one point in time provided an accurate map of the system.
We definitely need better tools for integrating higher-level system architecture/design abstractions with code—or at least for integrating documentation. But don’t tell me that creating such documents is a waste of time or that any such documents should be ignored. I’ve spent too many decades trying to guess how a complex program was intended to work by trying to use the code to get into the mind of the original designers and developers. Any design documentation is useful documentation. It will probably need to be verified against the current state of the code but even when the documents are no longer the truth they often provide insights that help our understanding of the system. When I asked Rebecca Wirfs-Brock to review this little essay, she had an experience to relate:
I remember doing a code/architecture review for a company that built software to perform high reliability backups…and I was happy that the architectural documents were say 70% accurate as they gave me a good look into the mind of the architect who engineered these frameworks. And I didn’t think it worthwhile to update it to reflect truth. As it was a snapshot in time. Sometimes updating may be worthwhile, but as in archeological digs, you’ve gotta be careful about this.
Diagrams and other design documents probably won’t be up to date when somebody reads them in the future. But that’s not an excuse to not create them. If you care about the design integrity of your software system you need to provide future developers (including yourself) a map of your abstract design as you currently know it.
Gilad Bracha recently posted an article “Bits of History, Words of Advice” that talks about the incredible influence of Smalltalk but bemoans the fact that:
…today Smalltalk is relegated to a small niche of true believers. Whenever two or more Smalltalkers gather over drinks, the question is debated: Why?
(All block quotes are from Gilad’s article unless otherwise noted.)
Smalltalk actually had a surge of commercial popularity in the first half of the 1990s but that interest evaporated almost instantaneously in 1996. Most of Gilad’s article consists of his speculations on why that happened. I agree with many of Gilad’s takes on this subject, but his involvement and perspective with Smalltalk started relatively late in the commercial Smalltalk lifecycle. I was there pretty much at the beginning so it seems appropriate to add some additional history and my own personal perspective. I’ll be directly responding to some of Gilad’s points, so you should probably read his post before continuing.
Let’s Start with Some History
Gilad worked on the Strongtalk implementation of Smalltalk that was developed in the mid-1990s. His post describes the world of Smalltalk as he saw it during that period. But to get a more complete understanding we will start with an overview of what had occurred over the previous twenty years.
1970-1979: Creation
Starting in the early 1970s the early Smalltalkers and other Xerox PARC researchers invented the concepts and mechanisms of Personal Computing, most of which are still dominant today. Guided by Alan Kay’s vision, Dan Ingalls along with Ted Kaehler, Adele Goldberg, Larry Tesler, and other members of the PARC Learning Research Group created Smalltalk as the software for the Dynabook, their aspirational model of a personal computer. Smalltalk wasn’t just a programming language—it was, using today’s terminology, a complete software application platform and development environment running on the bare metal of dedicated personal supercomputers. During this time, the Smalltalk language and system evolved through at least five major versions.
1980-1984: Dissemination and Frustration
In the 1970s, the outside world only saw hints and fleeting glimpses of what was going on within the Learning Research Group. Their work influenced the design of the Xerox Star family of office machines and after Steve Jobs got a Smalltalk demo he hired away Larry Tesler to work on the Apple Lisa. But the LRG team (later called SCG for Software Concepts Group) wanted to directly expose their work (Smalltalk) to the world at large. They developed a version of their software, Smalltalk-80, that would be suitable for distribution outside of Xerox and wrote about it in a series of books and a special issue of the widely read Byte magazine. They also made a memory snapshot of their Smalltalk-80 implementation available to several companies that they thought had the hardware expertise to build the microcoded custom processors that they thought were needed to run Smalltalk. The hope was that the companies would design and sell Smalltalk-based computers similar to the one they were using at Xerox PARC.
But none of the collaborators did this. Instead of building microcoded Smalltalk processors they used economical mini-computers or commodity microprocessor-based systems and coded what were essentially simple software emulations of the Xerox supercomputer Smalltalk machines. The initial results were very disappointing. They could nominally run the Xerox-provided Smalltalk memory image but at speeds 10–100 times slower than the machines being used at PARC. This was extremely frustrating as their implementations were too slow to do meaningful work with the Smalltalk platform. Most of the original external collaborators eventually gave up on Smalltalk.
But a few groups, such as me and my colleagues at Tektronix, L Peter Deutsch and Alan Schiffman at Xerox, and Dave Ungar at UC Berkeley developed new techniques leading to drastically better Smalltalk-80 performance using commodity microprocessors with 32-bit architectures.
At the same time Jim Anderson, George Bosworth, and their colleagues, working exclusively from the Byte articles, independently developed a new Smalltalk that could usefully run on the original IBM PC with its Intel 8088 processor.
1985-1989: Productization
By 1985, it was possible to actually use Smalltalk on affordable hardware and various groups went about releasing Smalltalk products. In January 1985 my group at Tektronix shipped the first of a series of “AI Workstations” that were specifically designed to support Smalltalk-80. At about the same time Digitalk, Anderson’s and Bosworth’s company, shipped Methods—their first IBM PC Smalltalk product. This was followed by their Smalltalk/V products, whose name suggested a linkage to Alan Kay’s Vivarium project. Servio Logic introduced GemStone, an object-oriented database system based on Smalltalk. In 1987 most of the remaining Xerox PARC Smalltalkers, led by Adele Goldberg, spun off into a startup company, ParcPlace Systems. Their initial focus was selling Smalltalk-80 systems for workstation class machines.
Dave Thomas’ Object Technology International (OTI) worked with Digitalk to create a Macintosh version of Smalltalk/V and then started to develop the Smalltalk technology that would eventually be used in IBM’s VisualAge products. Tektronix, working with OTI, started developing two new families of oscilloscopes that used embedded Smalltalk to run its user interface. Both OTI and Instantiations, a Tektronix spin-off, started working on tools to support team-scale development projects using Smalltalk.
1990-1995: Enterprise Smalltalk
As many software tools vendors discovered over that period, it takes too many $99 sales to support anything larger than a “lifestyle” business. In 1990, if you wanted to build a substantial software tools business you had to find the customers who spent substantial sums on such tools. Most tools vendors who stayed in business discovered that most of those customers were enterprise IT departments. Those organizations, used to being IBM mainframe customers, often had hundreds of in-house developers and were willing to spend several thousand dollars per initial developer seat plus substantial yearly support fees for an effective application development platform.
At that time, many enterprise IT organizations were trying to transition from mainframe driven “green-screen” applications to “fat clients” with modern Mac/Windows type GUI interfaces. The major Smalltalk vendors positioned their products as solutions to that transition problem. This was reflected in their product branding: ParcPlace Systems’ ObjectWorks was renamed VisualWorks, Digitalk’s Smalltalk/V became VisualSmalltalk Enterprise, and IBM used OTI’s embedded Smalltalk technologies as the basis of VisualAge. Smalltalk was even being positioned in the technical press and books as the potential successor to COBOL.
Dave Thomas’ 1995 article “Travels with Smalltalk” is an excellent survey of the then current and previous ten years of commercial Smalltalk activity. Around the time it was published, IBM consolidated its Smalltalk technology base by acquiring Dave’s OTI, and Digitalk merged with ParcPlace Systems. That merger was motivated by their mutual fear of IBM Smalltalk’s competitive power in the enterprise market.
1996-Present: Retreating to the Fringe
Over six months in 1996, Smalltalk’s place in the market changed from the enterprise darling COBOL replacement to yet another disappointing tool with a long tail of deployed applications that needed to be maintained.
The World Wide Web diverted enterprise attention away from fat clients and suddenly web thin clients were the hot new technology. At the same time Sun Microsystems’ massive marketing campaign for Java convinced enterprises that Java was the future for both web and standalone client applications. Even though Java never delivered on its client-side promises, the commercial Smalltalk vendors were unable to counter the Java hype cycle and development of new Smalltalk-based enterprise applications stopped. The merged ParcPlace-Digitalk rapidly declined and only survived for a couple more years. IBM quickly pivoted its development attention to Java and used the development environment expertise it had acquired from OTI to create Eclipse.
Niche companies assumed responsibility for support of the legacy enterprise Smalltalk customers with deployed applications. Cincom picked up support for VisualWorks and Visual Smalltalk Enterprise customers, while Instantiations took over supporting IBM’s VisualAge Smalltalk. Today, twenty five years later, there are still hundreds of deployed legacy enterprise Smalltalk applications.
Dan Ingalls, at Apple in 1996, used an original Smalltalk-80 virtual image to bootstrap Squeak Smalltalk. Dan, Alan Kay, and their colleagues at Apple, Disney, and some universities used Squeak as a research platform, much as they had used the original Smalltalk at Xerox PARC. Squeak was released as open source software and has a community of users and contributors throughout the world. In 2008, Pharo was forked from Squeak, with the intent of being a streamlined, more industrialized, and less research focused version of Smalltalk. In addition, several other Smalltalks have been created over the last 25 years. Today Smalltalk still has a small, enthusiastic, and somewhat fragmented base of users.
Why Didn’t Smalltalk Win?
Gilad, in his article, lists four major areas of failure of execution by the Smalltalk community that led to its lack of broad long-term adoption. Let’s look at each of them and see where I agree and disagree.
Lack of a Standard
But a standard for what? What is this “Smalltalk” thing that wasn’t standardized? As Gilad notes, there was a reasonable degree of de facto agreement on the core language syntax, semantics and even the kernel class protocols going back to at least 1988. But beyond that kernel, things were very different.
Each vendor had a slightly different version – not so much a different language, as a different platform.
And these platforms weren’t just slightly different. They were very different in terms of everything that made them a platform. The differences between the Smalltalk platforms were not like those that existed at the time between compilers like GCC and Microsoft’s C/C++. A more apt analog would be the differences between platforms like Windows, Solaris, various Linux distributions, and Mac OS X. A C program using the standard C library could be easily ported among all these platforms but porting a highly interactive GUI C-based application usually required a major rewrite.
The competing Smalltalk platform products were just as different. It was easy enough to “file out” basic Smalltalk language code that used the common kernel classes and move it to another Smalltalk product. But just like the C-implemented platforms, porting a highly interactive GUI application among Smalltalk platforms required a major rewrite. The core problem wasn’t the reflective manner in which Smalltalk programs were defined (although I strongly agree with Gilad that this is undesirable). The problem was that all the platform services that built upon the core language and kernel classes—the things that make a platform a platform—were different. By 1995, there were at least three major and several niche Smalltalk platforms competing in the client application platform space. But by then there was already a client platform winner—it was Microsoft Windows.
Business Model
Smalltalk vendors had the quaint belief in the notion of “build a better mousetrap and the world will beat a path to your door”. Since they had built a vastly better mousetrap, they thought they might charge money for it.
…Indeed, some vendors charged not per-developer-seat, but per deployed instance of the software. Greedy algorithms are often suboptimal, and this approach was greedier and less optimal than most.
This may be an accurate description of the business model of ParcPlace Systems, but it isn’t a fair characterization of the other Smalltalk vendors. In particular, Digitalk started with a $99 Smalltalk product for IBM PCs and between 1985 and 1990 built a reasonable small business around sub $500 Smalltalk products for PCs and Macs. Then, with VC funding, it transitioned to a fairly standard enterprise focused $2000+ per seat plus professional services business model. Digitalk never had deployment fees. I don’t believe that IBM did either.
The enterprise business model worked well for the major Smalltalk vendors—but it became a trap. When you acquire such customers you have to respond to their needs. And their immediate needs were building those fat-client applications. I remember with dismay the day at Digitalk when I realized that our customers didn’t really care about Smalltalk, or object-based programming and design, or live programming, or any of the unique technologies Smalltalk brought to the table. They just wanted to replace those green-screens with something that looked like a Windows application. They bought our products because our (quite successful) enterprise sales group convinced them that Smalltalk was necessary to build such applications.
Supporting those customer expectations diverted engineering resources to visual designers and box & stick visual programming tools rather than important down-stream issues such as team collaborative development, versioning, and deployment. Yes, we knew about these issues and had plans to address them, but most resources went to the immediate customer GUI issues. So much so that ParcPlace and Digitalk were blindsided and totally unprepared to compete when their leading enterprise customers pivoted to web-based thin-clients.
Performance
Smalltalk execution performance had been a major issue in the 1980s. But by the mid 1990s every major commercial Smalltalk had a JIT-based virtual machine and a multi-generation garbage collector. By the time “Strongtalk applied Self’s technology to Smalltalk,” Smalltalk performance was already practical.
While those of us working on Smalltalk VMs loved to chase C++ performance, our actual competition was PowerBuilder, Visual Basic, and occasionally Delphi. All the major Smalltalk VMs had much better execution performance than any of those. Microsoft once even made a bid to acquire Digitalk, even though they had no interest in Smalltalk. They just wanted to repurpose the Smalltalk/V VM technology to make Visual Basic faster.
But as Gilad points out, raw speed is seldom an issue. Particularly for the fat client UIs that were the focus of most commercial Smalltalk customers. Smalltalk VMs also had much better performance than the other dynamic languages that emerged and gained some popularity during the 1990s. Perl, Python, Ruby, PHP all had, and as far as I know still have, much poorer execution performance than 1995 Smalltalks running on comparable hardware.
Memory usage was a bigger issue. It was expensive for customers to have to double the memory in PCs to effectively run commercial Smalltalk. But Moore’s law quickly overcame that issue.
It’s also worth dwelling on the fact that raw speed is often much less relevant than people think. Java was introduced as a client technology (anyone remember applets?). The vision was programs running in web pages. Alas, Java was a terrible client technology. In contrast, even a Squeak interpreter, let alone Strongtalk, had much better start up times than Java, and better interactive response as well. It also had much smaller footprint. It was a much better basis for performant client software than Java. The implications are staggering.
Would Squeak (or any mid-1990s version of Smalltalk) have really fared better than Java in the browser? You can try it yourself by running a 1998 version of Squeak right now in your browser: https://squeak.js.org/demo/simple.html. Is this what web developers needed at that time?
Java’s problem as a web client was that it wanted to be its own platform. Java wasn’t well integrated into the HTML-based architecture of web browsers. Instead, Java treated the browser as simply another processor to host the Sun-controlled “write once, run [the same] everywhere” Java application platform. Its goal wasn’t to enhance native browser technologies—its goal was to replace them.
Interaction with the Outside World
Because of their platform heritage, commercial Smalltalk products had similar problems to those Java had. There was an “impedance mismatch” between the Smalltalk way of doing things and the dominant client platforms such as Microsoft Windows. The non-Smalltalk-80 derived platforms (Digitalk and OTI/IBM) did a better job than ParcPlace Systems at smoothing that mismatch.
Windowing. Smalltalk was the birthplace of windowing. Ironically, Smalltalks continued to run on top of their own idiosyncratic window systems, locked inside a single OS window.
Strongtalk addressed this too; occasionally, so did others, but the main efforts remained focused on their own isolated world, graphically as in every other way.
This describes ParcPlace’s VisualWorks, but not Digitalk’s Visual Smalltalk or IBM’s VisualAge Smalltalk. Applications written for those Smalltalks used native platform windows and widgets, just like other languages. Each Smalltalk vendor supplied their own higher-level GUI frameworks. Those frameworks were different from each other and from frameworks used with other languages. Digitalk’s early Smalltalk products for MSDOS supplied their own windowing systems, but Digitalk’s Windows and OS/2 products always used the native windows and graphics.
Source control. The lack of a conventional syntax meant that Smalltalk code could not be managed with conventional source control systems. Instead, there were custom tools. Some were great – but they were very expensive.
Both IBM and Digitalk bundled source control systems into their enterprise Smalltalk products, which were competitively priced with other enterprise development platforms. OTI/IBM’s Envy source control system successfully built upon Smalltalk’s traditional reflective syntax and stored code in a multiuser object database. OTI also sold Envy for ParcPlace’s VisualWorks but lacked the pricing advantage of bundling. Digitalk’s Team/V unobtrusively introduced a non-reflective syntax and versioned source code using RCS. Team/V could forward- and backward-migrate versions of Smalltalk “modules” within a running virtual image.
Deployment. Smalltalk made it very difficult to deploy an application separate from the programming environment.
As Gilad describes, extracting an application from its development environment could be tricky. But both Team/V and Envy included tools to help developers do this. Team/V supported the deployment of applications as Digitalk Smalltalk Link Libraries (SLLs), which were separable virtual image segments that could be dynamically loaded. Envy, which originally was designed to generate ROM-based embedded applications, had rich tools for tree-shaking a deployable application from a development image.
Gilad also mentioned that “unprotected IP” was a deployment concern that hindered Smalltalk acceptance. This is presumably because even if source code wasn’t deployed with an application it was quite easy to decompile a Smalltalk bytecoded method back into human readable source code. Indeed, potential enterprise customers would occasionally raise that as a concern. But it was usually raised as a “what about” issue. Out of hundreds of enterprise customers we dealt with at Digitalk, I can’t recall a single instance where unprotected IP killed a deal. If it had been, we would have done something about it, as we knew of various techniques that could be used to obfuscate Smalltalk code.
“So why didn’t Smalltalk take over the world?”
Smalltalk did something more important than take over the world—it defined the shape of the world! Alan Kay’s Dynabook vision of personal computing didn’t come with a detailed design document describing all of its features and how to create it. To make the vision concrete, real artifacts had to be invented.
Smalltalk was the software foundation for “inventing the future” of the Dynabook. The Smalltalk programming language went through at least five major revisions at Xerox PARC and evolved into the software platform of the prototype Dynabook. Using the Smalltalk platform, the fundamental concepts of a personal graphic user interface were first explored and refined. Those initial concepts were widely copied and further refined in the market resulting in the systems we still use today. Smalltalk was also the vector by which the concepts of object-oriented programming and design were introduced to a world-wide programming community. Those ideas dominated for at least 25 years. From the perspective of its original purpose, Smalltalk was a phenomenal success.
But, I think Gilad’s question really is: Why didn’t the Smalltalk programming language become one of the most widely used languages? Why isn’t it in the top 10 of the Tiobe Index?
Gilad’s entire article is essentially about answering that question, and he summarizes his conclusions as:
With 20/20 hindsight, we can see that from the pointy-headed boss perspective, the Smalltalk value proposition was:
Pay a lot of money to be locked in to slow software that exposes your IP, looks weird on screen and cannot interact well with anything else; it is much easier to maintain and develop though!
On top of that, a fair amount of bad luck.
There are elements of truth in all of those observations, but I think they miss what really happened with the rise and fall of commercial Smalltalk:
Smalltalk wasn’t just a language; it was a complete personal computing platform. But by the time affordable personal computing hardware could run the Smalltalk platform, other less demanding platforms had already dominated the personal computing ecosystem.
Xerox Smalltalk was built for experimentation. It was never hardened for wide use or reliable application deployment.
Adapting Smalltalk for production use required large engineering teams and in the late 1980s no deep-pocketed large companies were willing to make that investment. Other than IBM and HP, the large computing companies that could have helped ready Smalltalk for production use took other paths.
Small companies that wanted to harden Smalltalk needed to find a business model that would support that large engineering effort. The obvious one was the enterprise application development market.
To break into that market required providing a solution to an important customer problem. The problem the companies latched on to was enterprise migration from green-screens to more modern looking fat-client applications.
Several companies had considerable success in taking Smalltalk to the enterprise market. But these were demanding and fickle customers and the Smalltalk companies became hyper focused on fulfilling their fat-client commitments.
With that focus, the Smalltalk companies were completely unprepared in 1996 to deal with the sudden emergence of the web browser platform and its use within enterprises. At the same time, Sun, a much larger company than any of the Smalltalk-technology driven companies, spent vast sums to promote Java as the solution for both web and desktop client applications.
Enterprises diverted their development tool purchases from Smalltalk companies to new businesses who promised web and/or Java based solutions.
Smalltalk revenues rapidly declined and the Smalltalk focused companies failed.
Technologies that have failed in the marketplace seldom get revived.
Smalltalk still has its niche uses and devotees. Sometimes interesting new technologies emerge from that community. Gilad’s Newspeak is a quite interesting rethinking of the language layer of traditional Smalltalk.
Gilad mentioned Smalltalk’s bad luck. But I think it had better than average luck compared to most other attempts to industrialize innovative programming technologies. It takes incredible luck and timing to become one of the world’s most widely used programming languages. It’s such a rare event that no language designer should start out with that expectation. You really should have some other motivation to justify your effort.
But my question for Newspeak and any similar efforts is: What is your goal? What drives the design? What impact do you want to have?
Smalltalk wasn’t created to rule the software world, it was created to enable the invention of a new software world. I want to hear about compelling visions for the next software world and what we will need to build it. Could Newspeak have such a role?
Our HOPL paper is done and submitted to the ACM for June 2020 publication in PACMPL (Proceedings of the ACM on Programming Languages) and presentation at the HOPL 4 conference in June 2021. PACMPL is an open access journal so there isn’t a paywall preventing people from reading our paper. The official version of the paper is at https://dl.acm.org/doi/abs/10.1145/3386327. A version with minor corrections is accessible as https://www.wirfs-brock.com/allen/jshopl.pdf or https://zenodo.org/record/4960086. But before you run off and start reading this 190-page “paper” I want to talk a bit about HOPL.
The History of Programming Languages Conferences
HOPL is a unique conference and the foremost conference relating to the history of programming languages. HOPL-IV will be only the 4th HOPL. Previous HOPLs occurred in 1978, 1993, and 2007. The History of HOPL web page provides an overview of the conference’s history and which languages were covered at each of the three previous HOPLs. HOPL papers can be quite long. As the HOPL-IV call for papers says, “Because of the complex nature of the history of programming languages, there is no upper bound on the length of submitted papers—authors should strive for completeness.” HOPL papers are often authored by the original designers of an important language or individuals who have made significant contributions to the evolution of a language.
As the HOPL-IV call for papers describes, writing a HOPL paper is an arduous multi-year process. Initial submissions were due in September 2018 and reviewed by the program committee. For papers that made it through that review, the second major review draft was due September 2019. The final “camera ready” manuscripts were due March 13, 2020. Along the way, each paper received extensive reviews from members of the program committee and each paper was closely monitored by one or more program committee “shepherds” who worked very closely with the authors. One of the challenges for most of the authors was to learn what it meant to write a history paper rather than a traditional technical paper. Authors were encouraged to learn to think and write like a professional historian.
I’ve long been a fan of HOPL and have read most of the papers from the first three HOPLs. But I’d never actually attended one. I first heard about HOPL-IV on July 7, 2017 when I received an invitation from Guy Steele and Richard Gabriel to serve on the program committee. I immediately checked whether PC members could submit and because the answer was yes, I accepted. I knew that JavaScript needed to be included in a HOPL and that I probably was best situated to write it. But my direct experience with JS only dates to 2007 so I knew I would need Brendan Eich’s input in order to cover the early history of the language and he agreed to sign on as coauthor. My initial outline for the paper is dated July 20, 2017 and was titled “JavaScript: The First 25 Years” (we decided to cut it down to the first 20 years after the first round of reviews). The outline was seven pages long. I hadn’t looked at it since sometime in 2018 but looking at it today, I found it remarkably close to what is in the final paper. I knew the paper was going to be long. But I never thought it would end up at 190 pages. Many thanks to Richard Gabriel for repeatedly saying “don’t worry about the length.”
There is a lot I have to say about gathering primary source materials (like a real historian) but I’m going to save that for another post in a few days. So, if you’re interested in the history of JavaScript, start reading!
Our HOPL paper is done—all 190 pages of it. The preprint will be posted this week. In the meantime, here’s a little teaser.
JavaScript: The First 20 Years
By Allen Wirfs-Brock and Brendan Eich
Introduction
In 2020, the World Wide Web is ubiquitous with over a billion websites accessible from billions of Web-connected devices. Each of those devices runs a Web browser or similar program which is able to process and display pages from those sites. The majority of those pages embed or load source code written in the JavaScript programming language. In 2020, JavaScript is arguably the world’s most broadly deployed programming language. According to a Stack Overflow [2018] survey it is used by 71.5% of professional developers, making it the world’s most widely used programming language.
This paper primarily tells the story of the creation, design, and evolution of the JavaScript language over the period of 1995–2015. But the story is not only about the technical details of the language. It is also the story of how people and organizations competed and collaborated to shape the JavaScript language which dominates the Web of 2020.
This is a long and complicated story. To make it more approachable, this paper is divided into four major parts—each of which covers a major phase of JavaScript’s development and evolution. Between each of the parts there is a short interlude that provides context on how software developers were reacting to and using JavaScript.
In 1995, the Web and Web browsers were new technologies bursting onto the world, and Netscape Communications Corporation was leading Web browser development. JavaScript was initially designed and implemented in May 1995 at Netscape by Brendan Eich, one of the authors of this paper. It was intended to be a simple, easy to use, dynamic language that enabled snippets of code to be included in the definitions of Web pages. The code snippets were interpreted by a browser as it rendered the page, enabling the page to dynamically customize its presentation and respond to user interactions.
Part 1, The Origins of JavaScript, is about the creation and early evolution of JavaScript. It examines the motivations and trade-offs that went into the development of the first version of the JavaScript language at Netscape. Because of its name, JavaScript is often confused with the Java programming language. Part 1 explains the process of naming the language, the envisioned relationship between the two languages, and what happened instead. It includes an overview of the original features of the language and the design decisions that motivated them. Part 1 also traces the early evolution of the language through its first few years at Netscape and other companies.
A cornerstone of the Web is that it is based upon non-proprietary open technologies. Anybody should be able to create a Web page that can be hosted by a variety of Web servers from different vendors and accessed by a variety of browsers. A common specification facilitates interoperability among independent implementations. From its earliest days it was understood that JavaScript would need some form of standard specification. Within its first year Web developers were encountering interoperability issues between Netscape’s JavaScript and Microsoft’s reverse-engineered implementation. In 1996, the standardization process for JavaScript was begun under the auspices of the Ecma International standards organization. The first official standard specification for the language was issued in 1997 under the name “ECMAScript”.
Two additional revised and enhanced editions, largely based upon Netscape’s evolution of the language, were issued by the end of 1999.
Part 2, Creating a Standard, examines how the JavaScript standardization effort was initiated, how the specifications were created, who contributed to the effort, and how decisions were made.
By the year 2000, JavaScript was widely used on the Web but Netscape was in rapid decline and Eich had moved on to other projects. Who would lead the evolution of JavaScript into the future? In the absence of either a corporate or individual “Benevolent Dictator for Life,” the responsibility for evolving JavaScript fell upon the ECMAScript standards committee. This transfer of design responsibility did not go smoothly. There was a decade-long period of false starts, standardization hiatuses, and misdirected efforts as the ECMAScript committee tried to find its own path forward evolving the language. All the while, actual usage of JavaScript rapidly grew, often using implementation-specific extensions. This created a huge legacy of unmaintained JavaScript-dependent Web pages and revealed new interoperability issues. Web developers began to create complex client-side JavaScript Web applications and were asking for standardized language enhancements to support them.
Part 3, Failed Reformations, examines the unsuccessful attempts to revise the language, the resulting turmoil within the standards committee, and how that turmoil was ultimately resolved.
In 2008 the standards committee restored harmonious operations and was able to create a modestly enhanced edition of the standard that was published in 2009. With that success, the standards committee was finally ready to successfully undertake the task of compatibly modernizing the language. Over the course of seven years the committee developed major enhancements to the language and its specification. The result, known as ECMAScript 2015, is the foundation for the ongoing evolution of JavaScript. After completion of the 2015 release, the committee again modified its processes to enable faster incremental releases and now regularly completes revisions on a yearly schedule.
Part 4, Modernizing JavaScript, is the story of the people and processes that were used to create both the 2009 and 2015 editions of the ECMAScript standard. It covers the goals for each edition and how they addressed evolving needs of the JavaScript development community. This part examines the significant foundational changes made to the language in each edition and important new features that were added to the language.
Wherever possible, the source materials for this paper are contemporaneous primary documents. Fortunately, these exist in abundance. The authors have ensured that nearly all of the primary documents are freely and easily accessible on the Web from reliable archives using URLs included in the references. The primary document sources were supplemented with interviews and personal communications with some of the people who were directly involved in the story. Both authors were significant participants in many events covered by this paper. Their recollections are treated similarly to those of the third-party informants.
The complete twenty-year story of JavaScript is long and so is this paper. It involves hundreds of distinct events and dozens of individuals and organizations. Appendices A through E are provided to help the reader navigate these details. Appendices A and B provide annotated lists of the people and organizations that appear in the story. Appendix C is a glossary that includes terms which are unique to JavaScript, terms used with meanings that differ from common usage within the computing community in 2020, and terms whose meaning might change or become unfamiliar to future readers. The first use within this paper of a glossary term is usually italicized and marked with a “g” superscript. Appendix D defines abbreviations that a reader will encounter. Appendix E contains four detailed timelines of events, one for each of the four parts of the paper.
Dave Winer recently blogged about his initial thoughts after dipping his toes into using some modern JavaScript features. He ends by suggesting that I might have some explanations and stories about the features he is using. I’ve given talks that cover some of this, and normally I might just respond via some terse tweets. But Dave believes that blog posts should be responded to by blog posts, so I’m taking a shot at blogging back to him.
The JavaScript language is defined by a specification maintained by the Ecma International standards organization. Because of trademark issues, dating back to 1996, the specification could not use the name JavaScript. So they coined the name ECMAScript instead. Contrary to some myths, ECMAScript and JavaScript are not different languages. “ECMAScript” is simply the name used within the specification where it would really like to say “JavaScript”.
Standards organizations like to identify documents using numbers. The ECMAScript specification’s number is ECMA-262. Each time an update to the specification is approved as “the standard” a new edition of ECMA-262 is released. Editions are sequentially numbered. Dave said “ES6 is the newest version of JavaScript”. So, what is “ES6”? ES6 is colloquial shorthand for “ECMA-262, Edition 6”. ES6 was published as a standard in 2015. The actual title of the ES6 specification is ECMAScript 2015 Language Specification and the preferred shorthand name is ECMAScript 2015 or just ES2015.
So, why the year-based designation? The 6th edition of ECMA-262 took a long time to develop, arguably 15 years. As ES6 was approaching publication, TC39 (the Technical Committee within Ecma International that develops the ECMAScript specifications) already knew that it wanted to change its process in a way that enabled yearly maintenance updates. That meant a new edition of ECMA-262 every year, each with a new edition number. After a few years we would be talking about ES6, ES7, ES8, ES9, ES10, ES11, etc. Those numbers quickly lose any context for people who aren’t deeply involved in the standards development process. Who would know whether the current standard is ES7, ES8, or ES9? Was some feature introduced in ES6 or ES7? TC39 couldn’t eliminate the actual edition numbers (standards organizations love their document numbers) but it could change the document title. We decided that TC39 would incorporate the year of release into the document’s title and encourage people to use the year when referring to a specific edition. So, the “newest version of JavaScript” is ECMA-262, Edition 8 and its title is ECMAScript 2017 Language Specification. Some people still refer to it as ES8, but the preferred shorthand name is ECMAScript 2017 or just ES2017.
But saying “ECMAScript” or mentioning specific ECMAScript editions is confusing to many people and probably is unnecessary for most situations. The common name of the language really is JavaScript and unless you are talking about the actual specification document you probably don’t need to utter “ECMAScript”. But you may need to distinguish between old versions of JavaScript and what is implemented by newer, modern implementations. The big change in the language and its specification occurred with ES2015. The subsequent editions make relatively small incremental extensions and corrections to what was standardized in 2015. So, here is my recommendation. Generally you should just say “JavaScript” meaning the language as it is used in browsers, Node.js, and other environments. If you need to specifically talk about JavaScript implementations that are based upon ECMAScript specifications published prior to ES2015 say “legacy JavaScript”. If you need to specifically talk about JavaScript that includes ES2015 (or later) features say “modern JavaScript”.
Except for modules, almost all of ES2015-ES2017 is implemented in the current versions of all the major evergreen browsers (Chrome, Firefox, Safari, Edge), as well as in current versions of Node.js. If you need to write code that will run on non-evergreen browsers such as IE, you can use Babel to pre-compile modern JavaScript code into legacy JavaScript code.
Module support exists in all of the evergreen browsers, but some of them still require setting a flag to use it. Native ECMAScript module support will hopefully ship in Node.js in spring 2018. In the meantime, @std/esm enables the use of ECMAScript modules in current Node releases.
The main motivation for block scoped declarations was to eliminate the “closure in loop” bug hazard that many JavaScript programmers have encountered when they set event handlers within a loop. The problem is that var declarations look like they should be local to the loop body but in fact are hoisted to the top of the current function, and hence each event handler defined in the loop uses the last value assigned to such variables.
Replacing var with let gives each iteration of the loop a distinct variable binding, so each event handler captures a different binding holding the value that was current when the handler was installed:
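Here is a minimal sketch of both versions, assuming a browser DOM and a hypothetical list of buttons:

    // Hazard: with var, every handler shares a single function-scoped
    // binding of i, so each click logs the final value of the loop index.
    function installBuggyHandlers(buttons) {
      for (var i = 0; i < buttons.length; i++) {
        buttons[i].addEventListener('click', function () {
          console.log('clicked button ' + i);  // always logs buttons.length
        });
      }
    }

    // Fix: with let, each loop iteration gets its own binding of i, so each
    // handler logs the index that was current when it was installed.
    function installFixedHandlers(buttons) {
      for (let i = 0; i < buttons.length; i++) {
        buttons[i].addEventListener('click', function () {
          console.log('clicked button ' + i);  // logs 0, 1, 2, ...
        });
      }
    }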
The hardest part about adding block scoped declarations to ECMAScript was coming up with a rational set of rules for how the new declarations should interact with the already existing var declaration form. We could not change the semantics of var without breaking backwards compatibility, which is something we try to never do. But we didn’t want to introduce new WTF surprises in programs that use both var and let. Here are the basic rules we eventually arrived at:
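Roughly: var keeps its existing function-scoped, hoisted behavior; let and const are block scoped and may shadow an outer var; and declaring the same name with both var and let in the same scope is an error. The sketch below is illustrative only and reflects ES2015 semantics as specified, not the committee’s original working notes:

    function example() {
      // var keeps its pre-ES2015 behavior: function scoped and hoisted.
      var a = 1;

      // A let in an inner block may shadow an outer var; the shadowing is
      // confined to the block.
      {
        let a = 2;
        console.log(a);  // 2
      }
      console.log(a);    // 1

      // Declaring the same name with both var and let in a single scope is
      // a SyntaxError, as is a var that would hoist past a let of the same
      // name in an enclosing scope:
      //   var b; let b;        // SyntaxError
      //   let c; { var c; }    // SyntaxError
    }
    example();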
Most browsers, except for IE, had implemented const declarations (but without block scoping) starting in the early 2000s. Firefox implemented block scoped let declarations (though not with exactly the same semantics as ES2015) in 2006. By the time TC39 started seriously working on what ultimately became ES2015, the keywords const and let had become so ingrained in our minds that we didn’t really consider any other alternatives. I regret that. In retrospect, I think we should have used let in place of const for declaring immutable variable bindings because that is the most common use case. In fact, I’m pretty sure that many developers use let instead of const for variables they don’t intend to change, simply because let has fewer characters to type. If we had used let in place of const then perhaps var would have been adequate for the relatively rare cases where a mutable variable binding is needed. A language with only let and var would have been simpler than what we ended up with using const, let, and var.
One of the primary motivations for arrow functions was to eliminate another JavaScript bug hazard: the “wrong this” problem that occurs when you capture a function expression (for example, as an event handler) but forget that this used inside the function expression will not be the same value as this in the context where you created the function expression. Conciseness was a consideration in the design of arrow functions, but fixing the “wrong this” problem was the real driver.
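A minimal sketch of the hazard and the arrow-function fix, using a hypothetical counter object and DOM click handler:

    const counter = {
      count: 0,

      // Hazard: the function expression has its own `this`, so `this.count`
      // does not refer to the counter object when the handler runs.
      brokenListen(button) {
        button.addEventListener('click', function () {
          this.count++;  // `this` is the button element here, not counter
        });
      },

      // Fix: an arrow function has no `this` of its own; it uses the `this`
      // of the enclosing method call, which is the counter object.
      listen(button) {
        button.addEventListener('click', () => {
          this.count++;  // `this` is counter
        });
      }
    };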
I’ve heard several JS programmers comment that at first they didn’t like arrow functions but that the functions grew on them over time. Your mileage may vary. Here are a couple of good articles that address arrow function reluctance.
Actually, ES modules weren’t inspired by Node modules. But a lot of work went into making them feel familiar to people who were used to Node modules. In fact, ES modules are semantically more similar to the Pascal modules that Dave remembers than they are to Node modules. The big difference is that in the ES design (and Pascal modules) the interfaces between modules are statically defined, while in the Node modules design module interfaces are dynamically defined. With static module interfaces, the inter-dependencies between a set of modules are precisely defined by the source code prior to executing any code. With dynamic modules, the module interfaces cannot be fully understood without actually executing the code of the modules. Or stated another way, ES module interfaces are declaratively defined while Node module interfaces are imperatively defined. Static module systems better support the creation of ahead-of-time tools such as accurate module dependency linters or module linkers. Such tools for dynamic module interfaces usually depend upon heuristics that analyze modules as if they had static interfaces. That analysis can be wrong if the actual dynamic interface construction does things that the heuristics didn’t account for.
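To make the contrast concrete, here is a sketch of the same hypothetical two-file program written both ways (the file names and contents are purely illustrative):

    // --- ES modules: the interface is declared statically in the source ---
    // math.mjs
    export function square(x) { return x * x; }

    // main.mjs
    import { square } from './math.mjs';
    console.log(square(4));  // 16

    // --- Node (CommonJS) modules: the interface is built by running code ---
    // math.js
    exports.square = function (x) { return x * x; };

    // main.js
    const { square } = require('./math.js');
    console.log(square(4));  // 16

Because the ES imports and exports are part of the static structure of the program, a tool can compute the dependency graph without executing either file; the CommonJS version requires running math.js to discover what it exports.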
The work on the ES module design actually started before the first release of Node. There were early proposals for dynamic module interfaces that were more like what Node adopted. But TC39 made an early decision that declarative static module interfaces were the better design for the long term. There has been much controversy about this decision. Unfortunately, it has created issues for Node which have been difficult for them to resolve. If TC39 had anticipated the rapid adoption of Node and the long time it would take to finish “ES6”, we might have taken the dynamic module interface path. I’m glad we didn’t, and I think it is becoming clear that we made the right choice.
Strictly speaking, the legacy JavaScript language didn’t do async at all. It was host environments such as browsers and Node that defined the APIs that introduced async programming into JavaScript.
ES2015 needed to include promises because they were being rapidly adopted by the developer community (including by new browser APIs) and we wanted to avoid the problem of competing incompatible promise libraries or of a browser-defined promise API that didn’t take other host environments into consideration.
The real benefit of ES2015 promises is that they provide a foundation for better async abstractions that do bury more of the BS within the runtime. Async functions, introduced in ES2017, are the “better way” to do async. In the pipeline for the near future is async iteration, which further simplifies a common async use case.
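As an illustration, here is the same hypothetical operation written with bare ES2015 promises and with an ES2017 async function (fetch and the JSON shape are just placeholders):

    // With ES2015 promises: control flow is expressed as a .then chain.
    function getUserName(url) {
      return fetch(url)
        .then(response => response.json())
        .then(user => user.name)
        .catch(err => {
          console.error('request failed', err);
          throw err;
        });
    }

    // With an ES2017 async function: the same logic reads like ordinary
    // sequential code; await unwraps each promise and try/catch handles errors.
    async function getUserNameAsync(url) {
      try {
        const response = await fetch(url);
        const user = await response.json();
        return user.name;
      } catch (err) {
        console.error('request failed', err);
        throw err;
      }
    }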
Alan Kay famously said “The best way to predict the future is to invent it.” But how do we go about inventing a future that isn’t a simple linear extrapolation of the present?
Kay and his colleagues at Xerox PARC did exactly that over the course of the 1970s and early 1980s. They invented and prototyped the key concepts of the Personal Computing Era. Concepts that were then realized in commercial products over the subsequent two decades.
So, how was PARC so successful at “inventing the future”? Can that success be duplicated or perhaps applied at a smaller scale? I think it can. To see how, I decided to try to sketch out what happened at Xerox PARC as a pattern language.
Look Twenty Years Into the Future
If your time horizon is short you are doing product development or incremental research. That’s all right; it’s probably what most of us should be doing. But if you want to intentionally “invent the future” you need to choose a future sufficiently distant to allow time for your inventions to actually have an impact.
Extrapolate Technologies
What technologies will be available to us in twenty years? Start with the current and emerging technologies that already exist today. Which relevant technologies are likely to see exponential improvement over the next twenty years? What will they be like as they mature over that period? Assume that as the technical foundation for your future.
Focus on People
Think about how those technologies may affect people. What new human activities do they enable? Is there a human problem they may help solve? What role might those technologies have in everyday life? What could be the impact upon society as a whole?
Create a Vision
Based upon your technology and social extrapolations, create a clearly articulated vision of your desired future. It should be radically different from the present in some respects. If it isn’t, then invention won’t be required to achieve it.
Assemble a Team of Dreamers and Doers
Inventing a future requires a team with a mixture of skills. You need dreamers who are able to use their imagination to create and refine the core future vision. You also need doers who are able to take ill-defined dreams and turn them into realities using available technologies. You must have both and they must work closely together.
Prototype the Vision
Try to create a high fidelity functional approximation of your vision of the future. Use the best of today’s technology as stand-ins for your technology extrapolations. Remember that what is expensive and bulky today may be cheap and tiny in your future. If the exact technical combination you need doesn’t exist today, build it.
Live Within the Prototype
It’s not enough to just build a prototype of your envisioned future. You have to use the prototype as the means for experiencing that future. What works? What doesn’t? Use your experience with the prototype to iteratively refine the vision and the prototypes.
Make It Useful to You
You’re a person who hopes to live in this future, so prototype things that will be useful to you. You will know you are on to something when your prototype becomes an indispensable part of your life. If it isn’t there yet, keep iterating until it is.
Amaze the World
If you are successful in applying these patterns you will invent things that are truly amazing. Show those inventions to the world. Demonstrate that your vision of the future is both compelling and achievable. Inspire other people to work towards that same future. Build products and businesses if that is your inclination, but remember that inventing the future takes more than a single organization or project. The ultimate measure of your success will be your actual impact on the future.
The first ten or fifteen years of a computing era is a period of chaotic experimentation. Early product concepts rapidly evolve via both incremental and disruptive innovations. Radical ideas are tried. Some succeed and some fail. Survival of the fittest prevails. By mid-era, new stable norms should be established. But we can’t predict the exact details.