
Recently a friend of mine asked whether the open web still matters. His theory was that the open web, as described in the Mozilla Mission, no longer mattered because much of what we used to do using web browsers is now rapidly shifting to “apps”. Why worry about the open web if nobody is going to be using it?

To me, this is really a question about what we mean by “the web”. If by “the web” we are just referring to the current worldwide collection of information made available by http servers and accessed most commonly using desktop browsers, then maybe he’s right. While I use it all the time, I don’t think very much about the future of that web. Much about it will surely change over the next decade. The 1995-era technologies do not necessarily need to be protected and nourished. They aren’t all good.

What I think about is the rapidly emerging pervasive and ambient information ecology that we are living within. This includes every digital device we regularly interact with. It includes devices that provide access to information but also devices that collect information. Some devices are “mobile”, others are built into the physical infrastructure that surrounds us. It includes the sort of high production-value creative works that we see today “on the web” and still via pre-web media. But it also includes every trivial digital artifact that I create while going about my daily life and work.

Is this “the web”? I’m perfectly happy to call it that. It certainly encompasses the web as we know it today. But we need to be careful using that term to ensure that our thinking and actions aren’t overconstrained by our perception of yesterday’s “web”. This is why I like to tell people we are still in the very early stages of the next digital era. I believe that the web we have today is, at most, the Apple II or TRS-80 of this new era. If we are going to continue to use “the web” as a label then it needs to represent a 20+ year vision that transcends http and web browsers.

Technology generally evolves incrementally. Almost all of us spend almost all of our time working on things that are just “tactical” from the perspective of a twenty-year vision. We are responding to what is happening today and working for achievement and advantage over the next 1-3 years. I think that the shift from “websites” to “apps” that my friend mentioned is just one of these tactical technology evolutionary vectors, a point on the road to the future. The phenomenon isn’t necessarily any more or less important than other point-in-time alternatives such as Flash vs. HTML or iOS vs. Android. I think it would be a mistake to assume that “apps” are a fundamental shift. We’ll know better in five years.

While everybody has to be tactical, a long-term vision still has a vital role. A vision of a future that we yearn to achieve is an important influence upon our day-to-day tactical work. It’s the star that we steer by. A personal concern of mine is that we are severely lacking in this sort of long-term vision of “the web”. That’s why my plan for this year is to write more posts like “A Cloud on your Ceiling” that explore some of these longer-term questions. I encourage you to also spend some time thinking long term. What sort of digitally enhanced world do you want to be living in twenty years from now? What are you doing to help us get there?

(Photo by “mind_scratch”, Creative Commons Attribution License)

I’ve previously written that we are in the early stages of a new era of computing that I call “The Ambient Computing Era”.  If we are truly entering a new era then it is surely the case that the computers we will be using twenty or more years from now will exist in forms that are quite unlike the servers, desktop PCs, phones, and tablets we use today.  We can at best speculate or dream about what that world may be like.  But some of my recent readings about emerging technologies have inspired me to think about how things might evolve.

This week I learned about “WiGig”, which is WiFi operating on 60GHz radio frequencies. WiGig router chip sets already exist and support a theoretical throughput of 7Gbps. The catch is that 60GHz radio waves won’t penetrate walls or furniture. So if you want really high bandwidth wireless communications from something in your lap or on your sleeve to that wall-size display, you are probably going to want to hang a WiGig router on your ceiling. If your room is large or has a lot of furniture you may need several.

This got me thinking about what other sorts of intelligent devices we may be hanging on our ceilings. The first thing that came to mind was LED lighting. Until very recently, I was one of those people who would make jokes about assigning IP addresses to light bulbs. But recently I was at a friend’s house where I saw exactly that: network-addressable smart LED light bulbs. It turns out that a little intelligence is actually useful in producing optimal room light with LEDs, and when you have digital intelligence you really want to control it with something more sophisticated than a simple on/off switch. So get ready for lighting with IPv6 addresses. But they probably won’t be bulb shaped.

Both WiGig routers and networked LED room lighting are still too expensive for wide adoption, but like all solid state electronic devices we can expect their actual cost to approach zero over the next twenty or so years. So there we have at least two kinds of intelligent devices that we probably will have hanging from our ceilings. But will they really be separate devices? I could easily imagine a standardized ceiling panel, let’s say half a meter square, consisting of LED lighting, a WiGig router, and other room electronics. A standardized form factor would allow our homes and offices to be built (or updated) to include the infrastructure (power, external connectivity, physical mounting) that lets us easily service and evolve these devices. In honor of one of the most important web memes, I suggest that we call such a panel a “CAT”, or Ceiling Attached Technology.

So, what other functionality might be integrated on a CAT? Certainly we can expect sensors including cameras that allow the panel to “see” into the room. A 256 or 512 core computing cluster with several terabytes of local storage also seems very plausible.  Multiple CATs in the same or adjoining rooms would presumably participate in a mesh network that ultimately links to the rest of the digital world via high-speed wired or wireless “last mile” connections. Basically, our ceilings and walls could become what we think of today as “cloud” data centers.

What sort of computing would be taking place in those ceiling clouds? One possibility is that our entire digital footprint (applications, services, active digital archives) might migrate to and be cached in the CATs that are physically closest to us.  As we move about or from location to location, our digital footprint just follows us.  No need to make long latency round trips to massive data centers in eastern Oregon or contend for resources with millions of other active users.

Of course, there are tremendous technology challenges standing between what we have today and this vision. How do we maintain the integrity of our digital assets as they follow us around from CAT to CAT? How do we keep them secure and maintain our personal privacy? What programs get to migrate into our CATs? How do we make sure they’re not malicious? How do we keep our homes from becoming massive botnets? That’s why I think it’s important for some of us to start thinking about where this new computing era is heading and how we want to shape it. We can start inventing the ambient computing world just like Alan Kay and his colleagues at Xerox PARC started in the early 1970s with the vague concept of a “Dynabook” and went on to invent most of the foundational concepts that define personal computing.

If you find yourself thinking about “Post-PC Computing” keep in mind that the canonical computer twenty years from now will probably look nothing like a cell phone or tablet. It may look like a ceiling tile. I hope this warps your thinking.

(Photo by “Suicine”, Creative Commons Attribution License)

In my post, The Browser is a Transitional Technology, I wrote that I thought web browsers were really Personal Computing Era applications and that browsers, as such, were unlikely to continue to exist as we move deeper into the Ambient Computing Era. However, I expect browser technologies to have a key role in the Ambient Computing Era. In Why Mozilla, I talked about the inevitable emergence of a universal application platform for the Ambient Era and how open web technologies could serve that role. Last month I gave a talk where I tried to pull some of these ideas together:

In slides 14-19, I talked about how, when you remove the PC application facade from a modern browser, you essentially have an open web-based application platform that is appropriate for all classes of ambient computing devices.

Today Mozilla announced an embryonic project that is directed towards that goal. B2G (Boot to Gecko) is about showing that the open web application platform can be the primary platform for running native-grade applications. As the project page says:

Mozilla believes that the web can displace proprietary, single-vendor stacks for application development. To make open web technologies a better basis for future applications on mobile and desktop alike, we need to keep pushing the envelope of the web to include — and in places exceed — the capabilities of the competing stacks in question.

One of the first steps is to directly boot devices into running Gecko, Mozilla’s core browser engine.  Essentially the devices will boot directly into the browser platform, but without the baggage and overhead of a traditional PC based web browser.  This is essentially the vision of slide 17 of my presentation.  The “G” in B2G comes from the use of Gecko, but the project is really about the open web. Any other set of browser technologies could potentially be used in the same way.  As the project web site says: “We aren’t trying to have these native-grade apps just run on Firefox, we’re trying to have them run on the web.”

This project is just starting, so nobody yet knows all the details or how successful it will be. But, like all Mozilla projects, it will take place in the open and with an open invitation for your involvement.

Recently I’ve had some conversations with some colleagues about how Web IDL is used to specify the APIs that browsers support for web applications.  I think our discussions raised some interesting questions about  the fundamental nature of the web app platform so I wanted to raise those same questions here.

Basically, is the browser web app platform an application framework or is it really something that is more like an operating system? Stated more concretely, is the web app platform most similar to the Java or .Net platforms or is it more similar to Linux or Windows? In the long term this is probably a very important question. It makes a difference in the sort of capabilities that can be made available to a web app and also in the integrity expectations concerning the underlying platform.

In a framework, client code directly integrates with and extends the platform code. This allows client code to do very powerful things, but the cost is that client code can do things that result in platform-level errors or even failures. Modern frameworks are pretty much all defined in terms of object-oriented concepts because those concepts permit the client extensibility that is the primary motivation for building a framework. Frameworks generally have to trust their clients because they frequently have to pass control into client code and there is no way they can anticipate or validate everything client code might do. Frameworks are great from the perspective of what they allow developers to create; they are less great in terms of robustness and integrity.

In an operating system, client code almost never directly integrates with the platform code. Client code is limited to a fixed set of actions that can be requested via a fairly simple system call interface. In the absence of platform bugs, client code can’t cause platform-level errors or crash the platform because the platform carefully validates every aspect of every system call request and never directly executes untrusted client code. Operating systems don’t trust their clients. Successful operating system APIs are pretty much all expressed in terms of procedure calls that only accept scalars and simple structs as arguments because such arguments can be fully validated before the platform uses them to perform any action. Operating systems are great from a robustness and integrity perspective but they don’t offer much direct help to clients that need to do complex things.
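To make the contrast concrete, here is a toy JavaScript sketch of the two styles; the function names are purely illustrative:

//framework style: the platform runs arbitrary client code it cannot validate
function frameworkSort(items, comparator) {
   //comparator is client code; if it throws or misbehaves, the failure
   //surfaces inside the platform's own call stack
   return items.slice().sort(comparator);
}

//OS style: the client requests an action using simple, fully validatable data
function osStyleSort(items, order) {
   if (order !== "ascending" && order !== "descending")
      throw new Error("invalid order: " + order); //validate before acting
   var sorted = items.slice().sort();
   return order === "ascending" ? sorted : sorted.reverse();
}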

Historically, there have been various attempts to create operating systems that use framework-style object-oriented client interfaces. All the major attempts at doing this that I am aware of have been dismal failures. Taligent and Windows Longhorn are two notorious examples. The problem seems to be that the power and extensibility that come with framework-style interfaces are in direct conflict with the robustness and integrity requirements of an OS. It is very difficult and perhaps impossible to find a compromise that provides sufficient power, extensibility, robustness, and integrity all at the same time. Systems like Taligent and Longhorn also have had significant durability issues because one of the ways they tried to balance power and integrity was by describing their APIs in terms of static recursive object-oriented typing, which is very hard to evolve in a backwards-compatible fashion over multiple versions.

This begins to sound a lot like the way Web IDL is being used to describe web app APIs.  It has framework style APIs but browser implementers would like to have OS style system integrity and robustness.

One way OSes have addressed this issue is by using a kernel. The kernel is a small part of the overall platform that is very robust, has high integrity, and exposes very stable APIs. The majority of the platform is outside the kernel. In general, bugs or misuse of non-kernel code may crash an application but can’t crash the entire system. One way to think about large application frameworks like Java and .Net is that they are the low-integrity but high-leverage outermost layer of such a kernelized design.

So what is the web app platform? Is it a framework or is it an OS? I think it needs to be designed mostly like a framework. However, there probably is a kernel of functionality that needs to be treated more like an OS. That kernel is not yet well identified. It probably needs to be. Otherwise, the designers of the web application platform run the risk of going down the same dead-end paths that were taken by the designers of “object-oriented” OSes like Taligent and Longhorn.

(Photo Attribution Some rights reserved by Pink Sherbet Photography)

In my last couple posts I introduced the idea of using Mirrors for JavaScript reflection and took a first look at the introspection interfaces of my jsmirrors prototype. In this post I’m going to look at the other reflection interfaces in jsmirrors and how they are mixed together to provide various levels of reflection privilege.

When building this prototype I knew that I wanted to have a number of separable sets of reflection capabilities that I could mix and match in various ways. I also knew that the implementation was likely to change several times as I experimented with the prototype. I wanted to make sure that as I evolved the implementation I could keep track of what belonged in each separable piece. The way I ultimately accomplished this was by maintaining a file of interface definitions that is separate from the actual code that implements jsmirrors. The interface specifications are contained in the file mirrorsInterfaceSpec.js. I look at the interface file when I need to remind myself how to use one of the specific reflection interfaces and as a specification as I make changes to the implementation. Also, whenever I perform a major refactoring of the implementation I check it against the interface specification. Here is the interface specification of the basic object introspection interface that I demonstrated in the Looking into Mirrors post:

//Mirror for introspect upon all objects
var objectMirrorInterface = extendsInterface(objectBasicMirrorInterface, {
   prototype:  getAccess(objectMirrorInterface|null),
     //return a mirror on the reflected object's [[Prototype]]
   extensible: getAccess(Boolean),
     //return true if the reflected object is extensible
   ownProperties: getAccess(array(propertyMirrorInterface)),
     //return an array containing property mirrors
     //on the reflected object's own properties
   ownPropertyNames: getAccess(array(String)),
     //return an array containing the string names
     //of the reflected object's own properties
   keys: getAccess(array(String)),
     //return an array containing the string names of the
     //reflected object's enumerable own properties
   enumerationOrder: getAccess(array(String)),
     //return an array containing the string names of the
     //reflected object's enumerable own and inherited properties
   prop: method({name:String}, returns(propertyMirrorInterface|undefined)),
     //return a mirror on an own property
   lookup: method({name:String},returns(propertyMirrorInterface|undefined)),
     //return mirror on the result of a property lookup. It may be inherited 
   has: method({name:String}, returns(Boolean)),
     //return true if the reflected object has a property named 'name'
   hasOwn: method({name:String}, returns(Boolean)),
     //return true if the reflected object has an own property named 'name'
   specialClass: getAccess(String)
    //return the value of the reflected object's [[Class]] internal property
});

I used JavaScript object literals and a few helper functions to describe these interfaces. Here is the definition of the helper functions used for this interface:

function getAccess(returnInterface) {}; //a "get-able" property
function method(arguments,returnInterface){}; //a method property
function extendsInterface(supers,members) {}; //an interface adding to supers
function returns(returnInterface) {};   //return value of a method
function array(elementInterface) {}; //array elements all support an interface

The JavaScript code of the interface definitions doesn’t actually do anything, but I find that being able to parse the interface specification using JavaScript forces me to apply some useful structuring discipline that I might skip if I was just writing prose descriptions. Plus I think it is going to be quite useful to have these interface specifications in a form that is easily processed. For example, now that I have an initial implementation of jsmirrors, I may use it to create a little tool that can reflect upon the objects created by the interface specifications and perform useful tasks, such as generating unit test stubs for implementations of the interfaces. I may also use reflection over the interfaces to directly validate the completeness of my implementations.

In factoring the jsmirrors functionality for reflecting upon objects I divided it into three primary interfaces. objectMirrorInterface, shown above, is the basic introspection interface. objectMutationMirrorInterface allows changes to be made to a reflected object, such as adding or removing properties or changing the object’s prototype. objectEvalMirrorInterface allows various forms of evaluation upon reflected objects, such as doing “puts” and “gets” (which may invoke accessor property functions) to access property values of a reflected object or to invoke a method property. There are also corresponding introspection, mutation, and evaluation interfaces for function object mirrors and also for property mirrors.

In the actual implementation, these interfaces are combined in various ways to produce five different kinds of concrete mirrors on local objects. These various kinds of mirrors are accessible via factory functions that are accessed as properties of the Mirrors module object. The five local object mirror factories are:

  • Mirrors.introspect – supports only introspection using objectMirrorInterface.
  • Mirrors.evaluation – supports only evaluation using objectEvalMirrorInterface.
  • Mirrors.introspectEval – supports introspection and evaluation using objectMirrorInterface and objectEvalMirrorInterface.
  • Mirrors.mutate – supports introspection and mutation using objectMirrorInterface and objectMutationMirrorInterface.
  • Mirrors.fullLocal – supports introspection, mutation, and evaluation using all three interfaces.
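As a hypothetical usage sketch (assuming only the introspection methods shown earlier), the difference in granted capability looks like this:

var obj = {x: 1};

var view = Mirrors.introspect(obj); //introspection only
console.log(view.hasOwn("x"));      //output: true
//view provides no way to add, remove, or change properties

var full = Mirrors.fullLocal(obj);  //introspection, mutation, and evaluation
//full exposes the same introspection methods plus the mutation and
//evaluation capabilities of the other two interfaces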

I demonstrated the use of Mirrors.introspect in my previous post. The other mirror factories are used in exactly the same manner and, except for Mirrors.evaluation, could be used to run all the same examples. However, the other factories expose additional functionality that isn’t available using Mirrors.introspect. Take a look at the actual interface specification in mirrorsInterfaceSpec.js to see which capabilities are provided by the mirror objects produced by each of these factories.

The reason for providing multiple mirror factories is to demonstrate that by using mirror-based reflection we can decide exactly how much reflection capability we will make available to any specific client or tool. We might allow one tool to use the full range of reflective interfaces. For another we may only expose introspection or evaluation capabilities or perhaps introspection and mutation capabilities without the ability to actually do reflective evaluation. However, so far, I’ve only shown mirrors that know how to reflect upon local objects that exist in the same heap as the mirror objects. In my next post I’ll look at how to use the same interfaces to reflect upon non-local objects that might be encoded in a file or exist in a remote environment.

(Photo by “Metro Centric”, Creative Commons Attribution License)

In my last post I introduced the programming language concept of Mirrors and mentioned jsmirrors, the prototype I’ve been working on to explore using mirrors to support reflection within JavaScript.  In this post I’m going to take a deeper look into jsmirrors itself.  I had three goals for my first iteration of jsmirrors:

  1. Define basic mirror-based interfaces for reflection upon JavaScript objects and properties.
  2. Demonstrate that jsmirrors  can support different levels of reflection privilege.
  3. Demonstrate that the jsmirrors interface can work with both local and external objects.

In this post I’m going to concentrate on showing details of the basic interfaces I designed to meet the first goal. In subsequent posts I’ll talk about the other two goals.

The actual implementation of jsmirrors is contained in the file mirrors.js.  Note that jsmirrors requires an ECMAScript 5 compatible JavaScript implementation. The jsmirrors implementation is structured using the module pattern and when loaded defines a single global named Mirrors whose properties are factory functions that can be used to create various kinds of mirror objects. The most basic mirror factory is called introspect and creates a mirror on a local object that only supports introspection (examination without modification):

//create a test object
var obj = {a:1, get b() {return "b value"}, c: undefined};
obj.c = {back: obj};  //make a circular reference to obj

//create an introspection mirror on obj
var m=Mirrors.introspect(obj);
console.log(m);   //output:  "Object Introspection Mirror #0"

In the above example, lines 2-3 create a couple of test objects and line 6 is creating an introspection mirror on one of them. We see from the output of line 7 how such mirror objects identify themselves using the toString method. Once we have such a mirror, we can use it to examine the structure and state of its reflected object:

console.log(m.ownPropertyNames) ;  //output:  "a,b,c"
console.log(m.extensible); //output:  true
console.log(m.has("toString")); //output:  true
console.log(m.hasOwn("toString")); //output:  false
var p=m.prototype;
console.log(p); //output:  "Object Introspection Mirror #3"
console.log(p.hasOwn("toString")); //output:  true

Lines 8-11 query various characteristics of the object reflected by the mirror m, such as a list of its own property names, whether or not additional properties may be added, and whether it locally defines or inherits a specific property. Line 12 queries for the object that is the prototype of the reflected object. Note from line 13 that the value returned is also an introspection mirror. This is one of the important characteristics of this style of mirror interface. When an object value is accessed, a mirror on the object is always returned rather than the actual object. You may be curious why the mirror p is “Mirror #3” rather than “Mirror #1”. The reason is that some of the preceding method calls generated Mirrors #1-2 as part of their internal implementation.

Mirror objects aren’t unique. Multiple mirror objects may simultaneously exist that reflect on the same underlying object. The sameAs method can be used to determine if two mirrors are reflecting the same object:

console.log(m.sameAs(p)) ;  //output:  false
var opm = Mirrors.introspect(Object.prototype);
console.log(p.sameAs(opm)); //output:  true

Introspection mirrors support several other methods. The complete list can be seen by looking at the objectMirrorInterface specification in mirrorsInterfaceSpec.js. Some of the most important methods provide access to information about specific properties. Property mirrors are returned to enable introspection of actual property definitions:

var pmb = m.lookup("b");
console.log(pmb); 
  //output: "Accessor Property Introspection Mirror name: b #6"

In line 18 the method lookup on a mirror object is used to retrieve the property named “b”. What is returned in this case is a property introspection mirror. The interface specifications propertyMirrorInterface, dataPropertyMirrorInterface, and accessorPropertyMirrorInterface in mirrorsInterfaceSpec.js describe the operations that can be performed on property introspection mirrors. For example:

console.log(pmb.isData);  //output: false
console.log(pmb.isAccessor); //output: true
console.log(pmb.enumerable); //output: true
Object.defineProperty(obj,"b",{enumerable: false});
console.log(pmb.enumerable); //output: false

Lines 21-22 show tests to determine whether the reflected property is a data property or an accessor property and line 23 reports the state of the property’s enumerable attribute. Lines 24-25 demonstrate that the mirror is presenting a live view of the reflected object. Line 24 modifies the enumerable attribute of the “b” property of the reflected object. When the mirror is again used in line 25 we see that the reported state of the enumerable attribute has changed to false. Note that we had to use a built-in reflection function to change the enumerable attribute because the mirrors we are using in the above examples only support introspection and don’t allow any changes to the reflected objects to be made using the mirrors.

console.log(pmb.definedOn.sameAs(m)); //output: true
var fm=pmb.getter;
console.log(fm);  //output: "Function Introspection Mirror #8"
console.log(fm.source); //output: "function () {return \"b value\";}"

Property mirrors know what object “owns” the reflected property. Line 26 shows using definedOn to get a mirror on the owning object. We then use sameAs to verify that this mirror is actually reflecting the same object as our original mirror m. Because the property we are reflecting upon is an accessor property it has getter and setter functions. In line 27 we use the property mirror to access the property’s getter function and in line 28 we see that the result is yet another kind of mirror, a “Function Introspection Mirror”. As specified by the functionMirrorInterface in mirrorsInterfaceSpec.js, this is a kind of object mirror that adds reflection capabilities that are specific to function objects. For example, in line 29 we see that we can use the function mirror to retrieve the source code of the getter function.

The above examples provide just a quick overview of the capability of jsmirrors introspection mirrors and how they are used. But these mirrors only allow the inspection of objects. In many situations that is the only kind of reflection you need or that you will want to permit. However, there are situations where reflection needs to be able to perform other operations such as modifying the definitions of properties or calling reflected functions. In my next post, I’ll explore how jsmirrors supports those kinds of reflection and how it can be used to control or limit access to them.

A common capability of many dynamic languages, such as JavaScript, is the ability of a program to inspect and modify its own structure. This capability is generally called reflection. Examples of the reflective capabilities of JavaScript include things like the hasOwnProperty and isPrototypeOf methods. ECMAScript 5 extended the reflection capabilities of JavaScript via functions such as Object.defineProperty and Object.getOwnPropertyDescriptor. There are many reasons you might use reflection, but two very common uses are for creating development/debugging tools and for meta-programming.
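For example, the ES5 functions can be used to examine and define property attributes directly:

var point = {x: 1, y: 2};

//examine the attributes of an existing property
console.log(Object.getOwnPropertyDescriptor(point, "x"));
  //output: {value: 1, writable: true, enumerable: true, configurable: true}

//define a new non-enumerable property
Object.defineProperty(point, "id", {value: 42, enumerable: false});
console.log(point.id);           //output: 42
console.log(Object.keys(point)); //output: ["x", "y"]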

There are many different ways you might define a reflection API for a programming language. For example, in JavaScript hasOwnProperty is a method defined by Object.prototype so it is, in theory, available to be called as a method on all objects. But there is a problem with this approach. What happens if an application object defines its own method named hasOwnProperty? The application object definition will override the definition of hasOwnProperty that is normally inherited from Object.prototype. Unexpected results are likely to occur if such an object is passed to code that expects to do reflection using the built-in hasOwnProperty method. This is one of the reasons that the new reflection capabilities in ES5 are defined as functions on Object rather than as methods of Object.prototype.
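A small example shows the hazard:

var record = {
  customer: "Ada",
  hasOwnProperty: function() {return false} //shadows the built-in method
};
console.log(record.hasOwnProperty("customer")); //output: false (wrong!)

//the usual defensive workaround: invoke the built-in explicitly
console.log(Object.prototype.hasOwnProperty.call(record, "customer"));
  //output: true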

Another issue that arises with many reflection APIs is that they typically only work with local objects. Consider a tool that gives application developers the ability to graphically browse and inspect the objects in an application. If such a tool is effective, developers might want to use it in other situations. For example, they might want to inspect the objects on a remote server-based JavaScript application or to inspect a diagnostic JSON dump of objects produced when an application crashed. If JavaScript’s existing reflection APIs were used to create the tool there is no direct way it can be used to inspect such objects because the JavaScript reflection APIs only operate upon local objects within the current program.

There is also a tension between the power of reflection and security concerns within applications. Many of the reflection capabilities that are most useful to tool builders and meta-programmers can also be exploited for malicious purposes. Reflection API designers sometimes exclude potentially useful features in order to eliminate the potential of such exploits.

Mirrors is the name of an approach to reflection API design that attempts to address many of the issues that have been encountered with various programming languages that support reflection. The basic idea of mirrors is that you never perform reflective operations directly upon application objects. Instead, all such operations are performed upon distinct “mirror” objects that “reflect” the structure of corresponding application objects. For example, instead of coding something like:

if (someObj.hasOwnProperty('customer')) {...

you might accomplish the same thing with mirrors via something like:

if (Mirror.on(someObj).hasOwnProperty('customer')) {...

Mirrors don’t have the sort of issues I discussed above because when using them you never directly reflect on application objects. There is never any problem if the application just happens to define a method that has the same name as a reflection API method. Because reflection-based tools only indirectly interact with the underlying objects via mirror objects, it is possible to create different mirrors that use a common interface to access either local objects, remote objects, or static objects stored in a file. Similarly, it is possible to have mirrors that present a common interface but differ in terms of how much reflection they allow. A trusted tool might be given access to a mirror that supports the most powerful reflective operations while an untrusted plug-in might be restricted to using mirrors that support only a limited set of reflective operations.

Gilad Bracha and David Ungar are the authors of a paper that explains the principles behind mirror-based reflection: Mirrors: Design Principles for Meta-level Facilities of Object-Oriented Programming Languages. I highly recommend it if you are interested in the general topic of reflection.

Mirrors were originally developed for the Self programming language, one of the languages that influenced the original design of JavaScript. Recently, I’ve been experimenting with defining a mirror-based reflection interface for JavaScript. An early prototype of this interface, named jsmirrors, is now up on github. It uses a common interface to support reflection on both local JavaScript objects and on a JSON-based object encoding that could be used for remote or externally stored objects. It also supports three levels of reflection privilege.

In my next post I’ll explain more of the usage and design details of jsmirrors.  In the meantime, please feel free to take a look at the prototype.

(Photo by “dichohecho”, Creative Commons Attribution License)

Why Mozilla?

As somebody who is on record as believing that web browsers are a transitional technology, people occasionally ask me why I decided to go to work for a “browser company” like Mozilla. You can find a big part of the answer here:

As we move deeper into The Next Era of Computing there are still many questions about which technologies, organizations, and business models will define it. In every previous computing era and sub-era, a single proprietary “platform” emerged to dominate it. Will this happen again for the Ambient Computing Era? A common platform is essential because it provides the foundation that everything else is built upon. This enables innovators to focus on creating their unique value rather than wasting most of their time recreating necessary infrastructure. It also enables these technical innovations to be made ubiquitously available.

The current foundations of the emerging computing era are open web technologies. Can the standards-based open web maintain its role as the universal platform for this era? If so, it will need to continue to evolve and embrace innovation. Having just returned from a JavaScript standards meeting, I’m again reminded of how messy and slow consensus-driven “standards” processes can be. Standards committees are not places where rapid innovation can or necessarily should occur. Proprietary platform vendors have a real advantage in their ability to unilaterally make innovative choices about the evolution of their platforms. However, those choices are always first and foremost driven by the business interests of those organizations and their shareholders.

If the standards-based open web platform is going to continue to be the dominant platform for this era, its evolution needs to be driven by agile, innovative organizations that are dedicated to its success. We need pragmatic organizations that are driven by the interests of computing users and not just their own dominance and profitability. Mozilla is such an organization. I think it has an essential role to play in advancing the next generation of computing technology and I’m really excited to be a part of it. So, I encourage everybody to find out more about Mozilla and how you can contribute.

I’ve previously written about the SPLASH Conference and why you might consider writing for it. Now is the time to get serious, as April 8, 2011 is the submission deadline for the major SPLASH conference tracks. If you aren’t familiar with SPLASH, here is how the SPLASH website describes itself:

Since 2010 SPLASH is the new umbrella conference for OOPSLA and Onward!. This year it features a third technical track, Wavefront, designed to publish innovative work closely related to advanced development and production software. SPLASH takes on the notable track record of OOPSLA as a premier forum for software innovation, while broadening the scope of the conference  into new topics beyond objects and new forms of contributions.

The overall theme of the conference is The Internet as the world-wide Virtual Machine.  This theme captures the change in the order of magnitude of computing that happened over the past few years. These days software systems are rarely designed in isolation; they connect to pieces written by 3rd parties, they communicate with other pieces over the Internet, they use big data produced elsewhere, they touch millions of interacting users through an ever larger variety of physical devices… in other words, the “machine” is now a global computing network. What does this entail for software development itself?

SPLASH’s mission is to engage software innovators from all walks of life in conversations about bettering software. This involves new ideas about programming languages, tools, conceptual models, and methodologies that can cope with, evolve, and leverage, the complex world-wide Virtual Machine that is emerging in front of our eyes. With the contributions of many volunteers, we are putting together another exciting program for next year. We look forward to your contributions.

The SPLASH Call for Papers describes the various tracks and their submission procedures. I’m the program chair of the new Wavefront track, where we are looking for submissions describing innovative real-world (and particularly web-related) software systems:

Wavefront seeks papers that describe original and innovative architecture, design, and/or implementation techniques used in actual leading-edge software system. Submissions from practicing software developers are strongly encouraged. Research or advanced development papers must address a problem of immediate concern for such systems and present immediately applicable results.

Our goal with Wavefront is to engage the software developers who are actually creating the next generation of software systems and to make sure that their innovations are captured in the technical archives of computing. So if you are creating such systems please consider submitting. And please note that for Wavefront you don’t have to have your complete paper finished by April 8. Wavefront is accepting 2-5 page extended abstracts, and if your abstract is accepted we will shepherd you through the process of creating a high-quality technical paper. I highly recommend that you take advantage of this option if this is your first submission to a publication-oriented technical conference. Additional details can be found in the Wavefront Call for Papers.

I look forward to receiving your submissions.  If you have any questions about Wavefront please email me.

Allen Wirfs-Brock
2011 SPLASH/Wavefront Program Chair

My recent post on testing for negative 0 in JavaScript created a lot of interest.  So today, I’m going to talk about another bit of JavaScript obscurity that was also inspired by a Twitter thread.

I recently noticed a tweet go by asking what this expression produces:

["1","2","3"].map(parseInt)

This was obviously a trick question. Presumably some programmer expected this expression to produce an array like [1, 2, 3], and it doesn’t. Why not? What does it actually produce? I didn’t cheat by immediately typing it into a browser, but I did quickly look up something in my copy of the ECMAScript 5 specification. From the spec it appeared clear that the answer would be:

[1, NaN, NaN]

I then typed the expression into a browser and that was exactly what I got.  Before I explain why, you may want to stop here and see if you can figure it out.

OK, here is the explanation. parseInt is  the built-in function that attempts to parse a string as a numeric literal and return the resulting number value.  So, a function call like:

var n = parseInt("123");

should assign the numeric value 123 to the local variable n.

You might also know that if the string can’t be parsed as a numeric literal, parseInt will return the value NaN. NaN, which is an abbreviation for “Not a Number”, is a value that generally indicates that some sort of numeric computation error has occurred. So, a statement like:

var x = parseInt("xyz");

assigns NaN to x.

map is a built-in Array method that is in ECMAScript 5 and which has been available in many browsers for a while. map takes a function object as its argument. It iterates over each element of an array and calls the argument function once for each element, passing the element value as an argument. It accumulates the results of these function calls into a new array. Consider this example:

[1,2,3].map(function (value) {return value+1})

it will return a new array [2,3,4]. It is probably most common to see a function expression such as this passed to map, but it is perfectly valid to pass an already existing function object such as parseInt.

So, knowing the basics of parseInt and map it is pretty clear that the original expression was intended to take an array of numeric strings and to return a corresponding array containing the numeric value of each string.  Why doesn’t it work?  To find the answer we will need to look more closely at the definition of both parseInt and map.

Looking at the specification of parseInt you should notice that it is defined as accepting two arguments. The first argument is the string to be parsed and the second specifies the radix of the number to be parsed. So, parseInt("ffff",16) will return 65535 while parseInt("ffff",8) will return NaN because "ffff" doesn’t parse as an octal number. If the second argument is missing or 0 it defaults to 10, so parseInt("12",10), parseInt("12"), and parseInt("12",0) all produce the number 12.

Now look carefully at the specification of the map method.  It refers to the function that is passed as the first argument to map as the callbackfn.  The specification says, “the callbackfn is called with three arguments: the value of the element, the index of the element, and the object that is being traversed.” Read that carefully.  It means that rather than three calls to parseInt that look like:

parseInt("1")
parseInt("2")
parseInt("3")

we are actually going to have three calls that look like:

parseInt("1", 0, theArray)
parseInt("2", 1, theArray)
parseInt("3", 2, theArray)

where theArray is the original array ["1","2","3"].

JavaScript functions generally ignore extra arguments and parseInt only expects two arguments, so we don’t have to worry about the effect of the theArray argument in these calls. But what about the second argument? In the first call the second argument is 0, which we know defaults the radix to 10, so parseInt("1",0) will return 1. The second call passes 1 as the radix argument. The specification is quite clear about what happens in that case. If the radix is non-zero and less than 2 the function returns NaN without even looking at the string.

The third call passes 2 as the radix argument.  This means that the string to convert is supposed to be a binary number consisting only of the digit characters "0" and "1". The parseInt specification (step 11) says it only tries to parse the substring to the left of the first character that is not a valid digit of the requested radix.  The first character of the string is "3" which is not a valid base 2 digit so the substring to parse is the empty string. Step 12 says that if the substring to parse is the empty string, the function returns NaN. So, the result of the three calls will be 1, NaN, and NaN.
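You can check each of the three results directly in a console:

parseInt("1", 0); //output: 1   (a radix of 0 defaults to 10)
parseInt("2", 1); //output: NaN (a non-zero radix less than 2 is invalid)
parseInt("3", 2); //output: NaN ("3" is not a binary digit, so the
                  //substring to parse is empty)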

The programmer of the original expression made at least one of two possible mistakes that caused this bug. The first possibility is that they either forgot or never knew that parseInt accepts an optional second argument.  The second possibility is that they forgot or never knew that map calls its callbackfn with three arguments. Most likely, it was a combination of both mistakes. The most common usage of parseInt passes only a single argument and most functions passed to map only use the first argument so it would be easy to forget that additional arguments are possible in both cases.

There is a straightforward way to rewrite the original expression to avoid the problem. Use:

["1","2","3"].map(function(value) {return parseInt(value)})

instead of:

["1","2","3"].map(parseInt)

This makes it clear that the callbackfn only cares about a single argument and it explicitly calls parseInt with only one argument.  However, as you can see it is much more verbose and arguably less elegant.

After I tweeted about this, there was an exchange about how JavaScript might be extended to avoid this problem or to at least make the fix less verbose.  Angus Croll (@angusTweets) suggested the problem could be avoided simply by using the Number constructor as the callbackfn instead of parseInt. Number called in this manner will also parse a string argument as a decimal number and it only looks at one argument.
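The difference is easy to see side by side:

["1","2","3"].map(parseInt); //output: [1, NaN, NaN]
["1","2","3"].map(Number);   //output: [1, 2, 3]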

@__DavidFlanagan suggested that perhaps a mapValues method should be added which only passes a single argument to the callbackfn. However, ECMAScript 5 has seven distinct Array methods that operate similarly to map (forEach, map, filter, some, every, reduce, and reduceRight), so we would really have to add seven such methods.

I suggested the possibility of adding a method that might be defined like:

Function.prototype.only = function(numberOfArgs) {
   var self = this; //the original function
   return function() {
      return self.apply(this,[].slice.call(arguments,0,numberOfArgs))
   }
};

This is a higher-order function: invoked as a method on a function object, it returns a new function that calls the original function but with an explicitly limited number of arguments. Using only, the original expression could have been written as:

["1","2","3"].map(parseInt.only(1))

which is only slightly more verbose and arguably retains a degree of elegance.

This led to a further discussion of curry functions (really partial function application) in JavaScript. Partial function application takes a function that requires a certain number of arguments and produces a new function that takes fewer arguments. My only method is an example of a function that performs partial function application. So is the Function.prototype.bind method that was added to ES5. Does JavaScript need such additional methods? For example, a bindRight method that fixes the rightmost arguments rather than the leftmost. Perhaps, but what does rightmost even mean when a variable number of arguments are allowed? Probably a bindStartingAt method that takes an argument position would be a better match for JavaScript.
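As a sketch, a hypothetical bindStartingAt might look like this (the name and semantics are speculative):

Function.prototype.bindStartingAt = function(position /*, fixedArgs... */) {
   var self = this; //the original function
   var fixed = [].slice.call(arguments, 1);
   return function() {
      var leading = [].slice.call(arguments, 0, position);
      return self.apply(this, leading.concat(fixed));
   }
};

//for example, fixing parseInt's radix (argument position 1) to 10:
var parseDecimal = parseInt.bindStartingAt(1, 10);
["1","2","3"].map(parseDecimal); //output: [1, 2, 3]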

However, all this discussion of extensions really misses the key issue with the original problem. In order to use any of them, you first have to be aware of the optional argument mismatch between map and parseInt. If you are aware of the problem there are many ways to work around it. If you aren’t aware then none of the proposed solutions help at all. This really seems to be mostly an API design problem and raises some fundamental questions about the appropriate use of optional arguments in JavaScript.

Supporting optional arguments can simplify the design of an API by reducing the total number of API functions and by allowing many users to only have to think about the details of the most common use cases.  But as we see above, this simplification can cause problems when the functions are naively combined in unexpected ways.  What we are seeing in this example is that there are really two quite different use cases for optional arguments.

One use case looks at optional arguments from the perspective of the caller. The other use case is from the perspective of the callee.  In the case of parseInt, its design assumes that the caller knows that it is calling parseInt and has chosen actual argument values appropriately.  The second argument is optional from the perspective of the caller. If it wants to use the default radix it can ignore that argument.  However, the actual specification of parseInt carefully defines what it (the callee) will do when called with either one or two arguments and with various argument values.

The other use case is more from the perspective of a different kind of function caller: a caller that doesn’t know what function it is actually calling and that always passes a fixed-size set of arguments. The specification of map clearly defines that it will always pass three arguments to any callbackfn it is provided. Because the caller doesn’t actually know the identity of the callee or what actual information the callee will need, map passes all available information as arguments. The assumption is that an actual callee will ignore any arguments that it doesn’t need. In this use case the second and third arguments are optional from the perspective of the callee.

Both of these are valid optional argument use cases, but when we combine them we get a software “impedance mismatch”. Callee-optional arguments will seldom match with caller-optional arguments. Higher-order functions such as the bind or only methods can be used to fix such mismatches but are only useful if the programmer is aware that the mismatch exists. JavaScript API designers need to keep this in mind and every JavaScript programmer needs to take extra care to understand exactly what will be passed to a function used as a “call back”.

Update 1: Correctly credit Angus Croll for map(Number) suggestion.