
Update January 2018: In modern JavaScript engines that implement the ECMAScript 2015 specification the Object.is built-in function can be used to test for -0.
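
For example, in an engine that implements ES2015, Object.is distinguishes the two zeros directly:

Object.is(-0, -0);  // true
Object.is(+0, -0);  // false
Object.is(0, -0);   // false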

A few minutes ago I saw a tweet go by from my colleague, Dave Herman (@littlecalculist):


For background you need to know that JavaScript Number values have distinct representations for +0 and -0. This is a characteristic of the IEEE floating point numbers upon which the specification for JavaScript numbers is based. However, in most situations JavaScript treats +0 and -0 as equivalent values: both -0==+0 and -0===+0 evaluate to true. So how do you distinguish them if you really need to? You have to find something in the language where they are treated differently.

Dave’s solution uses one such situation. In JavaScript, division of a non-zero finite number by zero produces either +Infinity or -Infinity, depending upon the sign of the zero divisor. +Infinity and -Infinity can be distinguished using the == or === operators, so that gives us a test for -0.
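
In other words, the sign of the zero shows up in the sign of the resulting infinity:

1/+0;  // Infinity
1/-0;  // -Infinity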

Update: As Jeff Walden points out in a comment, Dave’s solution isn’t adequate because -0 isn’t the only divisor of 1 that produces -Infinity as its result. That problem can be solved by changing the body of his function to:

return x===0 && (1/x)===-Infinity;
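
Putting that together, the complete division-based test would look something like this (the function name isNegativeZero and the wrapper are mine, since the original tweet isn’t reproduced above):

function isNegativeZero(x) {
   return x===0 && (1/x)===-Infinity;
}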

Are there any other ways to test for -0? As far as I know, until recently there weren’t any. However, ECMAScript 5 added a new way to make this test. Here is what I came up with as a solution:

function isNegative0(x) {
   if (x!==0) return false;
   var obj=Object.freeze({z:-0});
   try {
      Object.defineProperty(obj,'z',{value:x});
   } catch (e) {return false};
   return true;
}

If you are a specification junkie you may want to stop and take a few minutes to see if you can figure out how this works. For the rest of you (that care) here is the explanation:

Line 2 takes care of all the values that aren’t either -0 or +0; for any other value x!==0 is true and the function immediately returns false. The rest of the function is about distinguishing the two zero values. Line 3 creates an object with a single property whose value is -0. The object is frozen, which means its properties cannot be modified. Line 5 tries to use Object.defineProperty to set the value of the frozen object’s sole property to the value of x. Because the object is frozen you would expect this to fail (and throw an error) as it is an attempt to change the value of a frozen property. However, the specification of defineProperty says it only throws if the value (or other property attributes) being set is actually different from the current value. And “different” is defined in a manner that distinguishes -0 and +0. So if x is +0 this is an attempt to change the value of the property and an exception is thrown. If x is -0 it isn’t a change and no exception is thrown. Line 6 catches the exception and returns false, indicating that the argument isn’t -0. If defineProperty didn’t throw, we fall through to line 7 and return true, indicating that the argument is -0. For the spec junkies, the key places that make this happen are [[DefineOwnProperty]] (section 8.12.9) and the SameValue algorithm (9.12).
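
A few quick checks of the function, following the logic just described:

isNegative0(-0);    // true
isNegative0(0);     // false -- defineProperty throws, the catch returns false
isNegative0(5);     // false -- fails the !== test on line 2
isNegative0("x");   // false -- also fails the !== test on line 2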

I don’t think this satisfies Dave’s request for a more straightforward test, but it does illustrate a subtle feature of the ES5 specification. But does it actually work? This is one of those obscure specification requirements that you could easily imagine being overlooked by a busy JavaScript engine developer. I tested the above isNegative0 on pre-production versions of both Firefox 4 and Internet Explorer 9 and was pleasantly surprised to find that they both worked for this test exactly as specified in ES5. So congratulations to both the FF and IE developers for paying attention to the details.

(And thanks to Peter van der Zee, @kuvos, for suggesting this is worth capturing in a blog post)

In the graphic for my Third Era of Computing post I have two pairs of lines labeled “Transitional Technologies”. In my model, a transitional technology is a technology that emerges as a computing era settles into maturity and which is a precursor to the successor era. Transitional technologies are firmly rooted in the “old” era but also contain important elements of the “new” era. 

Transitional technologies that preceded the emergence of the Personal Computing Era included time-sharing and minicomputers.  Both of these technologies emerged as corporate computing matured and both technologies personalized, to a degree, human interaction with corporate computing resources.  Time-sharing allowed individuals to directly access a fractional share of large mainframe computing resources.  Minicomputers reduced the cost and complexity of computers to the point where they could be applied to departmental level problems and could be programmed or administered by individuals.

Time-sharing and minicomputers were significant steps towards the Personal Computing Era as they offered the first glimpses of what is possible when a computer is used to empower individuals. Most of the earliest personal computers physically resembled minicomputers and their operating systems were modeled after minicomputer OSes  and time-sharing user interfaces.  However, the true archetype of the modern personal computer anticipated something very different than a scaled down corporate computer.

My diagram shows cellphones and the “www” as two of the transitional technologies from the Personal Computing Era to the Ambient Computing Era. By “www” I meant both the concept of ubiquitous information access via public websites and the personal computer hosted browser applications used to access such information.  Both cellphones and the web established themselves as mainstream technologies in the 1990’s, just as personal computing was reaching maturity.  Both are personal and task-centric.  The browser itself is the epitome of a Personal Computing Era application program.

Both the cellphone and the www give us a glimpse of things to come in the Ambient Computing Era and both establish some of the foundation technologies for that era. But we shouldn’t expect the Ambient Computing Era, as it matures, to be just a refinement of cellphones and web browsers any more than the Personal Computing Era was only a refinement of time-sharing and minicomputers.

Right now, we seem to be in a second golden age of browser innovation, but that doesn’t mean that the browser, as we know it, will continue into the Ambient Computing Era. Recall that Digital Equipment Corporation, the minicomputer company that grew into the world’s second largest computer company, hit the all-time high for its stock in 1987. That was the same year that Apple introduced the Mac II and Microsoft introduced Windows 2.0. Ten years later the Personal Computing Era was firmly established but DEC and the minicomputer were no more.

For those of us who work on browser technologies, that means it isn’t good enough to just create a great new PC-based web browser release every year or two (or even every six weeks, or every three months). We also have to aggressively work on the new technologies and user experiences that will make the web browser irrelevant. Personally, I expect that many web technologies including HTML, CSS, and JavaScript are going to be foundational for the Ambient Computing Era, but I don’t expect them to be packaged in a browser-like application running on a Windows or Mac PC. That transition has already started.

This week I’ve been doing some analysis of the specification of object semantics in the current ECMAScript standard. This is in support of some new proposals and specifications that I’m writing for the next edition of ECMAScript. Some of this material may be useful for readers of the specification, and in particular developers who are implementing the specification, so I’m making it available here. As I create or find other useful resources relating to the ECMAScript spec. I’ll post links to them at the same place.

Most of us spend most of our time working on immediate problems. Designing a new site, adding a feature to an app, revising a specification, etc. We all need to focus on these short-term problems but sometimes it is useful to step back and look at the larger context within which we are working.

Last fall at the Dynamic Language Symposium I gave a talk that concluded with this slide:

It abstracts some key points of my perspective on the history and near future of computing.  In this and some future posts I’ll be writing about some of those ideas. For readers with an analytic bent, don’t worry too much about what the y-axis represents.  This is a conceptual timeline, not a graph of any actual data.   The  y-axis was intended to be something like overall impact of computing upon average individuals but can also be seen as an abstraction of other relevant factors such as economic impact.

The most important idea from the slide is that, in my view, there have been three major “eras” of computing (or, if you prefer, of the “information age”). Each of these eras represents a major difference in the role computers play in human life and society. The three eras also correspond to major shifts in the dominant form of computing devices and software. We are currently in the early days of the third era.

The first era was the Corporate Computing Era.  It was focused on using computers to enhance and empower large organizations such as commercial enterprises and governments.  Its applications were largely about collecting and processing large amounts of schematized data. Databases and transaction processing were key technologies.

During this era, if you “used a computer” it would have been in the context of such an organization. However, the concept of “using a computer” is anachronistic to that era. Very few individuals had any direct contact with computing and, for most of those that did, the contact was only via corporate information systems that supported some aspects of their jobs.

The Corporate Computing Era started with the earliest days of computing in the 1950’s and obviously corporate computing still is and will continue to be an important sector of computing.  However, around 1980 the primary focus of computing started to rapidly shift away from corporate computing.  This was the beginning of the Personal Computing Era.

The Personal Computing Era was about using computers to enhance and empower individuals.  Its applications were largely task-centric and focused on enabling individuals to create, display, manipulate, and communicate relatively unstructured information. Software applications such as word processors, spreadsheets, graphic editors, and email were key technologies.

Today we seem to be in the early stages of a new era of computing. A change to the dominant form of computing is occurring that will be at least as dramatic as the transition from the Corporate Computing Era to the Personal Computing Era. This new era of computing is about using computers to augment the environment within which humans live and work. It will be an era of smart devices, perpetual connectivity, ubiquitous information access, and computer augmented human intelligence.

We don’t yet have a universally accepted name for this new era.  Some people call it post-PC, pervasive,  or ubiquitous computing.  Others focus on specific technical aspects of the new era and call it cloud, mobile, or web computing.  The term that I currently prefer and will use for now is “ambient computing.”  In the Ambient Computing Era humans live in a rich environment of communicating computing devices and a ubiquitous cloud of computer mediated information.  In the Ambient Computing Era there will still be corporate computing and task-oriented personal computing style applications will still be used.  But the defining characteristic of this era will be the fact that computing is shaping the actual environment within which we live and work.

As I discussed in one of my first posts, a transitional period between eras is an exciting time to be involved in computing. We all have our immediate goals, and much of the excitement and opportunity is focused on shorter term objectives. But while we work to create the next great web application, browser feature, smart device, or commercially successful site or service we should occasionally step back and think about something bigger: What sort of ambient computing environment do we want to live within, and is our current work helping or hindering its emergence?

SPLASH – Write to Share

One of my goals as a member of the Mozilla research team is to encourage more public sharing of innovative ideas that contribute to the open web. One way that this sharing can occur is via conference papers. My last post announced the call for papers for the ACM SPLASH conference Wavefront program. I want to tell you a bit more about SPLASH/Wavefront and why your participation contributes to the development of the open web.

To start, I want to talk about different kinds of conferences. The conferences you are probably most familiar with are what I call “selling conferences”. (In all fairness, it would probably be equally accurate to call these “learning conferences” but I’m writing this from the perspective of a potential speaker rather than a non-speaker attendee.) In these conferences, the speakers are “experts” and generally have something (a product, a methodology, a programming language, a software tool, an interesting idea, etc.) that they are pitching. The audience is there to learn about what’s new and to choose new things to try (“buy”). There is a directed one-way information flow from the expert speakers to the audience. Many selling conferences are for-profit commercial ventures and speakers are often chosen for their ability to attract paying attendees. A primary goal of many selling conferences is simply to get people to pay to attend.

Another kind of conference is what I call a “sharing conference”. At these conferences, the speakers and audience members are generally professional peers. Everybody is there to share new ideas and to provide feedback on them. Speakers are chosen based upon the novelty and potential impact of the work they will speak about. Most sharing conferences are non-commercial and their primary goal is to advance the state-of-the-art by sharing important new ideas and developments. The most significant sharing conferences are “conferences of record” where speakers are required to document their work in an archival report (a paper). This enables the sharing to continue after the conference concludes and with those who cannot physically attend. The conference papers become a resource that may remain accessible and relevant for years after the actual conference.

Using these conference characterizations, SPLASH is a “sharing conference of record”. It is sponsored by the ACM, the world’s largest computing society, and SPLASH papers are archived in the ACM Digital Library. SPLASH and conferences like it exist to foster the sharing of new ideas among computing innovators. This year, with the introduction of its Wavefront program, webish computing is becoming one of the focus areas for SPLASH.

There are relatively few conferences of this sort that are focused on webish technologies. In addition to SPLASH/Wavefront another is WebApps which coincidentally is also being held in Portland this year.

If you are pushing the state-of-the-art of webish computing and want your ideas to have an impact beyond your immediate projects you should consider submitting a paper to either or both of these conferences.  For most of us, shipping code is our top priority. But sharing our innovative ideas is also essential to the future of the open web.

The ACM SPLASH Conference has published the call for papers for a new SPLASH program component called Wavefront that is focused on what I call “webish” computing.  To quote the CfP:

The nature of computing is rapidly changing. Whether you label it ubiquitous, ambient, pervasive, social, mobile, web, cloud, or post-PC computing, it touches all aspects of human life. Wavefront papers are about the real systems, programming languages, and applications that are at the heart of this transition.

The Wavefront program is looking for submissions about real systems from working software developers, not only academic researchers.  It encourages submissions from authors who have not previously published conference papers and has a shepherding process to help such authors prepare their final paper.

I’m the SPLASH/Wavefront program chair and I encourage anyone who is innovating in this area to start working on a submission.  The submission deadline is April 8, 2011.  SPLASH 2011 will be October 22-27, 2011 in Portland, Oregon. See the CfP for more details.

Over the next few days I will have additional background posts about SPLASH/Wavefront and why you should consider making a submission and/or consider attending.

At the core of the JavaScript language is its “object model”. An object model defines the object abstraction of a language. It tells users how to think about objects in the language: how objects are composed and what they can do. It also tells language implementers what they must manifest as an object to users of the language.

JavaScript has a very simple object model.  An object is essentially just a set of key/value pairs called properties where the keys are string values.  In addition each object has an “inherits properties from” relationship with another object.  Some users and some implementers actually think in terms of a slightly more complex object model where in addition to string key/value pair properties an object may have indexed properties where the keys are integers.  However, that elaboration isn’t really essential to the understanding of the JavaScript object model because such integer keys can be understood in terms of their string representations.
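
A quick illustration of that last point — an integer key is really just the string form of that integer:

var a = {};
a[1] = "one";     // the integer key 1 is converted to the string "1"
a["1"];           // "one" -- the same property
a[1] === a["1"];  // true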

Developers of JavaScript implementations spend a lot of time designing ways to optimize their implementation of the JavaScript object model. The simplicity of the object model allows for a very simple implementation, but such simple implementations will typically have very poor performance. In order to have excellent performance, implementers need to develop mechanisms that optimize the implementation while still maintaining the JavaScript programmer’s perception of its simple basic object model. These mechanisms typically include complex caching and analysis techniques that try to transparently eliminate most of the dynamic overhead of the object model.

The object model defines many programmers’ understanding of JavaScript and it plays a central role in the design of JavaScript implementations. For these reasons, any major proposal to extend JavaScript needs to be critically examined for its impact upon the existing JavaScript object model. If the extension requires a major change to the object model, it may be difficult for programmers to understand and use. For implementers, even seemingly simple object model changes may require significant redesign of existing optimization mechanisms and the invention of new techniques.

When possible, it is probably better to try to support a new requirement by extending some existing characteristic of the object model rather than by adding something that is totally new and unrelated to anything in the current object model. Such extensions are more likely to be understood by programmers and to most easily fit into existing implementation designs. For example, ECMAScript 5 added the concept of accessor (getter/setter) properties to the object model. It did this by extending what can constitute the “value” part of a property. Similarly, the Private Names proposal for ECMAScript Harmony extends what can constitute the “key” part of a property. Both proposals are similar in that they build upon preexisting object property characteristics. They don’t add major new concepts to the object model that are not directly related to properties.
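
For instance, an ES5 accessor property is still just a property with an ordinary string key; only the “value” side is generalized into a get/set pair. A small sketch using Object.defineProperty:

var point = {x: 3, y: 4};
Object.defineProperty(point, "distance", {
   get: function () {return Math.sqrt(this.x*this.x + this.y*this.y);}
});
point.distance;  // 5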

There may be future situations that justify the conceptual and implementation cost of extending the JavaScript object model with concepts that are not related to properties.  However, the likely benefit of such an extension needs to be very large. For that reason, I want to propose a principle that any designer of a JavaScript extension should use as a starting point.  Try to work within the basic key/value pair property and prototype inheritance design of the current JavaScript model. Only introduce new non-property concepts into the object model as a last resort.

The latest JavaScript language standard, ECMAScript 5, was approved by the Ecma International General Assembly one year ago. Since then it has seen rapid adoption in new browser releases.

Once approved by Ecma, ES5 entered a process to become an ISO standard. That process should be completed in early 2011.  The ISO edition of the ES5 specification incorporates a number of editorial and technical corrections including those listed in the current ES5 errata.

In order to keep the ISO and Ecma specifications in strict alignment TC39, the Ecma standards committee responsible for ECMAScript,  has prepared a revision to the ES5 spec. whose content is identical to the ISO version. It also includes a new Annex F that lists the technically significant changes incorporated into the revision. This revision will be known as Ecma-262, Edition 5.1.  We’ll probably just talk about it as ES5.1.

The final draft of the ES5.1 spec. is now available from the TC39 wiki.

Keep in mind that this is only a maintenance revision of the ES5 specification.  It contains no new language or library features.  TC39 is continuing its longer term work on “ECMAScript Harmony” which is intended to be the next version to include any new features.

(Note that I’m the project editor for the ES5 and ES5.1 specs.)

One of the goals for ECMAScript Harmony, the project to define the next versions of the JavaScript standard, is to make JavaScript a better language for writing complex applications. Better support for object-oriented encapsulation, information hiding, and abstraction should help JavaScript programmers deal with such applications.

Today, I’m going to talk specifically about a proposal for better information hiding in JavaScript.  As I use the term, information hiding means that an object should clearly separate its stable public interface from its private implementation details. Properties that exist solely as implementation details should be hidden from “users” of that object.

JavaScript today does not provide much support for information hiding. All properties have public string names and there is no language mechanism to tag properties as public or private. Some programmers use naming conventions as a very weak form of information hiding. Programmers looking for a stronger form often abandon the use of implementation properties (and prototype inheritance) and use closure capture to represent implementation state.
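
For example, the closure-capture approach typically looks something like this (a minimal sketch; the names are mine):

function makeCounter() {
   var count = 0;   // implementation state captured by the closure, not stored as a property
   return {
      increment: function () {return ++count;},
      value: function () {return count;}
   };
}
var c = makeCounter();
c.increment();  // 1
c.count;        // undefined -- the state isn't a property of the object at all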

When evolving a language to address some specific problem, it is natural to look at how other languages approach that problem. Many people are familiar with information hiding in C++ or Java and may assume that JavaScript should implement information hiding in that same familiar way. In those languages “private” is an attribute of a member (field or method) of a class. It means that a private member is only accessible to other members of the same class. The fact that member definitions are encapsulated together as a class definition is key to this approach to information hiding.

Java-like information hiding is not a particularly good match to the JavaScript object model, where the structure of an object is much more dynamic and method functions can be dynamically associated or disassociated with an object and shared by many different kinds of objects.

Let’s look at a different approach to information hiding that may be a better fit to JavaScript. In this approach “private” is an attribute of a property name, rather than of an actual property.  Only code that knows a “private name” can use that name to create or access a property of any object. It is knowledge of the name that is controlled rather than accessibility to the property.

Let’s look at how this might look in code.  First assume that a private declaration creates a unique private name and associates it with a local identifier that provides access to that unique name:

private implDetail;

The local identifier can then be used to create object properties whose “key” is that unique private name and also to reference such properties:

function MyObj() {
   private implDetail;
   this.implDetail=42;
   this.answer = function() {return this.implDetail};
}

Remember that the name of the property created above is not actually the string "implDetail" but instead it is a unique value that was created by the private declaration. The identifier implDetail is just a lexically scoped handle that is used to access that private name value. So:

var obj=new MyObj();
alert(obj["implDetail"]);  //"undefined"
alert(obj.implDetail);     //"undefined"
alert(obj.answer());       //"42"

For the first alert, the message is "undefined" because obj does not have a property with the string name "implDetail". Instead it has a property with a private name that does not match that string. In the second alert, the statement is not within the lexical scope of the private declaration (which is inside the constructor MyObj), so the identifier implDetail does not correspond to the private name. Finally, the third alert calls a method which does have access to the private name, so it can return the value of the property.

This is a very powerful mechanism for adding information hiding to the JavaScript object model.  By following various usage patterns it is possible to implement the equivalent of both instance private and “class” private properties and also the equivalent of C++ friends visibility.  It can also be used to make extensions to built-in objects such as Object.prototype in a manner that is guaranteed not to collide with independently developed code that might try to make similar extensions.
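
As a sketch of one such pattern (using the proposed syntax, so this is speculative rather than something you can run today): declaring the private name in a scope shared by a constructor and its prototype methods gives the effect of a “class” private property, since only code in that scope can use the name.

private count;
function Counter() {
   this.count = 0;
}
Counter.prototype.increment = function () {return ++this.count};
// code outside this lexical scope cannot name the count property,
// so independently developed code cannot read it or collide with it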

A complete strawman proposal for this style of information hiding has been prepared for ECMAScript Harmony. It covers many more of the technical details and provides many more complete examples of how the feature could be used. Take a look and let me know what you think. We’ll be discussing it in the standards committee but I’d like to get feedback from a broader range of JavaScript programmers. Does this proposal address your information hiding needs? Does it fit well with the language as you use it?

Please, No Browser Monoculture

Dave Mandelin has a nice post responding to the Google V8 team’s new Crankshaft additions to their JavaScript engine.  Good reading, but all pretty much what you would expect in the currently highly competitive world of JavaScript implementations.  What really caught my attention was a comment by  RH Ryan that, in part,  said:

Call me naive, but why can’t you guys all get together and work on 1 common ECMAScript engine. It seems like a huge waste of human intelligence for there to be 3 major open-source Javascript VMs. The Chrome Team have gone ahead and said “OK Fair Enough, we don’t actually want to re-invent the wheel with another DOM renderer, let’s use WebKit”. It won’t affect Firefox’s popularity in the slightest, and you can spend time working on making either V8 faster, or making Firefox itself better!

I admire your work and am not trying to diminish its impact. It would take time but I think both projects (and Safari if they were amenable to the idea) and their combined millions of users would benefit from getting your collective compiler/VM/JIT-expert heads together.

So, sorry Rusty, but I am going to have to call you naive. Competition is good.  It’s what drives innovation whether you are talking about commercial or open source software development.  Software monocultures stifle innovation, even when they are open source based.

Even with the best of intentions such a combination couldn’t innovate as fast as what we have been seeing going on with JavaScript implementations for the last few years.  There are always competing technical approaches for solving hard problems like building a fast JavaScript engine.  It seldom is clear which approach is best (if best even exists).  But to actually ship software you have to make a choice and stick with it.   Sometimes you make the wrong choice and just have to live with it for a while.  Often you won’t even know if you made the wrong choice unless somebody demonstrates that an alternative would have been better.  Multiple independent teams working on competing JavaScript engines allow technical alternatives to be experimented with in parallel.  We all get to see which approaches work best and when the code is open source we can all benefit from the most successful efforts.

So no, we shouldn’t wish for a single universal JavaScript engine. In the same vein, we should wish for more competition for WebKit as a core browser building block. The last thing we need is a browser monoculture.