JavaScript Frameworks Are Too Small

by Gordon. 5 Comments

A while back I stumbled upon a great post by Jean-Baptiste Queru. It describes the incredible depth of the modern technology stack. Layers upon layers of complex science, hardware, and software, each layer creating a simpler abstraction around the previous. This ultimately enables our paltry human brains to create amazing things that would otherwise be impossible (or really difficult). This is, in my opinion, the lifeblood of modern software development.

For some reason, however, when it comes to front-end web development – meaning JavaScript – the stack is extremely shallow. Most websites are built on top of native browser functionality with a sprinkling of jQuery and little else. This attitude is embedded in the developer culture itself: JavaScript frameworks are lauded for their small sizes and even smaller feature sets, and sites exist exclusively to categorize and compile lists of these “micro-frameworks”, with strict size requirements of less than five kilobytes.

This is not to say that significant value can’t be created by a framework with a small codebase. However, given the choice between two frameworks with equally well-written code, I would probably opt for the larger framework [1]. Choosing a framework for its small size is a premature optimization. Taking this a step further, given a choice between tying together two unrelated “micro-frameworks” and one larger framework, I would definitely opt for the latter.

Tom Dale begins a similar post with the following:

Why does it take big teams, big budgets, and long timelines to deliver web apps that have functionality and UI that approaches that of an iPhone app put together by one or two people?

Although I won’t comment on the number of people required in either case, I completely agree with the implicit assertion that mobile development is more efficient. As a developer who has built desktop, web, and mobile applications for years, I have always felt that, in web development specifically, more energy goes towards dealing with the frameworks involved than towards the problem being solved. There is also an uncomfortable, almost nauseating, feeling that my code is not as modular and reusable as I would like and have come to expect from other development stacks.

The reason for this is that JavaScript frameworks are simply too small and unstructured. Client-side web developers are not building atop strong enough abstractions to bring their efficiency up to par. Even Backbone.js, the web’s darling client-side JavaScript framework, weighs in at a mere 4.6kb. Having built an application against Backbone, I can attest that it more closely resembles a philosophy, or a set of guidelines to develop against, than a full-fledged UI framework.
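To see just how thin an abstraction of that size is, here is a sketch (my own illustrative code, not Backbone’s actual implementation) of the core pattern such a micro-framework provides: a model object with change events, in a handful of lines of plain JavaScript.

```javascript
// A minimal model with change events -- roughly the kind of abstraction
// a "micro-framework" offers, and little more. Illustrative only.
function Model(attributes) {
  this.attributes = attributes || {};
  this.listeners = [];
}

Model.prototype.get = function (key) {
  return this.attributes[key];
};

Model.prototype.set = function (key, value) {
  this.attributes[key] = value;
  // Notify anyone listening, much like a "change" event.
  this.listeners.forEach(function (fn) { fn(key, value); });
};

Model.prototype.onChange = function (fn) {
  this.listeners.push(fn);
};
```

Everything else – views, routing, rendering, data binding – is left as an exercise for the application developer, which is exactly the problem.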

Yes, I know larger JavaScript web frameworks exist. SproutCore, Cappuccino, and Google Web Toolkit come to mind. I hear good things about all of these frameworks. However, none of them has reached the level of ubiquity that one would hope. They all suffer from similar ailments: they are constraining and force a particular native-like paradigm. For instance, there is no reason for a web framework to implement its own layout manager. HTML5 is probably the richest layout system in existence, and most modern heavily-designed web applications prefer direct access to it.

I am constantly searching for a modern client-side JavaScript framework that has the right level of abstraction. Today, I am extremely excited about Ember.js (formerly SproutCore 2.0). It adds some deep layers of functionality without the constraints and bloat of the original SproutCore. I think it will continue to evolve into an amazing framework. Looking further out, I am excited about initiatives such as Dart. Despite receiving a lot of negative criticism, Dart looks promising as a new language for the web (although I would have preferred a raw VM). In particular, I feel that the awkwardness around implementing packages and re-usable code in JavaScript is partly to blame for the current state of affairs.

In any case, as web applications continue to become increasingly complex, the emergence of larger and richer frameworks is inevitable – and it’s about time.

[1] It is important to clarify this choice with respect to size and horizontal bloat. Here I am using the term “larger” to denote a framework which has a stronger abstraction and more utility for what I am actually using it for (as opposed to a bunch of unnecessary features).

Design Tools Are Broken

by Gordon. 13 Comments

Spending lots of time both coding and designing has given me an acute awareness of how poor design tools are. Design itself has come a long way, but the design process has not. As a disclaimer, most of what is about to be said applies to user interface and graphic design, as opposed to illustration and the creative process.

The culture around design is primarily focused on the end product and not the process behind it. Photoshop, the industry standard for user interface design, is pixel-centric (limited vector support being a later addition). Trivial things such as rounding corners or changing resolutions are often non-trivial to accomplish. Re-usable assets, such as those found on Smashing Magazine, often consist of static images, hopefully in multiple resolutions. Cascading stylesheets, the lifeblood of web design, are exceedingly repetitive and usually degrade into an unmaintainable soup.

For comparison, the culture of software development largely revolves around getting things done cleanly, efficiently, and maintainably. Entire programming frameworks and communities arise around the idea of simplifying things developers can already do (e.g., build websites). Existing bodies of code are refactored entirely to be cleaner, despite having fewer features. New languages emerge with the goal of making software easier to develop. Re-usable libraries exist for almost every language and programming task imaginable.

The stereotypical developer is horrible at graphic design, but much of this is due to an impedance mismatch between software development and design practices. From the perspective of a good developer, many of the techniques that are required to create good design are appalling and contrary to their core philosophy.

Massive Violation of DRY

From Wikipedia:

Don’t Repeat Yourself (DRY) or Duplication is Evil (DIE) is a principle of software development aimed at reducing repetition of information of all kinds…

A rule of thumb for this: if you find yourself doing the same thing often, there probably is (or should be) a better way to do it. Whether I am writing CSS or creating layouts and graphics in Fireworks or Photoshop, I repeat myself all the time. In response to this for CSS, developers have created tools such as SASS and LESS. The elephant in the room, however, is graphic design. Accomplishing commonplace visual effects such as curved drop shadows and glossy buttons usually requires following a multi-step rote process every time they are needed.
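To make the contrast concrete, this is the kind of repetition a preprocessor eliminates, sketched here as a hypothetical JavaScript helper (boxShadow is my own invention, not part of SASS or LESS) that generates the vendor-prefixed declarations plain CSS makes you write out by hand, three times, for every element that needs a shadow:

```javascript
// Generate the repeated vendor-prefixed declarations from a single value,
// the way a SASS/LESS mixin would. (boxShadow is a hypothetical helper.)
function boxShadow(value) {
  return ['-webkit-', '-moz-', ''].map(function (prefix) {
    return prefix + 'box-shadow: ' + value + ';';
  }).join('\n');
}

// boxShadow('0 1px 2px #000') expands to the three declarations
// you would otherwise copy-paste into every rule that needs them.
```

Graphic design tools have no equivalent of this: the “mixin” for a glossy button lives only in a tutorial or in the designer’s memory.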

Poor Extensibility and Reusability

Perhaps causally related to the violation of DRY principles is the lack of ways to extend graphic design tools and reuse existing graphical designs.

In the case of software development, common design patterns and features can easily be modularized and re-used across projects and components. Existing frameworks are also designed with plugin architectures in mind. The prevalence of Rails extensions, for example, has made creating web applications with Ruby on Rails amazingly easy, far beyond what the original framework intended.

Graphic design tools are quite the opposite. One could imagine things like glossy buttons, advanced shadows, and other effects being easily scripted or packaged into custom filters, but it’s not so in reality. Despite tools like Photoshop and Fireworks having limited scripting support, the really useful plugins for them were all built years ago in native code. These tools are used by thousands of programmers and designers, but the ecosystem around extending and contributing to these products is almost non-existent. I have personally put immense amounts of effort towards scripting Adobe Fireworks, but this has simply made the limitations more apparent.

CSS3 has actually made great strides in this area, but it will never be a complete substitute for graphics design. I can imagine a future ecosystem consisting of a robust open source graphics editing program(s) accompanied by a plethora of plugins, filters and assets freely distributed and written in a modern scripting language.


Immutability of Finished Designs

Finished designs are often immutable and hard to change (and this has nothing to do with the good kind of immutability). Source .psd files often consist of layers of flattened pixels, the exact combination of actions which created the layer lost forever. In many cases, this starkly contrasts with the underlying software being designed, which is in a constant state of flux and improvement.

The software equivalent of this would be development tools that produce raw binaries without source code, with change only possible through a finite number of parameters. The natural solution would be graphics software which preserves the entire process by which an image was created. For this reason, I personally tend towards Adobe Fireworks, which is slightly better at this, dealing with items at the object level rather than the layer level.

Another solution would be to have the underlying source format of an image be a declarative graphics description language, e.g. Degrafa. SVG is theoretically nice but poorly implemented and monolithic; still, it might ultimately be the answer.

Lack of Version Control

Binary source files don’t mesh well with version control. Merging conflicts is nearly impossible. Viewing and understanding the history of a file is not easy. I have seen workflows where all the design assets are passed around in shared folders with no version control at all. In the developer world, this would be almost unheard of. This also translates to the web, where design assets are clumsily distributed and the techniques used to create an asset are usually lost.

Reliance on Proprietary Tools

Almost no one would dispute that Adobe Photoshop is the industry standard for graphic design, supplemented by Illustrator and sometimes Fireworks. Open source alternatives such as GIMP and Inkscape are slightly subpar and suffer from the same lack of innovation. This state of affairs has remained relatively unchanged for the last decade or more.

Not only does this restrict innovation, but it also restricts the platforms on which graphic design can be done. This prevents many developers and designers from using operating systems like Ubuntu as their primary OS. Moreover, I would argue that this has contributed to poor design in many OSS applications. Having witnessed open-source software transform the landscape of software development, one can only hope that the same will eventually happen to design.

Living Untethered

by Gordon. 2 Comments

For roughly two and a half months I have been without a mobile phone of any kind. Two months might not sound like much, but this is coming from a software developer who is entrenched in technology and has even built multiple Android and iOS apps. Before this, I always carried a phone and was in a constant state of sync. The event which prompted this experiment was the loss of my Nexus One during a trip to Las Vegas. The exact details aren’t important, but suffice it to say the trip was a success.

My hope throughout this was that I might come to some insight or achieve a revelation about the significance of always being connected. Perhaps depriving myself of something I had grown to take for granted, a powerful computer in my pocket, could inspire me as a (mobile) developer. Unfortunately, the best I could come up with is this: having a phone doesn’t really fucking matter. For people who work in front of computers, in this day and age, your phone is a luxury device, and not having one is no big deal.

Sure, I definitely missed out on a few candid photos of my friends, some of my amazing lunches went undocumented, and I could have benefited from Google Maps from time to time, but fundamentally nothing about my life changed for the better or for the worse. I am still wired to a computer the majority of the day, my brain is still synced to my GMail inbox, and aside from short periods of commute, the internet is still readily accessible if needed.

I was also hoping that I could write about how not having a phone reduces stress. The physical constraint of not being able to check your email for several hours a day does relax your attitude, but I still felt no perceptible difference in my stress level. Instead, for the rest of this post I will share some random tidbits gained from this experience:

Google Voice is the Shit

For $0.00 a month I have the same mobile phone number I had before, as well as unlimited calls and domestic texts. This is all done through Google Voice after porting my phone number. From the perspective of all of my contacts, I still have a phone and nothing has changed. Of course, I need access to a computer to return calls, but this is not a problem for me, especially with a MacBook Air slung over my shoulder for most of the day. It really is only a matter of time before mobile VoIP clients usurp voice and text plans entirely.

Moreover, using Google Voice for calls and texts is actually a lot better than the traditional phone counterparts. Voicemail transcription definitely helped screen recruiters. I am also doing myself a favor by typing out texts in an instant message-like interface rather than constantly using a touch keyboard. In retrospect, it is actually slightly amusing to think of myself responding to texts on my phone while sitting in front of a full keyboard, as I often did. The same goes for email. Even if you have a phone, it is worthwhile to go through the cumbersome porting process, effectively making your phone and computer interchangeable.

Touch Interfaces Are Sexy and Easily Forgotten

While without a phone, there wasn’t a single app that I longed to use over its desktop equivalent. Desktop apps aren’t quite as sexy, but they definitely work. I might be singing a different tune if I were really into the mobile gaming scene or consuming certain types of content, as that’s where I feel most of the innovation is taking place (even though Angry Birds is now in the browser).

Phones and computers are converging, and phones are starting to feel like shitty computers. (Although I really hope I eat these words when NFC becomes prevalent.) This is especially apparent when my MacBook Air is sitting next to my iPad 2 on my coffee table. It is only slightly larger, an order of magnitude more useful, and usually the first to be picked up when someone wants to browse the web. That said, just to be clear, I still believe that a properly done native iOS or Android app focused on consuming content can rival anything out there and can appeal to a wide(r) audience.

I Just Bought a Phone

I’m sure I slipped throughout this post and referred to not having a phone in the past tense. That is because yesterday I finally bought myself a replacement, an HTC Sensation, not because I needed it, but because I wanted it. I’m looking forward to being able to tether my Air again.

Windows 8 Isn’t That Bad

by Gordon. 8 Comments

There seems to be a large amount of backlash directed towards Windows 8 due to the fact that its new UI exists alongside the old Windows desktop. I must confess that I was slightly shocked myself to witness the context switch take place in the video, but shocked in a good way. Ironically, the reason why I think it’s good to have a Windows desktop alongside a touch UI is the same reasoning behind my switch from Windows to OS X in the first place.

I am typing this right now on my latest-generation MacBook Air. The reason I switched to OS X was that it was the best of both worlds. I could have a Unix shell and Adobe products, as well as use Xcode to develop iOS applications (which you can’t do on Windows, but that is a separate issue). I could do everything Windows could and more. Choosing between Windows 8 and iOS as a casual user is analogous.

I also have an iPad 2, and had an iPad before that. People, mostly family, are always very attracted to my iPad and frequently ask me if they should buy one to replace their aging laptop. My answer is always a resounding no. Were that to happen, they would inevitably call me and ask how to open Office documents, view Flash websites, and do all the other countless things an iPad is not suited for.

Windows 8 solves this problem. Sure, there will be an initial period where some apps are only available in desktop form. But at least there will be a less elegant way to do things that otherwise couldn’t be done at all, and that gap is low-hanging fruit for developers. If the Metro tiling interface takes off, consumers will prefer Metro-based applications and developers will build them. Similarly, Android’s additional OS-level features (at the expense of battery life, some might say) are one of the reasons why I prefer it over iOS.

Another argument against Windows 8 is that of market and developer confusion. There is no confusion: Apple has already set the precedent. Consumers know what they want, and developers know what to build. The capability is there, and the developer who builds what the consumer wants will win.

Based on WP7’s market performance, who knows what will happen. My argument here is not that Windows 8 is a winner; it is simply that the arguments against it are misguided. I, for one, would prefer to be able to do more rather than less. One thing that is certain, however, is that having more competitors and innovation in the mobile/touch landscape is a good thing.

What It’s Like to be Recruited

by Gordon. 21 Comments

252 Recruiter Emails

Roughly three months ago (in the beginning of March), for a variety of reasons, I decided to put my resume out there on the interwebs. Here I chronicle my experience being a software developer on some of the most popular and widely used job channels.


For some context, I was doing research on an idea I had (now GroupTalent) and was also willing to entertain flexible, interesting mobile projects. My resume included the following:

  1. B.S. in CS and Math
  2. SDE at Microsoft
  3. YC Founder (Team Apart S’08, now defunct)
  4. Misc. consulting
  5. Independent app development (iOS and Android)

My objective read:

Seeking freelance or short term contract iPhone and Android development positions.

I posted this resume on Monster and CareerBuilder. I had also previously created a profile on Stack Overflow Careers and GitHub Jobs. Additionally, and importantly, I had indicated that I would be willing to relocate.


To relate my experience, I will begin with some numbers and then move into a more anecdotal portrayal.

Over the course of the roughly three-month period after posting my resume, I diligently labeled all incoming recruiter emails in GMail. Thankfully, I also use Google Voice, and was easily able to identify and count calls and voicemails from recruiters. The numbers I am about to give exclude the numerous automatic emails sent from these sites; they all represent contact from actual people (or at least present themselves as such). The screenshot at the beginning of this post would suggest that I had received 252 emails, but that number is from when I began drafting this post roughly a week ago.

As I write this, I have received a total of 266 emails and 96 voicemails. This roughly equates to 12.7 emails and 4.3 voicemails per workday. There were also some additional calls that I actually answered or that didn’t result in a voicemail. My Monster.com profile was viewed 261 times and “saved” 37 times. My CareerBuilder.com profile showed up in 343 searches (presumably by employers) and was viewed 31 times. My profile on Stack Overflow Careers was viewed by employers a whopping 1 time and had 3 search hits. GitHub Jobs doesn’t appear to reveal any data of this kind.

The emails varied immensely in personalization and in adherence to what I was actually looking for. My CV’s objective of short-term Android and iPhone projects functioned as a mere leitmotif, or not at all. My overall impression was that many recruiters simply do blanket keyword searches for terms such as “java”. Interestingly enough, many recruiters reached out to me on the premise that they found my resume on sites such as Dice that I had never even created a profile on. It turns out that most recruiters do not even interface with the job sites directly, but instead use 3rd party software which crawls all the job boards for them.

Employers ranged from small startups to large corporations, with the average somewhere in between. They also included desirable A-list companies such as Amazon and Zynga. The split between local jobs and those requiring relocation was about half and half, with perhaps a few more on the relocation side.

Most recruiters were either headhunters or part of 3rd party staffing companies, but many were internal recruiters as well. For the first week, I actually answered all incoming calls, but this eventually became unmanageable. I used the opportunity to hear them out and also sometimes give them a reverse pitch on GroupTalent for feedback. Some recruiters were extremely savvy people who wanted to build a relationship with you. Others were pretty abrasive. My favorite conversation was with the recruiter who actually suggested that I take a job at a megacorp while I still could, since everything was going to be outsourced in the near future anyway.


According to Joel Spolsky, most good developers will never even be exposed to this situation since they will never be on the market. Combine this with the fact that everyone sucks at hiring, and you have an industry that is basically a crapshoot. I also wonder if companies realize that many of their candidates are acquired through pseudo-spam.

In the interest of full disclosure, I actually used Monster a few years ago and did wind up with an excellent consulting gig that was very flexible, but my experience was similarly noisy. I consider myself at least a decent developer and believe that good developers are on the market, or are at least willing to entertain new opportunities. I predict that in the coming years the demand for top talent will be even higher, and companies will need to resort to new ways to find and incentivize developers. While the experience I have presented here can vary, especially for new grads and developers traveling on reputation or word of mouth, my goal here was simply to give some perspective.

What is your experience being recruited?

Five Reasons Why I Use Android and Two Reasons Why I Develop for iOS

by Gordon. 23 Comments

Being both a mobile developer and an avid phone user, I have two somewhat different perspectives. As a user, over the last several years I have owned a multitude of mobile devices: G1, Nexus One, iPhone 3G, iPod Touch (4th gen), iPad, and iPad 2. As a developer, I have a combined 13 apps in the Android Market and App Store (all independently developed and released).

Why I Use Android

Despite the iPhone 4 having admittedly better hardware (damn, that retina display is nice), I much prefer Android devices. The reasons have everything to do with software:

1. Multitasking

“Multitasking” on iOS is a joke. I’m speaking right now from the perspective of a user, but trust me, I also know this from the shoes of a developer. Notifications are horribly presented in modal dialogs; in situations where I have a large number of notifications, all but the last one shown to me are usually lost. I also desperately long for an IM client on my iPad that I can naturally interact with while using other apps. No such app exists, since all apps are forced to go through the cumbersome notification system. On Android, as in a desktop operating system, applications can truly run in the background; on Android, IM can be almost indistinguishable from texting (which is coincidentally also better than on iOS).

2. Intents

Android is an intent-based operating system. What this means from a user’s perspective is a richer, more deeply integrated experience. If I am browsing the web and click on a link to a product on Amazon.com, the context will switch and the product will be opened in the Amazon app. On iOS, clicking that link would just result in the link being opened in the browser (oftentimes losing the context of the originating application). Android allows apps to have a deeper and more natural hook into the operating system and user experience. For example, in the coming years, when Google Voice finally gets a true VoIP client, it will be able to seamlessly replace the default calling application.
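Conceptually, intent resolution looks something like the following sketch (written in JavaScript purely for illustration; real Android intents are Java objects, declared in app manifests and resolved by the OS): apps register the kinds of requests they can handle, and the system routes each request to the best match.

```javascript
// A simplified, illustrative model of intent resolution:
// apps declare patterns they handle; the OS dispatches to the first match.
var handlers = [];

function register(pattern, app) {
  handlers.push({ pattern: pattern, app: app });
}

function dispatch(url) {
  // Android consults each app's declared intent filters;
  // here we just test the URL against each registered pattern.
  for (var i = 0; i < handlers.length; i++) {
    if (handlers[i].pattern.test(url)) return handlers[i].app;
  }
  return 'browser'; // fall back to the default handler
}
```

Register the (hypothetical) Amazon app for amazon.com links, and tapping such a link opens the app instead of the browser; anything unmatched falls through to the default.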

3. Back Button

The back button is a killer feature, and it is much more than just a physical button. The Android operating system is essentially stack-based. Continuing the example of clicking on an Amazon product link: after the Amazon app is opened, I can intuitively press the back button to return to the application from which I clicked the link. I cannot count the number of situations on iOS where I irretrievably lose my place inside an application by clicking on a link. Nor can I count the number of applications which pop open embedded browser dialogs when you click on a link as a hack around this. Imagine how ridiculous and unusable it would be if desktop applications all used an embedded browser as the norm. The closest iOS equivalent of the back button (that I know of) is double-tapping the home button (or four-finger swiping) to get a list of the most recent apps and then tapping the app you last used. Lots of users don’t even know you can do this.
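The stack behavior described above can be sketched as follows (a deliberate simplification in JavaScript; real Android manages tasks of activities with lifecycles, not strings):

```javascript
// Each context switch pushes the new screen onto the back stack;
// the back button simply pops and resumes whatever is now on top.
var backStack = [];

function open(activity) {
  backStack.push(activity);
}

function pressBack() {
  backStack.pop();
  // The previous activity resumes exactly where it left off.
  return backStack[backStack.length - 1];
}
```

Open the browser, tap an Amazon link (pushing the Amazon app), press back, and you are returned to the browser, which is precisely the context iOS loses.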

The menu button on Android is also very convenient (although not as vital) and saves lots of prime mobile screen real estate.

4. Apps

As a user, I never need to buy an application. Moreover, and surprisingly, there are many apps on Android that simply have no equivalent on iOS. If I want instant messaging, free apps exist; this is the status quo. Not so on iOS. There is also a GMail application which actually has an intuitive interface. I am shocked by tech-savvy people who use the iOS mail application with their GMail account in an Outlook-like fashion. Things like Wi-Fi and USB tethering are also built into the Android operating system.

5. Navigation

Newer versions of Android have a turn-by-turn navigation application by Google which uses data from Google Maps. Although some might consider this a smaller feature, it is hands down the best navigation application I have used and has rendered my Garmin obsolete. I use it all the time. There is no equivalent for iOS, even though some apps exist in the App Store with double-digit price tags.

Why I Develop for iOS

Despite learning mobile development on Android and preferring Android’s development framework and tools, I have shifted all of my app development to iOS (at least for first releases). One of my most recent games, Word Topple, is only available on the App Store. The reasons have nothing to do with software:

1. Revenue

iOS has a much more profitable app economy. Even though it is ridiculously hard to make a hit iOS app, an iOS application is much more lucrative than the same Android application with the same number of users. On iOS, users expect to have to pay for applications, and they do. Each user of an iOS application is also much more valuable than their Android counterpart.

To back this up with some data, I will share some numbers on one of my games, BeWorded (in the Android Market and in the App Store). Both versions are identical ports and have never been marketed; neither has been updated in months. The primary source of revenue for this app is AdMob advertising. Interestingly enough, both versions have had almost exactly the same number of impressions (~1.5 million). The iOS version has a CPM of $0.30, while the Android version has a CPM of $0.08, making the iOS version roughly 4x more profitable with an identical user base. Some of my other Android apps have even worse CPMs.
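For the curious, CPM is revenue per thousand impressions, so the figures above work out as follows (revenue() is just a hypothetical helper for the arithmetic):

```javascript
// CPM is ad revenue per thousand impressions, so total revenue is simply:
function revenue(impressions, cpm) {
  return (impressions / 1000) * cpm;
}

// Plugging in the numbers above (~1.5 million impressions on each platform):
var ios = revenue(1500000, 0.30);     // roughly $450
var android = revenue(1500000, 0.08); // roughly $120
// 450 / 120 = 3.75, i.e. roughly 4x more profitable.
```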

2. Game Framework Maturity

Let’s face it, most users play lots of games on their mobile devices. Games are huge. On iOS, there are a number of game frameworks with very active communities, my favorite being cocos2d. Android, on the other hand, suffers from a lack of mature game frameworks. I first started writing games on Android and have tried virtually all of the 2D frameworks I know of. I wanted a full 2D scene graph, and none of them were sufficient. I ultimately wound up using an incomplete port of cocos2d to Android which suffers considerably in performance and completeness, but is getting a lot better. This has made iOS a much better platform for developers and users with respect to games.

Thoughts on the Future

Android is a much more advanced and well-thought-out operating system than iOS. It is open. Its applications are free. Even its development tools are more modern (which I will go into in another post). Game frameworks will eventually catch up, and the communities are growing. The biggest leg up iOS has over Android at this point is hardware and aesthetics, and that gap is closing fast, if it isn’t already closed. Unlike the laptop market, other handset manufacturers like HTC and Samsung are producing amazing devices. The iPhone was clearly superior when I was on my G1. After upgrading to a Nexus One, my phone was superior to my friend’s iPhone 3GS, before the iPhone 4 was released. Next-generation phones like the soon-to-be-released HTC Sensation and Samsung Galaxy S2 are going to blow existing devices out of the water. The bar for applications on Android is also rising, and the truly good, sexy apps are floating to the top.

Although I currently still develop primarily for iOS, I expect that to change soon. All of the low-hanging iOS applications have long since been built, and more and more apps are becoming free to compete with the incumbents. I expect this to affect the profitability of iOS development and the expectations of users. Android is also gaining market share and truly has a better user experience. People are starting to openly prefer their Android phones to their friends’ iPhones. If I had to make a long-term wager on a mobile OS, all my money would be on Android.

Flex 4 CSS Namespaces: Annoying Migration Issues

by Gordon. 4 Comments

I started playing around with the beta of the Flex 4 SDK the other day. The purported features of the new version are very exciting to me, especially the support for advanced CSS selectors and the enhanced component skinning capabilities.

Given the backwards compatibility with Halo, my initial idea was to migrate one of my existing side projects from Flex 3 to Flex 4 and then incrementally switch over to Spark components as development progressed. In theory this sounded fairly easy, but it ultimately required a lot of overhaul to my CSS files and added a lot of bloat. This is due to a design pattern I favor which depends heavily on reusable custom components and CSS type selectors.

The design pattern is simple: I create custom reusable composite components which consist of several other components, and then I style those components in a separate CSS file using type selectors. For instance, I will have a component called UserView which, for simplicity, consists of a VBox containing Image and Label components. This would then be styled in an external CSS file, style.css, as follows:

UserView { .... }
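For reference, such a composite component might be sketched in MXML like this; the file name and data bindings are illustrative, not taken from the original project:

```xml
<!-- UserView.mxml: a hypothetical composite component consisting of a
     VBox containing an Image and a Label, styled via the UserView type
     selector in style.css -->
<mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Image id="avatar" source="{data.avatarUrl}"/>
    <mx:Label id="userName" text="{data.userName}"/>
</mx:VBox>
```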

This component is then reused in many places throughout the application. This worked great in Flex 3; however, because Flex 4 requires that all CSS type selectors be qualified with a namespace, it breaks in Flex 4, and none of the styles are applied. The workaround is to declare a namespace inside the CSS file:

@namespace "com.mypackage.views.*"; 
UserView { .... }

Now it works. The problem is that in large projects I normally have a multitude of these components spread across many different packages. Since the CSS namespace spec requires that all namespaces be declared at the top of the file, I end up with a huge block of namespace declarations, and every selector needs to be qualified. For example:

@namespace views "com.mypackage.views.*"; 
@namespace forms "com.mypackage.forms.*";
views|UserView { ... }
forms|Login { ... }

This is frustrating not only because it bloats the code, but because it decreases its spatial locality, forcing me to jump around the file to mentally resolve namespaces.

If my classes lived inside a Flex library, I could give them all the same namespace, e.g. http://ns.mycompany.com/2009, but I cannot find a convenient way to do this for classes that are part of a normal Flex project rather than pulled in from an external library.
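In the library case, the shared namespace comes from a component manifest passed to the library compiler. A minimal sketch, with the file name and class list assumed purely for illustration:

```xml
<?xml version="1.0"?>
<!-- manifest.xml (illustrative): maps a single URI namespace to the
     library's components, regardless of their ActionScript packages -->
<componentPackage>
    <component id="UserView" class="com.mypackage.views.UserView"/>
    <component id="Login" class="com.mypackage.forms.Login"/>
</componentPackage>
```

If I recall the compc options correctly, the library is then built with something like `-namespace http://ns.mycompany.com/2009 manifest.xml -include-namespaces http://ns.mycompany.com/2009`, after which a single `@namespace "http://ns.mycompany.com/2009";` declaration in the CSS covers every component in the manifest.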

Flex 4 is still great, and by no means is this a deal breaker. Some people have overstated it as an EPIC FAIL, while others have oversimplified it as a one-line fix (which is only true if you are styling components from a single namespace). It would be nice if there were something like the following:

@namespace "com.mypackage.**";

Or at the very least:

"com.mypackage.views.*"|UserView { ... }

Either would avoid the requirement of declaring a long list of namespaces.

Google Chrome: Why I’m Not Excited

by Gordon. 13 Comments

There is currently a massive amount of buzz around Google’s recently launched browser, Google Chrome. And why not? It’s a new WebKit-based browser with a clean, minimalist UI and a purportedly super-fast V8 Javascript engine. To top it off, each webpage executes in its own process, and it’s made by Google! It’s the next step in the battle to legitimize web-based applications as viable replacements for their desktop counterparts.

Why am I not excited? Adoption rates aside, the big reason is that Chrome’s V8 is not the kind of Javascript engine I would like to see in the next generation of web browsers. Don’t get me wrong, I fully believe V8 will be one of the fastest Javascript engines out there; the problem lies in how it achieves that speed. V8 compiles Javascript source code directly to machine code. There is no intermediate bytecode format! This means that Chrome is designed around Javascript as the be-all and end-all language. Chrome isn’t just bad news for Adobe; it’s bad news for developers in general.

In my ideal world, the next-generation web browser will have a generic, standardized bytecode format to which Javascript is compiled, but one that also leaves the door open for compilers from other languages, à la the good old JVM. Imagine if you could write the client-side portion of a website in your language of choice, e.g. Ruby, Python, or even Scala (my personal favorite/hobby language). I have high hopes that something like Tamarin, Mozilla and Adobe’s collaborative effort, will turn into this.

As the client-side portion of websites becomes increasingly complex, it also scares me to think that everyone will be monogamously tied to Javascript. Without letting this post turn into a language rant, I am obliged to say that Javascript is in desperate need of better tooling and debugging support. I also find Javascript hard to use and maintain on large, intensive projects, but maybe that’s just me. I realize the last few sentences sound like cookie-cutter arguments against dynamic languages in general, so I’ll save that for another post.

Lately, as an alternative to Javascript, I have been doing most of my client-side programming in Flash using AS3 and the Flex SDK, not so much for the additional features as for the streamlined development process. AS3 is based on ECMAScript, as is Javascript, but its support for namespaces and optional static typing, along with better IDE support (Flex Builder), makes me much more productive. Let’s also face it: all the major web browsers are totally dependent on Flash, if only for serving video, and that’s not going to change any time soon. Google knows this and bundles the Flash plug-in with Chrome automatically. Ironically, it is Flash and other plug-ins that break Chrome’s pristine process model, since the grandfathered-in plug-in architecture requires less restricted access.

On the bright side, however, I have been using Chrome as my default browser for the last couple of days and have had a good experience overall. In terms of UI, it is a nice incremental improvement over other browsers. Merging search into the address bar more intelligently is extremely convenient and long overdue. I also really like having the tabs at the top of the page.

Flex ScrollPolicy.AUTO Not Good Enough

by Gordon. 21 Comments

One of my biggest gripes so far with the Flex scrolling system is the automatic scrolling policy: a container’s viewing area is not changed by the introduction of scroll bars. This is especially troublesome in the vertical case and seems like a pretty shitty way of doing things in general. For example, if the vertical size of the children grows beyond the height of their container, a vertical scroll bar is created and displayed, but the children are not resized. Consequently, if some of the children have variable widths (e.g. width=”100%”), they may be overlapped by the vertical scroll bar, causing a horizontal scroll bar to be displayed. With ScrollPolicy.ON, on the other hand, the children are resized to take the scroll bar into account, at the price of the scroll bar always being displayed, even when it is not needed. Yuck.

I have heard several justifications for this. One is that it prevents a cascading resize effect in which all the descendants of the container are resized, each one introducing a vertical scroll bar. Another is that it is faster, since it requires only a single pass over the children, whereas doing it the correct way would require sizing the children, checking whether scroll bars are needed, and then resizing the children when scroll bars are introduced. In both cases the justification is entirely dwarfed by the ugliness of the current behavior.

Fortunately there is a fix: create a custom container and add the following override of the validateDisplayList method:
import mx.core.ScrollPolicy;

public override function validateDisplayList():void
{
    // If the scroll bar is present but no longer needed, fall back to AUTO
    // so the framework can remove it.
    if (verticalScrollBar != null && verticalScrollBar.maxScrollPosition == 0
        && verticalScrollPolicy != ScrollPolicy.AUTO)
    {
        verticalScrollPolicy = ScrollPolicy.AUTO;
    }
    // Otherwise, lock the policy to ON so the children are sized with the
    // scroll bar's width taken into account.
    else if (verticalScrollBar != null)
    {
        verticalScrollPolicy = ScrollPolicy.ON;
    }
    super.validateDisplayList();
}
This is a hack that toggles between ScrollPolicy.AUTO and ScrollPolicy.ON as needed to get the desired behavior. It comes at the price of clobbering the verticalScrollPolicy property. Hopefully this behavior will be more tightly integrated into future versions of Flex (maybe another ScrollPolicy value is in order?).
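For completeness, using such a container from MXML might look like the sketch below; the class name AutoScrollVBox and its package are hypothetical, standing in for whatever VBox subclass hosts the override:

```xml
<!-- Hypothetical usage: AutoScrollVBox is an assumed VBox subclass
     containing the validateDisplayList override above -->
<containers:AutoScrollVBox xmlns:mx="http://www.adobe.com/2006/mxml"
                           xmlns:containers="com.example.containers.*"
                           width="100%" height="300">
    <!-- 100%-wide children get resized when the vertical scroll bar
         appears, so no spurious horizontal scroll bar shows up -->
    <mx:Label width="100%" text="Resizes with the scroll bar"/>
</containers:AutoScrollVBox>
```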