Hartley Brody

When Will We Stop Calling it "Mobile"?

“Mobile” is easily one of the biggest buzzwords of the decade, and everyone is starting to grok it – from marketers and web developers to your parents and grandparents. But what do we really mean when we talk about “mobile” in a development context?

Usually, “mobile development” refers to developing native applications in Objective-C or Java that take advantage of device-specific APIs. In that sense, it might be more accurate to call it “iOS development” or “Android development,” since it’s really just development for a particular platform.

But sometimes the term “mobile” is used in a web context. What does it mean then? Usually, a few things:

  • Small screen size: no space can be wasted.
  • Constrained bandwidth: don't load anything that you don't need.
  • Slower processor: don't do any heavy rendering in the client.

However, these are design qualities that most networked applications should strive for.
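To make the bandwidth point concrete, here’s a minimal sketch of “don’t load anything you don’t need” on the web: check the connection before fetching heavy assets. It assumes the non-standard Network Information API (navigator.connection), which not every browser exposes, and the helper name shouldLoadHeavyAssets is purely illustrative.

```typescript
// A sketch, not a spec: skip heavy assets on constrained connections.
// Assumes the non-standard Network Information API (navigator.connection),
// so we feature-detect it and fall back to loading everything.

interface NetworkInformationLike {
  saveData?: boolean;                              // user opted into data saving
  effectiveType?: "slow-2g" | "2g" | "3g" | "4g";  // rough connection class
}

function shouldLoadHeavyAssets(): boolean {
  const connection = (navigator as Navigator & {
    connection?: NetworkInformationLike;
  }).connection;

  if (!connection) {
    return true; // API unavailable: assume a capable connection
  }
  if (connection.saveData) {
    return false; // respect an explicit request for reduced data usage
  }
  // Treat 2G-class connections as constrained bandwidth
  return connection.effectiveType !== "slow-2g" && connection.effectiveType !== "2g";
}

if (shouldLoadHeavyAssets()) {
  // e.g. swap in high-resolution images or prefetch the next page
}
```

The same check could gate video autoplay or oversized analytics payloads; the principle holds whether the screen is in a pocket or on a desk.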

With the rise of high-density displays and powerful mobile processors, there are undoubtedly many “mobile” devices with larger screens, faster networks and beefier processors than many desktops from a decade ago. Today, your cellphone has more computing power than NASA did when it put a man on the moon in 1969.

“Mobile” is relative.

Transitional Technology

When new technologies are born, almost by definition they’re clunky and esoteric – out of reach for the average consumer. But as a technology evolves and becomes simpler, it often goes through a “transitional” period: not totally intuitive, but easy enough to reach a mass market, before settling into its “final” state, where further innovation has diminishing returns.

A great example of a transitional technology is the mouse.

When computers were first born, there were no graphical user interfaces. Any interaction between a human and a computer took place on the command line using special syntax and instructions. This made early computers far too difficult for the average consumer to use.

But then in the early 1980s, Xerox, Apple and Atari pioneered the first “graphical user interfaces,” which introduced the concepts of windows, icons, menus and pointing devices.

And so, the mouse was born: a simple way for anyone to move the pointer and interact with the elements on the screen. This made it much easier for the technology to come to market, and businesses began purchasing workstations for their employees.

But the mouse was never really intuitive. If you moved it three inches across your desk, how far would the cursor move on the screen? You had to watch it. And if you held the mouse slightly at an angle, moving it to the right might actually send the cursor moving diagonally. You were really just pushing a small device around your desk and making sure the cursor stayed in sync with it on the screen.

But now, we’re starting to see the rise of gesture recognition and touch screen technology. You’re actually interacting with the elements as you see them on the screen, and they respond to your touch. Graphical user interfaces have become far more intuitive. A mouse no longer makes any sense, and the device will soon become obsolete for the average consumer.

I believe that desktops and laptops will eventually come to be seen as a transitional technology. Large, power-hungry devices with physical keyboards won’t make any sense as tablets and phones become more powerful. As mobile networks improve, you won’t need to be tethered to Ethernet or even your home or coffee shop’s WiFi hotspot.

Smaller screens that fit in a pocket or bag will become the default form factor for any new technology (if they’re not already). It won’t make sense for the average consumer to own a laptop or desktop, and those machines will eventually become obsolete.

Soon, all computing will be mobile, and we won’t need to call it that anymore.

Discussion on Hacker News.