#SeachangeK12: The Evolution of the User Interface

Posted by Matt Berringer on June 2, 2015


It once took a computer hours or even days to complete a simple task. Now, computer software can predict what you want to type from just the first letter. And the way we interact with today’s version of this tool, which has become integral to the way we do business, support education, and communicate with others, is both oddly similar to and very different from the way we interacted with its earliest predecessors.

Our technological world has developed rapidly over the past century. The ancestor of the computer keyboard, the typewriter, was invented in the 1860s and quickly became an invaluable office tool for data recording and professional communication.

By the 1980s, word processors and personal computers began to displace typewriters in the Western world and were used for personal as well as professional communication. Throughout the vast technological advances of the modern era, the keyboard has remained a vital tool for facilitating interaction between humans and computers.

Today, even the most cutting-edge smartphones and tablets feature adapted keyboard options for data entry and communication.

From the Click of a Mouse to the Tap of a Screen
The first pointing device prototypes began popping up in the 1960s and were dubbed “the mouse” because the cord resembled the tail of a small rodent. Early models with rolling balls often required the use of a mouse pad to create enough friction for optimal performance.

Most subsequent optical and laser models have advanced beyond the need for a pad, and cordless versions now connect to computers via Bluetooth or a small wireless USB receiver, further streamlining the point-and-click process.

[Infographic: The Evolution of the User Interface]

The mouse is still used with many of today’s desktop and laptop computer models. However, the invention of multi-touch technology, which dates back to 1982 but wasn’t commercialized until much later, made it possible for users to interact directly with the display screen on certain devices. The first-generation iPhone, released in 2007, was built and designed around this multi-touch technology and a virtual keyboard. Today, a variety of game consoles, personal computers, tablet computers, and smartphones utilize these features.

Like external input devices, the visual space in which humans and computers interact has also changed drastically, evolving from rudimentary to intuitive over the lifetime of the computer.

From the 1940s through the 1960s, computing power was scarce and expensive and, as a result, no emphasis was placed on building extravagant user interfaces. In fact, humans didn’t truly interact in real time with these early machines because tasks could take hours or days to complete. The process of inputting data into them was often tedious and error-prone.

By the late 1960s, computers were gaining speed and could now complete requests in drastically reduced time, allowing users to explore and interact with the machines more than ever before. The burden was still on the user, though, to invest time in learning to communicate with the computer to get the desired results.

That changed when graphical user interfaces were popularized by the Apple Macintosh in 1984 and Microsoft Windows 1.0 in 1985, and basic design standards, such as pull-down menus at the top of the screen, were established. These interfaces were easier to understand and more intuitive to use, creating familiarity and consistency for users who switched from one computer’s interface to another.

Many of the most recent advances in computer technology center on giving the user the simplest, easiest, and most fulfilling experience possible. In some ways, our computers, tablets, and smartphones are trying to read our minds by suggesting and correcting our communications in real time. First came spell check, alerting the user to potential misspellings within files or emails. Later, autocorrect began to instantly correct these spelling mistakes, as well as other formatting errors, such as capitalization. The user can also personalize the replacement list, making it possible to type in an individualized shorthand.
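To illustrate the replacement-list idea in rough terms, here is a minimal sketch (in TypeScript, with made-up entries) of how a map from shorthand or common typos to intended text might be applied; it is not how any particular product implements autocorrect.

```typescript
// Minimal sketch of a replacement-list pass, with made-up entries.
// Real autocorrect is far more sophisticated, but the core idea is a
// user-editable map from shorthand or common typos to intended text.
const replacements: Record<string, string> = {
  teh: "the",
  recieve: "receive",
  omw: "on my way", // personalized shorthand
};

function autocorrect(text: string): string {
  return text
    .split(" ")
    .map((word) => replacements[word.toLowerCase()] ?? word)
    .join(" ");
}

console.log(autocorrect("I recieve teh files")); // "I receive the files"
```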

Next, autocomplete, or word completion, began predicting the rest of a word a user is typing based on universally common words or searches, as well as the learned patterns or words an individual uses frequently. Many of these algorithms even learn new words after a user has written them a few times.

Because people read faster than they type, autocomplete can save users time. Web browsers use autocomplete to suggest websites a user regularly visits. Email programs use it to fill in the intended recipient’s address. Search engines can almost instantly autocomplete a user’s query with one or more suggested popular searches.
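As a rough sketch of the prediction idea, the snippet below (TypeScript, with an invented word-frequency list) suggests completions by matching a typed prefix against the words used most often; actual autocomplete systems rely on far more elaborate models.

```typescript
// Prefix-based suggestion sketch; the frequency map is an invented
// stand-in for the words a user (or everyone) types most often.
type FrequencyMap = Record<string, number>;

const wordFrequency: FrequencyMap = {
  interface: 42,
  internet: 35,
  input: 18,
  infographic: 7,
};

function suggest(prefix: string, words: FrequencyMap, limit = 3): string[] {
  const p = prefix.toLowerCase();
  return Object.keys(words)
    .filter((word) => word.startsWith(p)) // keep only words matching the prefix
    .sort((a, b) => words[b] - words[a])  // most frequently used first
    .slice(0, limit);
}

console.log(suggest("in", wordFrequency)); // ["interface", "internet", "input"]
```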

Similarly, data validation features in certain software programs help guide the user to input data, such as phone numbers or birth dates, cleanly and correctly. For example, a program might allow only certain characters in a given field, or it might check for consistency among related inputs. In these ways, the computer has become nearly as active as its users.
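Two hypothetical checks of that kind are sketched below, again in TypeScript; the field rules are illustrative examples rather than any program’s actual validation logic.

```typescript
// Illustrative validation checks; the rules here are examples only.

// Allow only digits, spaces, parentheses, and dashes in a phone number.
function isValidPhone(phone: string): boolean {
  return /^[\d\s()\-]{7,15}$/.test(phone);
}

// Consistency check: a birth date must parse and must not be in the future.
function isValidBirthDate(value: string): boolean {
  const date = new Date(value);
  return !Number.isNaN(date.getTime()) && date <= new Date();
}

console.log(isValidPhone("(555) 123-4567")); // true
console.log(isValidPhone("555-ABCD"));       // false
console.log(isValidBirthDate("2090-01-01")); // false
```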

As the Internet and access to it have grown, so has the desire to access its information on a variety of devices. However, until recently, computing power and broadband access were limited on mobile devices. This led designers to craft separate mobile sites tailored specifically to the needs of mobile users.

Perhaps one of the most exciting elements of the modern user’s experience is the recent push toward responsive design, which aims to craft sites that display well across a wide range of devices. A responsively designed site adapts its layout to a variety of computer, tablet, or smartphone screens using fluid, proportional grids and flexible images. Responsive design has the potential to replace the need for separate mobile versions of websites or site-specific apps.
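In practice, responsive layouts are usually expressed in CSS with media queries, percentage-based grids, and flexible images. Purely as an illustration of the underlying idea, the TypeScript sketch below uses the standard window.matchMedia API to pick a column count by viewport width; the breakpoints and element id are invented.

```typescript
// Rough sketch of the responsive idea in script form. Production sites
// normally do this in CSS (media queries, fluid grids, max-width: 100%
// images); the breakpoints and "content" id here are invented examples.
function applyLayout(): void {
  const container = document.getElementById("content");
  if (!container) return;

  if (window.matchMedia("(max-width: 600px)").matches) {
    container.dataset.columns = "1"; // phone: single column
  } else if (window.matchMedia("(max-width: 1024px)").matches) {
    container.dataset.columns = "2"; // tablet: two columns
  } else {
    container.dataset.columns = "3"; // desktop: three columns
  }
}

window.addEventListener("resize", applyLayout);
applyLayout();
```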

More Technology of Tomorrow
In the future, computers might not just try to predict or correct our technological experience; it’s possible they will also help augment the world around us.

Virtual reality and 3D technology have the potential to immerse us in vibrant, lifelike environments for work and play. Military trainees use head-mounted displays to practice and prepare for a variety of potential combat scenarios. Doctors and nurses can utilize the technology to practice a wide range of treatment techniques. And of course, virtual reality technology can change the way we play video games or watch movies and television.

In addition, augmented reality technology has the potential to layer computer-generated sensory input, such as images, graphics, sounds, or GPS data, over our view of the world. It’s possible that further advances in this technology will enable information about the world around us to become interactive and digitally manipulable.

And in the same way that touch-screen technology gave us greater, more direct control over our digital lives, motion control has the potential to take us one step further. Companies like Leap Motion are laying the groundwork now for products that users can manipulate with simple motions of their hands.

The wave of the technological future could see us creating, exploring, playing, and communicating without ever touching a thing.

Want to learn more about the sea change coming to student information management? Visit our website at THIS LINK.

Topics: #SeachangeK12