Future computing interfaces

Computing has changed the world as we know it. From its origins, the computer has undergone improvement after improvement in the service of humankind. From helping America tabulate its 1890 census to becoming a game-changer in WWII, computing has come a long way. Some credit Babbage, others credit Turing; in truth, every genius who contributed along the way is responsible for the sophisticated machinery we use today. From laptops to desktops, tablets to mobile phones, ATMs to movie kiosks, everything is a computer.

When the computer was first built, however, little attention was paid to the input methods through which we communicate with the machine. As time moved forward, the evolution of the machinery introduced a variety of input devices to choose from. Now that we simply glide our fingers over touchscreens or talk directly to our devices, it is easy to forget how far we have come. From punch cards to motion detection, let us trace where we came from and where we are going.

One punch at a time:

In the early 1800s, keypunches and punch cards became popular methods of entering, storing and retrieving data. Switches and dials are considered earlier ways of feeding information to a computing device, but punch cards, like the cards that drove the Jacquard loom, found consumers and gained widespread usage. This groundbreaking input method, as mentioned before, helped compute the USA’s 1890 census, using keypunch cards designed by Herman Hollerith, with columns assigned to the census questions. The company Hollerith went on to found carried his legacy forward and later became part of IBM, whose standard punch card eventually settled on 80 columns.
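To picture how a column-per-field card encodes a record, here is a minimal, hypothetical Python sketch: it treats each card as an 80-character line and pulls answers out by column position. The field layout is invented purely for illustration; real Hollerith cards encoded data as punched hole positions, not text.

```python
# Hypothetical sketch: reading fixed-width, 80-column card images.
# Real punch cards encoded data as holes per column; here each card
# is simulated as an 80-character text line.

FIELDS = {            # invented layout: (start, end) columns, 0-based
    "state": (0, 2),
    "age": (2, 5),
    "sex": (5, 6),
}

def decode_card(card: str) -> dict:
    """Extract one answer per assigned column range."""
    card = card.ljust(80)          # pad short lines to a full card
    return {name: card[a:b].strip() for name, (a, b) in FIELDS.items()}

print(decode_card("NY042M"))       # {'state': 'NY', 'age': '042', 'sex': 'M'}
```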

The key to typing:

The keyboard, largely an evolution of the keypunch, made for a humble input device that we use even today! Correcting a mistake became a simple matter of re-keying a statement, a huge advantage over discarding a card and punching a fresh one into the system. The initial design of the keyboard inherited the QWERTY layout from typewriters. We still depend heavily on a keyboard, physical or digital, to type words, texts, tweets or even a full-fledged thesis. Typing everything out, however, is a tedious and sometimes taxing feat.

A popular rodent:

Bill English and Douglas Engelbart built the first mouse prototype in the 1960s at the Stanford Research Institute labs. The mouse grew out of an ensemble of rolling parts, wheels in the earliest design and later a ball. As they rolled, the movement generated signals for the device to detect, and each contact fed the computer the information it needed to plot the pointer’s position on the display.
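The core idea, turning rolling contact into a screen position, is easy to sketch. Below is a minimal, hypothetical Python model: each movement report carries X and Y deltas, as the rolling parts would generate them, and the computer integrates those deltas into a pointer position clamped to the display bounds.

```python
# Minimal model of pointer tracking: integrate movement deltas
# (one signal per axis, as the rolling contacts would produce)
# into an on-screen position, clamped to the display bounds.

class Pointer:
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.x, self.y = width // 2, height // 2   # start mid-screen

    def move(self, dx: int, dy: int) -> tuple[int, int]:
        """Apply one movement report and return the new position."""
        self.x = max(0, min(self.width - 1, self.x + dx))
        self.y = max(0, min(self.height - 1, self.y + dy))
        return self.x, self.y

p = Pointer(1920, 1080)
for dx, dy in [(5, 0), (0, -3), (-2, 1)]:   # simulated mouse reports
    print(p.move(dx, dy))
```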

It was many years later that the “mouse” scurried around the globe and became a widespread phenomenon. Optical mice grew affordable, most of them sporting a button or three, and further enhancements brought the scroll wheel. The real leap, though, came when the mouse lost its tail, going wireless, and even squeezed itself into the trackpads we use in laptops today.

The touch era:

It is surprising to learn that touch interfaces were in development well before the mouse was adopted as an input method; in the 1950s and 60s, such devices were already being used for graphical data input. Around 1993, a notebook-sized touchscreen device that encouraged the use of a stylus entered the market, though the company behind it later shut down. The stylus was soon trumped by highly sensitive multi-touch screens that detected ‘pinch and zoom’ gestures, and before long, long-press, press-and-drag, and swipe-to-scroll became common practice. As the new millennium arrived, touch devices became smaller and more affordable, fitting into gaming consoles, smartphones and tablets.
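Pinch-to-zoom itself reduces to simple geometry: track the distance between two touch points and scale the content by how much that distance changes. Here is a minimal, hypothetical Python sketch; the coordinates are invented for illustration.

```python
import math

def distance(p1: tuple[float, float], p2: tuple[float, float]) -> float:
    """Straight-line distance between two touch points."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def pinch_scale(start: tuple, current: tuple) -> float:
    """Zoom factor: ratio of the current finger spread to the initial one.
    start/current are pairs of (x, y) touch points."""
    return distance(*current) / distance(*start)

# Fingers move apart: spread grows from 100px to 150px -> zoom in 1.5x
print(pinch_scale(((100, 100), (200, 100)), ((80, 100), (230, 100))))
```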

Recent developments:

Voice-based input, in the form of voice recognition and voice commands, has become an important way of communicating with the machine; with Siri, Alexa and Cortana, we are finally tapping potential the computer always had. Similarly, virtual reality looks like the next big interface breakthrough, standing out for the novel ways it lets us interact with computers. Alongside virtual reality, there are several innovations that can sense hand gestures as well, one of which we will address shortly.

Current breakthroughs:

Fariborz Ghadar, founding director of Penn State’s Center for Global Business Studies, says: “[The mouse] goes back 50 years, so it’s getting pretty old.” The same holds true for several other orthodox input devices that have stayed in the market far too long, and, at the rate science is progressing, for touchscreens as well. We will return to this in the next section.

Meanwhile, Human-Computer Interaction (HCI) technology has arrived and is here to stay, and stories about this method of interaction between humans and computers keep making the news. Unlike its predecessors, it does away with the need for external devices such as the keyboard, the mouse and even the display. HCI-enabled devices, which combine sensors with machine intelligence, outdo a pile of bulky peripherals with their tangled cords and inefficient power usage. Users will be able to communicate with computers through intuitive actions, much like their everyday human behavior, which the computer will understand: a natural user interface, or NUI. We have all interacted with computers through interfaces that let us type, click, speak and touch; Cybernetyx, a German company and a leading HCI and NUI solution provider, is now busy enabling the computer to see!

The end of touch:

So what new technologies have outgrown the past and are populating the market, you ask? Within the next few years, clunky input devices and other such paraphernalia will retire to basements and closets. One may point to virtual reality or voice recognition; Fariborz Ghadar emphasizes that “the next logical thing is for the mouse to disappear and to have a voice-activated computer.” One can even argue that before voice recognition finds its footing through proper training and becomes a household phenomenon, touchscreens as we know them will be overthrown by newer technology.

Now let’s look a little further into the future. Futuristic input methods are already rolling out, and Cybernetyx is working on a conversational UI. This interface does not merely perform tasks by transcribing your words; it weighs decisions from a human’s perspective. A computer that understands your context: not bad for an ambitious HCI technology, right?
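What “understands your context” might mean in code: the interface keeps a short conversational memory, so a follow-up command like “print it” resolves against the last thing you mentioned. The Python sketch below is purely illustrative and not based on Cybernetyx’s actual implementation.

```python
# Toy conversational UI: a context store lets follow-up commands
# resolve pronouns against the most recently mentioned object.
# Purely illustrative; not based on any real product's design.

class ConversationalUI:
    def __init__(self):
        self.last_object = None          # short-term conversational memory

    def handle(self, utterance: str) -> str:
        words = utterance.lower().split()
        verb, target = words[0], " ".join(words[1:])
        if target == "it":               # resolve pronoun from context
            if self.last_object is None:
                return "What do you mean by 'it'?"
            target = self.last_object
        self.last_object = target        # remember for the next turn
        return f"{verb} -> {target}"

ui = ConversationalUI()
print(ui.handle("open report.pdf"))      # open -> report.pdf
print(ui.handle("print it"))             # print -> report.pdf
```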

If that’s not awe-inspiring enough, Cybernetyx is also developing neural interfaces for computers, like Neura. With Neura, users can perform tasks with just a thought! The computer needs no physical gesture; you merely think, and it executes the desired task. And if you’re impressed, now is the time to talk about multi-modal UI, which lets you interact with your devices through any of the latest input methods: speech or gestures.
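Under the hood, a multi-modal UI boils down to many input channels feeding one command layer. Here is a minimal, hypothetical Python sketch, with command names and mappings invented for illustration: speech phrases and gesture names map to the same handlers, so the user can pick whichever modality is convenient.

```python
# Minimal multi-modal dispatch: speech and gestures are normalized
# into the same command vocabulary and share one set of handlers.
# Command names and mappings are invented for illustration.

ACTIONS = {
    "next_page": lambda: print("advancing one page"),
    "zoom_in":   lambda: print("zooming in"),
}

SPEECH = {"next page": "next_page", "zoom in": "zoom_in"}
GESTURE = {"swipe_left": "next_page", "pinch_out": "zoom_in"}

def on_input(modality: str, value: str) -> None:
    """Route an event from any modality to the shared command handler."""
    command = (SPEECH if modality == "speech" else GESTURE).get(value)
    if command:
        ACTIONS[command]()
    else:
        print(f"unrecognized {modality} input: {value}")

on_input("speech", "next page")     # advancing one page
on_input("gesture", "pinch_out")    # zooming in
```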

Within a few years, computer users will grow accustomed to a world no longer dominated by keyboards and touchscreens. Meanwhile, technologies that are in their nascent stages but immensely promising, whether voice recognition, neural interfaces or multi-modal interfaces, will gain accuracy, mature, and become affordable to all.