Moving toward computing at the speed of thought

“Today, gesture-based interactions, using multitouch pads and touchscreens, and exploration of virtual 3D spaces allow us to interact with digital devices in ways very similar to how we interact with physical objects. This newly immersive world not only is open to more people to experience; it also allows almost anyone to exercise their own creativity and innovative tendencies.”

So says Frances Van Scoy, Associate Professor of Computer Science and Electrical Engineering at West Virginia University. Read on for her view of the ever-changing world of computing.

The first computers cost millions of dollars and were locked inside rooms equipped with special electrical circuits and air conditioning. The only people who could use them were those trained to write programs in that specific computer’s language. Today, gesture-based interactions, using multitouch pads and touchscreens, and exploration of virtual 3D spaces allow us to interact with digital devices in ways very similar to how we interact with physical objects.

No longer are these capabilities dependent on being a math whiz or a coding expert: Mozilla’s “A-Frame” is making the task of building complex virtual reality models much easier for programmers. And Google’s “Tilt Brush” software allows people to build and edit 3D worlds without any programming skills at all.

My own research aims to develop the next phase of human-computer interaction. We are monitoring people’s brain activity in real time and recognizing specific thoughts (of “tree” versus “dog,” or of a particular pizza topping). It will be yet another step in the historical progression that has brought technology to the masses – and will widen its use even more in the coming years.

Reducing the expertise needed

From those early computers dependent on machine-specific programming languages, the first major improvement allowing more people to use computers was the development of the Fortran programming language. It expanded the range of programmers to scientists and engineers who were comfortable with mathematical expressions. This was the era of punch cards, when programs were written by punching holes in cardstock, and output had no graphics – only keyboard characters.

By the late 1960s, mechanical plotters let programmers draw simple pictures by telling a computer to raise or lower a pen and move it a certain distance horizontally or vertically on a piece of paper. The commands and graphics were simple, but even drawing a basic curve required trigonometry, to specify the series of very small horizontal and vertical steps that would look like a curve once finished.
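The idea fits in a few lines. In the Python sketch below, pen_up, pen_down and move_to are invented stand-ins for machine-specific plotter commands; the point is that a “curve” is really a chain of tiny straight segments:

# A curve on a pen plotter: many tiny straight segments.
# pen_up, pen_down and move_to are hypothetical stand-ins for
# machine-specific plotter commands.
import math

def plot_sine(move_to, pen_down, pen_up, steps=8):
    pen_up()
    move_to(0.0, 0.0)
    pen_down()
    for i in range(1, steps + 1):
        x = 2 * math.pi * i / steps
        move_to(x, math.sin(x))   # each call draws one short segment
    pen_up()

# A stub "plotter" that just prints the commands it receives:
plot_sine(
    move_to=lambda x, y: print(f"MOVE {x:.2f} {y:.2f}"),
    pen_down=lambda: print("PEN DOWN"),
    pen_up=lambda: print("PEN UP"),
)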

The 1980s introduced what has become the familiar windows, icons and mouse interface. That gave nonprogrammers a much easier time creating images – so much so that many comic strip authors and artists stopped drawing in ink and began working with computer tablets. Animated films went digital, as programmers developed sophisticated proprietary tools for use by animators.

Simpler tools became commercially available for consumers. In the early 1990s the OpenGL library allowed programmers to build 2D and 3D digital models and add color, movement and interaction to these models.
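For readers who never saw it, the flavor of that early “immediate mode” OpenGL style is easy to show. The sketch below uses the PyOpenGL bindings (my substitution for illustration; the work of that era was typically done in C) to describe a single colored triangle vertex by vertex:

# Minimal sketch of 1990s-style "immediate mode" OpenGL, via the
# PyOpenGL bindings: one colored triangle, described vertex by vertex.
from OpenGL.GL import (GL_COLOR_BUFFER_BIT, GL_TRIANGLES, glBegin,
                       glClear, glColor3f, glEnd, glFlush, glVertex2f)
from OpenGL.GLUT import (GLUT_RGB, GLUT_SINGLE, glutCreateWindow,
                         glutDisplayFunc, glutInit,
                         glutInitDisplayMode, glutMainLoop)

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_TRIANGLES)           # three corners, three colors
    glColor3f(1.0, 0.0, 0.0); glVertex2f(-0.5, -0.5)
    glColor3f(0.0, 1.0, 0.0); glVertex2f(0.5, -0.5)
    glColor3f(0.0, 0.0, 1.0); glVertex2f(0.0, 0.5)
    glEnd()
    glFlush()

glutInit()
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
glutCreateWindow(b"OpenGL triangle")
glutDisplayFunc(display)
glutMainLoop()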

Inside a CAVE system. Davepape

In recent years, 3D displays have become much smaller and cheaper than the multi-million-dollar CAVE and similar immersive systems of the 1990s, which needed a space 30 feet wide, 30 feet long and 20 feet high to fit their rear-projection equipment. Now a smartphone holder can provide a personal 3D display for less than US$100.

User interfaces have become similarly more powerful. Multitouch pads and touchscreens recognize movements of multiple fingers on a surface, while devices such as the Wii and Kinect recognize movements of arms and legs. A company called Fove has been working to develop a VR headset that tracks users’ eyes and that will, among other capabilities, let people make eye contact with virtual characters.

Planning longer term

My own research is helping to move us toward what might be called “computing at the speed of thought.” Low-cost open-source projects such as OpenBCI allow people to assemble their own neuroheadsets that capture brain activity noninvasively.
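Getting at that data is already straightforward. The sketch below uses BrainFlow, an open-source library that supports OpenBCI boards, to stream a couple of seconds of samples; it targets BrainFlow’s built-in synthetic board, so it runs with no headset attached. With real hardware, you would substitute the appropriate board ID and connection parameters.

# Minimal sketch of reading brain-activity samples with BrainFlow,
# a library that supports OpenBCI boards. The synthetic board
# generates fake data, so this runs without any hardware attached.
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, BrainFlowInputParams())

board.prepare_session()
board.start_stream()
time.sleep(2)                        # collect about two seconds of data
data = board.get_board_data()        # 2D array: rows are channels
board.stop_stream()
board.release_session()

eeg_rows = BoardShim.get_eeg_channels(board_id)
print(f"captured {data.shape[1]} samples on {len(eeg_rows)} EEG channels")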

Ten to 15 years from now, hardware/software systems using those sorts of neuroheadsets could assist me by recognizing the nouns I’ve thought about in the past few minutes. If such a system replayed the topics of my recent thoughts, I could retrace my steps and recall which thought triggered the most recent one.
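No such noun decoder exists yet, so the following sketch is purely illustrative: random numbers stand in for brain-signal features, and an off-the-shelf classifier (scikit-learn’s linear discriminant analysis, my choice for illustration) learns to separate three made-up “noun” classes. Only the shape of the pipeline is meaningful.

# Purely illustrative: the kind of classifier a "which noun was I
# thinking of?" system implies. Random numbers stand in for EEG
# features; only the pipeline shape is meaningful.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)
nouns = ["tree", "dog", "pizza"]

# 300 fake trials, 16 features each, nudged per class so there is
# something for the classifier to find.
y = rng.integers(0, len(nouns), size=300)
X = rng.normal(size=(300, 16)) + 0.4 * y[:, None]

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")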

With more sophistication, perhaps a writer could wear an inexpensive neuroheadset and imagine characters, an environment and their interactions. The computer could deliver the first draft of a short story, either as a text file or even as a video file showing the scenes and dialogue generated in the writer’s mind.

Working toward the future

Once human thought can communicate directly with computers, a new world will open before us. One day, I would like to play games in a virtual world that incorporates social dynamics as in the experimental games “Prom Week” and “Façade” and in the commercial game “Blood & Laurels.”

This type of experience would not be limited to game play. Software platforms such as an enhanced Versu could enable me to write those kinds of games, developing characters in the same virtual environments they’ll inhabit.

Years ago, I envisioned an easily modifiable application that would let me keep stacks of virtual papers hovering around me, ones I could easily grab and rifle through to find a reference I needed for a project. I would love that. I would also really enjoy playing “Quidditch” with other people while we all experience the sensation of flying via head-mounted displays and control our brooms by tilting and twisting our bodies.

An early, single-player virtual reality version of ‘Quidditch.’

Once low-cost motion capture becomes available, I envision new forms of digital storytelling. Imagine a group of friends acting out a story, then mapping their bodies and captured movements onto 3D avatars to reenact the tale in a synthetic world. They could use multiple virtual cameras to “film” the action from several perspectives, and then edit the shots together into a video.

This sort of creativity could lead to much more complex projects, all conceived in creators’ minds and made into virtual experiences. Amateur historians without programming skills may one day be able to construct augmented reality systems that superimpose images from historic photos, or digital models of buildings that no longer exist, onto views of the real world. Eventually they could add avatars with whom users can converse. As technology continues to progress and become easier to use, the dioramas children built of cardboard, modeling clay and twigs 50 years ago could one day become explorable, life-sized virtual spaces.

The opinions expressed in WERD OM TE WEET are those of the author/source and not necessarily those of the Noordelike Helpmekaar Studiefonds.