The architecture industry is about to undergo a radical shift in how it makes things. In the near future, engineers and architects will be able to create buildings and cities in virtual reality (VR). In predicting VR’s dramatic evolution, an analogy to early cinematic history is apt. When the motion-picture camera first came out, actors were filmed on a set with fake trees. Then someone said, “Why don’t you just put the camera in the forest and then shoot?” Simple, but game-changing. VR technology is already available, and it’s only a matter of time before it is used to its full potential.
What’s Here Now: Visualization
At a dedicated virtual-reality station inside the Los Angeles office of John A. Martin & Associates, people strap on eye-tracking headsets and use handheld controllers to navigate through 3D models created by BIM software. Visualizing a design in this way lets users detect structural irregularities they might otherwise miss.
For example, in VR, you can see if a beam is not properly connected to a girder. Sure, this is possible without a VR headset. But being completely immersed in a 3D environment makes you feel as if you’re standing in that actual physical spot, which makes it easier to detect building components that aren’t in the correct location.
VR has made great strides as a visualization tool. It has gained a firm foothold across the architecture, engineering, and construction industries—both for internal design review and for use with clients. Using handheld point-and-click controllers, engineers and designers can move through 3D building renderings as if they’re in a first-person video game. They can float up staircases, teleport down hallways, or peer out of upper-story windows. It is truly amazing.
Design visualizations can also help firms sell ideas to stakeholders. By deploying 3D building models as playable “games” with VR-capable software such as Tekla, Revit Live, 3ds Max, and Enscape, designers can invite clients and owners into immersive showcases of their prospective projects.
What’s Coming: Creation
Still, these examples only scratch the surface of VR’s potential. The next big opportunity for designers and engineers will move beyond visualization to actually creating structures from scratch in VR.
Imagine VR for Revit
What if you could put on an eye-tracking headset and, with the movement of your hands and wrists, grab a footing, scale a model, lay it out, push it or spin it, and change its shape?
That scenario may not be far off. Programs like Google Tilt Brush, which lets you paint in a three-dimensional VR environment, signal what’s coming for creating design projects in VR. Just by rotating your wrist in the painting tool, you can color an object in a VR environment. This kind of physically responsive design functionality is not available in the VR platforms used by most architecture firms, but its existence outside the industry suggests it could migrate.
There are 3D mesh and surface modelers that allow designers to form smoothly curved, organic shapes—car bodies, canopies, and the like—but they are made on a 2D screen using tedious mouse movements and keyboard commands. To manipulate nodes and lines, users pull and drag cursors—a clumsy way of doing things in an age of VR technology.
What if designers could create directly in VR, rather than using external desktop software? They could peer around rear walls and teleport to tight spots, such as joints and moldings. By working at a closer and more maneuverable range to objects, designers could create more organic shapes and achieve a higher level of granular detail. Artists and artisans learned a long time ago to use their hands to sculpt with stone and clay—and while that ability doesn’t directly apply to the realities of designing things like buildings and cars, there’s an opportunity to bring it back in a virtual way.
What Needs to Change: Interactivity
Before VR sees widespread adoption as a creation tool in the architecture industry, the software must make a significant leap forward. As it stands, most game-engine technology allows users only to look around, not to touch objects or edit on the fly. For example, if you’re viewing your model in VR and you want to make a beam correction, you must take the headset off, set it down, find the beam in the authoring software, make the change with a mouse and keyboard, update the model in the game-engine viewer, put the headset back on, and confirm the change took effect. That workflow is long and tedious.
The future of VR needs to move beyond taking the headset off and relying on mouse-and-keyboard clicks to make changes. Architecture and manufacturing design software should take full advantage of VR’s handheld controllers and immersive environment, as well as provide tools within the experience to interact with and make changes to 3D models.
Another stumbling block is the lack of automated interactivity inside VR. Any action a user might take in VR—move a beam, open a window, or turn on a light—must be pre-programmed by an experienced game-engine programmer. A better solution would be to automate this process. For example, a Revit 3D model could be automatically converted into a VR-capable game-engine environment with interactivity already programmed in, so anytime a user wants to move a wall, open a door, or flex any type of component within the VR environment, it’s possible.
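To make the idea concrete, here is a minimal sketch of what such an automated conversion might look like: elements exported from a BIM model get an interaction behavior attached by category, rather than being scripted one by one. The category names, behaviors, and classes are hypothetical illustrations, not a real Revit or game-engine API.

```python
# Hypothetical sketch: auto-attach interaction behaviors to BIM elements
# exported into a game-engine scene. Categories and behavior names are
# illustrative only; no real Revit or game-engine API is used here.

INTERACTION_RULES = {
    "Doors":   "swing_open",    # doors get an open/close behavior
    "Windows": "slide_open",
    "Lights":  "toggle_power",
    "Walls":   "translate",     # walls can be grabbed and moved
}

class InteractiveObject:
    """A scene object with a behavior attached automatically."""
    def __init__(self, element_id, category, behavior):
        self.element_id = element_id
        self.category = category
        self.behavior = behavior

def convert_model(elements):
    """Map each exported BIM element to an interactive scene object.

    Elements whose category has no rule remain static, but nothing
    needs to be hand-scripted by a game-engine programmer.
    """
    scene = []
    for elem in elements:
        behavior = INTERACTION_RULES.get(elem["category"], "static")
        scene.append(InteractiveObject(elem["id"], elem["category"], behavior))
    return scene

# Example: three elements from a hypothetical model export
model = [
    {"id": 101, "category": "Doors"},
    {"id": 102, "category": "Walls"},
    {"id": 103, "category": "Furniture"},  # no rule, so it stays static
]
scene = convert_model(model)
```

The design choice here is that interactivity comes from the element’s metadata, not from per-element scripting—the same principle an automated Revit-to-game-engine pipeline would rely on.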
Information modeling is like a living, breathing thing: a building, door, window, table, or piece of medical equipment all have flexibility in their parameters. In most game-engine-based technologies used today, these elements are static—for now. VR is about to evolve. Are you ready?
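That parametric flexibility can be sketched in a few lines: geometry derives from parameters, so flexing a parameter (say, via a grab-and-drag gesture in VR) updates the model automatically. The class and parameter names below are hypothetical, not a real BIM schema.

```python
# Illustrative sketch of a parametric element: dependent geometry
# recomputes from parameters, so a VR gesture that changes a parameter
# would flex the model. Names are hypothetical, not a real BIM schema.

class ParametricDoor:
    def __init__(self, width=0.9, height=2.1):
        # Dimensions in meters; defaults are typical door sizes.
        self.params = {"width": width, "height": height}

    def flex(self, name, value):
        # A VR grab-and-drag gesture could call this to resize the door.
        self.params[name] = value

    @property
    def opening_area(self):
        # Derived geometry always reflects the current parameters.
        return self.params["width"] * self.params["height"]

door = ParametricDoor()
door.flex("width", 1.2)  # widen the door as if dragged in VR
```

This is the difference between a static game-engine mesh and a living information model: the former bakes the shape in, while the latter keeps the parameters live so the element can flex.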