The things we are most familiar with are not always designed that well. Interface design is about how information is organized, and it defines how humans approach a given device. Laptops, iPads, smartphones, dishwashers, kiosks, voting ballots with hanging chads, and election booths that cast a vote for a candidate other than the one you chose: all are opportunities for UI design (or re-design). I think the challenge in UI design is that humans keep changing. We're changed just by the act of interacting with a device, and those changes then affect what we expect from a UI, which in turn demands that the UI change. At the same time, as users, we want to learn an interface and master it quickly; we don't want that to change.
In Tim Todish’s design guidelines for smartphones, he makes the point that
It’s no longer just about the evolving power and capabilities of these devices. It’s about us and how we, too, are changing. The user’s expectation of a great experience is the new standard. It falls to us as UX professionals to apply our skills to make this happen on the vast array of devices out there.
First impressions, orientation, navigation, and rules of engagement are all critical to the user experience. In my experience, users will only tolerate a bad UI if it’s the only game in town, but at our current rate of technology development, we can expect that it won’t be long before someone else comes along with a better UI to replace the bad one. So, as designers we don’t get a second chance to make a first impression. The audience has to drive the design.
Remember that innovative features and cutting-edge design aren't as valuable to users as we may think. Users are concerned with getting the information they need over a sometimes limited connection, or with getting accustomed to typing on a screen without any tactile feedback. Not everyone has an iPad… yet. Talk to real people, follow common archetypes, and keep the context of your target users in mind.
Adaptive design and responsive design are two big movements in the industry right now. In his article Responsive & Adaptive Web Design, Jared Ponchot describes the difference between the two and summarizes how we got from the sites of the 90s to now. Ponchot says the web itself began as a "responsive" thing, and in 1991, HTML offered a way to share content with "the masses" across a world wide web.
By 1996 we had Cascading Style Sheets furthering this idea of the separation of content from its presentation. By 1998 CSS2 came along and we even had "media types," making the web even MORE "responsive" to varying contexts and uses. Finally, in 2001 Jeffrey Zeldman's To Hell with Bad Browsers article on A List Apart put some real energy behind designing in this way, pushing browser makers to begin making browsers that more fully adopt these standards. By this point you would think that the web would have reached its pinnacle, and all websites would have embodied some glorious syndication of clean and sensibly marked-up content, digestible in an infinite number of ways by a growing number of devices.
But Ponchot also says that designers are control freaks. So:
As the web evolved into something that more and more businesses were using, more and more designers could get paid to work on making websites instead of brochures, annual reports, business cards and the like….Print designers…began diving into this new medium and trying desperately to bend it to their will, manufacturing an ever-more controlled and fixed medium like we were accustomed to designing for. Designers like myself, who began crafting websites without a clear understanding of the medium, created painfully horrible mark-up full of tables and spacer gifs in an attempt to achieve the printed-page-looking layouts we dreamed up. By the time we figured out CSS, the goals were still the same. Designers would brag about their ability to achieve “pixel perfection” via CSS, essentially boasting about their ability to make a fluid and flexible medium exactly match one that is completely fixed. This mindset within the design community may not have changed all that much yet.
As we talked about in class, designing for smaller devices is not about thinking small. Citing Ethan Marcotte's article on responsive web design, Ponchot says responsive design is built on three tools: fluid grids, flexible images, and media queries. A site that responds to different devices by swapping layouts, without fluid grids and flexible images, isn't responsive web design; it's adaptive web design.
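As a rough sketch of how those three ingredients fit together, the stylesheet below pairs percentage-based columns with flexible images and a single media query. The class names and the 480px breakpoint are illustrative choices of mine, not values from Marcotte's article:

```css
/* Fluid grid: column widths are percentages of their container,
   not fixed pixel values (target ÷ context = result). */
.container {
  width: 90%;
  max-width: 960px;
  margin: 0 auto;
}
.main    { float: left; width: 62.5%; }  /* 600px / 960px */
.sidebar { float: left; width: 31.25%; } /* 300px / 960px */

/* Flexible images: never wider than the column that holds them. */
img {
  max-width: 100%;
  height: auto;
}

/* Media query: below the (illustrative) 480px breakpoint,
   stack the columns instead of floating them side by side. */
@media screen and (max-width: 480px) {
  .main, .sidebar {
    float: none;
    width: 100%;
  }
}
```

The point of the sketch is that the layout never asks "which device is this?" It only asks how wide the viewport is, and the fluid measurements do the rest between breakpoints.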
Responsive design is about creating smart web sites that, through a series of queries and tests in the code, figure out the characteristics of the device they're being viewed on and adapt the UI accordingly. If you view the site on a laptop you get one version of the information, but if you view it on a smartphone, the site culls the available information down to what is essential and viewable at that screen size. For inspiration, Social Driver's list of the 20 best responsive web design examples of 2012 is a good resource.
In class, we talked about the fact that apps will eventually be unnecessary. With HTML5 and CSS, there's no reason a web site can't function as an app when it hits your phone. And with 4G promising much faster, near-ubiquitous mobile broadband, everything about the site will be expected to respond faster, too.
We also talked about the fact that there's no need to learn something that's dying, like Flash. Right now, everyone wants to be the app, not just another new app, but with continued evolution in responsive design, apps themselves may be obsolete by the time we finish school. As designers, though, we do need to know how humans look at things and how they interact with a device. For instance, not everyone uses icons, and younger users may not even be familiar with the concept of, say, a physical filing cabinet, so a file-folder icon is meaningless to them. We're designers, but we're also users, so when we're designing interfaces we need to be both at the same time: when we experience a design we don't like, we need to figure out why.
Touch screens are the big thing now, but there are considerations and possibilities we've overlooked. For instance, haptics, the physical feedback a device gives its user, is where touchscreens fall short; haptics offer an opportunity to use sound and vibration as feedback. Design standards and conventions for the UI are just now emerging. We know swipe, scroll, and the two-finger pinch, but not all gestures have been standardized yet, and they change from platform to platform. As designers in school right now, we are coming in on the ground floor of this stage of evolution.