[A version of this post was previously published on Medium]
Once a month, I help bring together a small group of designers, engineers, statisticians, strategists, and anthropologists to explore what the work of design and user research will look like in an AI-first world. Below I dig into some of the themes that came up around design, tinkering, personal bubbles, and interference.
Design is a vague umbrella term, with many specializations.
Architecture, industrial design, interior design, communications design, interface design, service design — and the list goes on. Nobel laureate Herbert Simon described design as the act of “changing existing states to preferred states”, implying that you’re a designer when you arrange the items on your desk. There is a distinction to be made between design thinking (little “d” design) and design as a profession (big “D” Design). Buying a pipe snake from the hardware store and attempting to repair your toilet does not make you a plumber. Similarly, Design is a professional trade that requires specialized training and technical skills, acquired through experience, practice, and coaching.
As a Design Researcher, my role is to step into the shoes of the user and translate insights about that point of view into design opportunities. And I’ve learned that designers have a very different way of seeing the world than I do — one with a bias towards making things tangible and towards clear intent and message. I love sense-making with designers because they help me shift out of the murkiness, start testing assumptions, and make what I’m thinking tangible.
Design responsibly, even for emergent futures.
When the markets flash-crashed in 2010 due to high-speed algorithmic trading, many said, “well, who could have predicted that!” Designers of that system must acknowledge that, yes, that was a possibility. Part of the role (and responsibility) of design is to imagine possible futures, and to prototype and test the outcomes of those futures — to stress-test the limits of those designs. This is not unfamiliar territory: we’ve designed for emergence before, and there are lessons we’ve learned. Take the K-12 factory-model education system, a system that, when first designed, did not see its full results until 13 years later. Through prototyping and testing, we improved results and learned a few things. For example, when you add art and music to the curriculum, math scores improve — a surprising result at the time. As we design products and services with AI, we must leverage expertise and lessons learned from other disciplines. We can’t afford to get lazy.
How can we better understand the basics of AI — in order to gain agency over the conversation?
Right now, the majority of machine learning algorithms are being created by a small group of people, with a larger group trying to understand the field. There is a lot of jargon and literature available. But part of the challenge is the black-box nature of the internal mechanisms, which makes the field hard to talk about without getting technical. At the dawn of the internet, HTML was a language accessible only to an elite few. Today, it’s taught to elementary school children, and the level of literacy in that language among adults is rising. How websites get composed is no longer as mysterious as it used to be. In the case of AI and machine learning, the language is math, which is much more complex than a markup language like HTML. Might having a basic grammar make AI more accessible? Is a basic grammar even possible with machine learning, given that the internal mechanisms are more of a system than a language?
Flip the order and start with technology.
Usually, the process starts with research, then design, then technology. But with AI, starting with technology first means understanding the context in which you are designing. So much of design is about imagining possible futures. Yet if we don’t understand the technology, it’s difficult to imagine what is possible. Right now, because designers can’t play with the algorithms, it’s difficult to develop an intuitive understanding of what machine learning would allow us to build. For example, this was the challenge that Airbnb solved by developing an internal design process for their product team. They were building a smart pricing feature, but the tech was complicated and overwhelming. The new design process helped the design team understand the technology and enabled the entire product team to create better solutions.
ON PERSONAL BUBBLES
Predictive analytics narrows our world view.
The majority of AI web tools and apps are there to make decisions easier for us. As Google and Facebook learn our preferences, they show us content that we’re more likely to click on. Fewer options can be helpful when we’re navigating a flood of information, pointing us in the direction we likely want to go. But it also means our perspectives are not being challenged, and we miss out on stumbling across content or ideas outside our norms. So, therein lies the paradox: we want to retain free choice, and we also want AI to make choices for us.
Perhaps this also ties into people’s personal preferences: some people value diversity (and the creative ideas that come from it), while others value order and routine. It also comes back to the context and tasks the AI is making decisions about. Choices about which groceries to buy are likely benign, whereas what news articles I’m reading can shape how I think, feel, and potentially vote. And a narrowed worldview gets especially dangerous when the consumed content reinforces harmful messages towards certain groups. The combination of highly curated content and its constant repetition convinces us that “what I believe is right”.
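The narrowing dynamic described above can be sketched in a few lines of code. This is a deliberately toy model, not how any real feed works: the topics, the top-k ranking, and the single-minded user are all invented for illustration. A feed that reinforces whatever gets clicked quickly stops surfacing some topics at all.

```python
# Toy click-optimizing feed: show the top-k topics by learned weight,
# then reinforce whatever the user clicks. All names and numbers here
# are illustrative assumptions, not a real recommender.

TOPICS = ["politics", "sports", "art", "science", "travel"]
weights = {t: 1.0 for t in TOPICS}   # the feed's learned topic weights
user_clicks = {"politics"}           # this user reliably clicks one topic

def feed(k=3):
    # Rank topics by learned weight; Python's sort is stable, so ties
    # keep their original list order.
    return sorted(TOPICS, key=lambda t: -weights[t])[:k]

shown_ever = set()
for _ in range(50):
    for topic in feed():
        shown_ever.add(topic)
        if topic in user_clicks:
            weights[topic] += 1.0    # click -> more of the same

never_shown = set(TOPICS) - shown_ever
print(feed())        # the feed has locked onto the same three topics
print(never_shown)   # some topics were never surfaced even once
```

After 50 rounds the feed is frozen: the clicked topic dominates, the runners-up ride along, and two topics are never shown at all — the user has no chance to stumble across them.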
How can we protect space for human creativity?
Feedback from amplifiers is a negative side effect of getting too close to the mic. The natural reaction is to cringe, back away, and try to make the noise stop — unless you’re Jimi Hendrix, and you make rock and roll with the noise of the speaker feedback. Human creativity has the ability to improve our lives and create delight by turning the bug into a feature. But machine learning data sets are so large, the cycles so fast, and the learning so opaque. So then, how do we still allow for the delightful side effects of noise and bugs that enhance our lives?
How can we build scaffolding to eliminate bias?
Machine learning and deep learning algorithms require a lot of data. And because humans are biased, data sets are also biased. For example, racial profiling still occurs at airports today. AI could help eliminate unconscious bias by controlling which elements are used to select who is searched, and making sure those elements have nothing to do with race. Creating automatic tests and remaining vigilant in monitoring the outcomes is another important step. These tests won’t predict everything, but they start to work at unpicking the systemic racism that continues to violate people’s rights today.
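One way to make the “automatic tests” idea above concrete is an outcome-rate check across groups. The sketch below borrows the four-fifths (80%) rule used in US employment-discrimination analysis as its threshold; the groups, records, and threshold are illustrative assumptions, not a real audit of any system.

```python
# A minimal outcome-monitoring sketch: compare how often each group is
# selected (e.g. for a search) and flag it when one group's rate falls
# below 80% of the highest group's rate. Data here is invented.

def selection_rates(records):
    """records: list of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, was_selected in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(records, threshold=0.8):
    rates = selection_rates(records)
    highest = max(rates.values())
    # Every group's selection rate must be at least `threshold`
    # of the highest group's rate.
    return all(rate >= threshold * highest for rate in rates.values())

# Invented example: group B is selected three times as often as group A.
records = ([("A", True)] * 10 + [("A", False)] * 90
           + [("B", True)] * 30 + [("B", False)] * 70)
print(selection_rates(records))    # {'A': 0.1, 'B': 0.3}
print(passes_four_fifths(records)) # False: 0.1 < 0.8 * 0.3
```

A check like this won’t catch everything — proxy variables can encode race even when race itself is excluded — but running it continuously against real outcomes is exactly the kind of vigilant monitoring the paragraph above calls for.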