Toronto is poised to become a hot spot for the fast-growing field of AI. While most of the current practitioner-level conversation is engineering-centric, we’re interested in the intersection of AI and design. How is AI shaping human experiences? What does the work of designers look like in an AI-first world?
To tackle these and other questions, I co-lead a meetup at my work called Design+AI. Design+AI brings together an early community of practitioners and thought leaders to share our state of knowledge, with the goals of learning together, faster; preempting disciplinary isolation; and creating meaningful communication across disciplines. Below is a distillation of the conversation at the inaugural Design+AI meetup.
What is AI?
AI is a loaded term; machine learning and deep learning are more precise. We keep building machines that can do more, and keep moving the bar on our definition of AI, which makes it difficult to pin down. And anyway, intelligence is a construct. Another way to think of AI is as an output of machine learning, with the strength of its “intelligence” falling on a spectrum.
AI challenges our sense of control and self-importance.
We assume that AI is smarter than us, that it’s superhuman, that given time it will become independent from its maker. But AI is an artificial alien: a machine that is purpose-built. This assumption of hostile takeover has more to do with our fear of change and our hubris about our place on the evolutionary ladder than with a true threat.
AI forces us to self-reflect, and that’s both scary and difficult.
Humans are biased, and yet humans are behind AI training data. Because AI is a reflection of its training, AI can shine a light on the uncomfortable truth about the biases in our society (as we saw in the New Zealand and parole examples). People often say “I have nothing to hide” and forgo their privacy. But AI looks for correlation, not causation; and gaps and biases in training data can result in unfair discrimination. For example, your Facebook friends could hamper your ability to get a loan. As designers, it is our responsibility to prototype and test for these gaps, so that we don’t perpetuate or amplify systemic inequalities in society.
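To make the point concrete, here is a minimal sketch with entirely hypothetical data: a “model” that simply learns historical approval rates will faithfully reproduce whatever discrimination was baked into those decisions.

```python
# Toy illustration with made-up data: a model that learns from
# historical loan decisions reproduces the bias inside them.
from collections import defaultdict

# Hypothetical history: (group, approved). Group "B" was historically
# approved far less often, for reasons unrelated to actual risk.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """Learn per-group approval rates from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

model = train(history)
recommend = {g: rate >= 0.5 for g, rate in model.items()}

print(model)      # {'A': 0.8, 'B': 0.3}
print(recommend)  # {'A': True, 'B': False}
```

The model never sees the word “bias,” yet its recommendations encode it — which is exactly why prototyping and testing against these gaps matters.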
It will take time to acclimatize to AI.
We’ve gotten used to all kinds of game-changing technologies: airplanes, cars, walk lights, skyscrapers, iPhones, to name a few. Now, we use these things without thinking twice. We can test AI so that it’s reasonably safe. For example, self-driving cars are likely better drivers than any of us, given lidar sensors, 360° vision, and night vision. Over time, we will get used to AI, too.
Transparency is nice, but sometimes neither practical nor relevant.
We ought to have access to the rationale behind life-altering decisions; that’s the thinking behind the EU’s recent algorithmic decision-making regulations. In reality, however, these laws are shallow. Upon request, AI owners are only required to release a decision flow chart, not a deep analysis of how a decision was made. That’s because, behind the curtain, there are thousands of variables (even millions with deep neural nets), each assigned a weight, that go into an AI output. They are “black boxes”: even if you look at all the math, there is too much data to wade through to understand what is going on inside. And in most cases we don’t have access to what’s behind the curtain anyway. AI is often owned by companies that are not required to share their algorithms and not eager to give up IP. Furthermore, in the case of racial profiling, understanding how AI (or a human) arrived at a racist output is irrelevant — regardless of why, this kind of prejudice is unacceptable. Because we can’t understand all the layers of AI, we need early prototyping and testing more than ever before.
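To give a rough sense of scale (the layer sizes here are illustrative, not from any particular system), even a modest fully connected network accumulates tens of thousands of weights — far more than anyone could inspect by hand:

```python
# Illustrative parameter count for a small fully connected network.
def mlp_param_count(layer_sizes):
    """Weights plus biases for each pair of adjacent layers."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical example: 100 input features, two hidden layers of 256
# units, one output score.
print(mlp_param_count([100, 256, 256, 1]))  # 91905
```

Deep networks used in production are orders of magnitude larger still, which is why “just read the weights” is not a realistic path to transparency.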
First came automation, now augmentation, then…?
We’re far from Terminator or sentient AI. In the meantime, AI can augment our abilities to do our work better. For example, judges could leverage AI to search billions of civil-code entries and parallel judgments to support them in reaching a more objective verdict. Even trainers and testers of AI reap benefits: AlphaGo’s trainer realized that his own playing ability improved during the process of training and testing the AI. But what do augmented experiences look and feel like to the person on the receiving end? How do we design ethical, meaningful, useful, thoughtful interactions, especially as we inch from augmentation to agency?
AI makes ethics murkier.
The Trolley Problem suddenly has real-world implications thanks to self-driving cars. And AI can predict which patients will develop mental health conditions, like schizophrenia, down the road — but is this helpful or harmful information? Designers, in collaboration with multidisciplinary teams, will need to unpack these and other messy questions. In doing so, designers must step back and keep drilling down on intent: What kind of world do we want to live in, and how can machine learning and deep learning help us get there?