3 Reasons Why AI Needs Human-Centred Design

March 28, 2019 | RIC Centre

From a business point of view, AI makes sense for companies wishing to achieve cost savings and efficiencies. AI can automate many routine tasks, from customer service conversations to processing insurance claims, cashing checks and screening resumes. Machine learning applications range from self-driving cars and medical diagnosis to speech-enabled personal assistants. All this valuable human time is then freed for more complicated, meaningful or customer-facing tasks. Meaningful, more interesting tasks contribute to happier employees, less turnover, more creativity, higher productivity and improved work-life balance.

If you are an AI application development company, or are looking to employ AI applications to scale, research finds AI applications work best when:

- human users find them relatable and relevant
- they aren't inappropriate or annoying
- the technology doesn't fail when transitioning from automation to human

These three criteria require human-centred design.

Satisfy human users' needs

Mimicking humans is alienating to human end users. Without relevant context, responses are unrelatable. Either shortcoming will result in poor adoption. Responses and decisions are only as good as the data that human beings gave the programming to use. AI applications' algorithms must reflect realistic conceptions of user needs and human psychology. Paraphrasing the user-centred design pioneer Don Norman, AI needs to "accept human behavior the way it is, not the way we would wish it to be." This requires collaborative design, where organizational goals and human and societal context are built into the algorithms and workflows.

Bias

Language and context are very complex. If human user behaviour and societal biases are not taken into consideration in machine learning, application development and chatbot programming design, automated algorithmic design can reflect and amplify undesirable patterns.
The most publicized examples of undesirable learned patterns in recent years are Microsoft's Tay and Hanson Robotics' Sophia; both quickly displayed aggressive and biased behaviour. HR programming can unconsciously adopt and repeat pattern biases and attitudes unless bias-detection safeguards are in place and consistently monitored.

Groupthink that polarizes and undermines critical thought is another pitfall of bias, and social media is a perfect example of bias perpetuation. Social media engines search for, group and promote like topics and comments, leaving little opportunity for alternative viewpoints or news stories. Social media algorithms designed to capture, display and recommend all viewpoints on a subject would minimize the potential for groupthink and polarization.

Transition

In machine learning applications and predictive algorithm decision trees, an algorithm performs a task previously done only by humans. These tasks cannot always account for changes in environment, nuance or complexity, regardless of how massive the data set. Examples of automation not working range from driverless cars failing to respond to environmental conditions to direction apps that send you to the wrong address, 20 minutes out of your way. Dependence on automation has the potential to erode the human skills needed to overcome these scenarios if we are not careful. Users need to stay vigilant when engaged with automated applications and maintain skill sets that can transition from automated to human use instantaneously. Automated algorithmic decisions should be constantly adjusted for environmental changes and new complexities.

AI and human intelligence collaboration

Together, machine learning AI and human intelligence can provide tremendous benefits to users and companies if human-centred design guides technology outcomes.
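The automation-to-human transition described above can be made explicit in software. The sketch below is a minimal illustration, assuming a hypothetical insurance-claims screening model; the names, rules and confidence threshold are invented for the example and are not from the article. The idea is that low-confidence automated decisions are routed to a human reviewer rather than allowed to fail silently:

```python
# Minimal human-in-the-loop handoff sketch (illustrative only).
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # "approve", "deny", or "needs_human_review"
    confidence: float  # the model's confidence in its own outcome
    handled_by: str    # "automation" or "human"

CONFIDENCE_THRESHOLD = 0.90  # below this, defer to a person

def classify_claim(features: dict) -> tuple:
    """Stand-in for a trained model; returns (outcome, confidence)."""
    # Illustrative rule: small, complete claims are auto-approved.
    if features.get("amount", 0) < 1000 and features.get("complete", False):
        return "approve", 0.97
    return "deny", 0.55  # ambiguous or complex cases get low confidence

def decide(features: dict) -> Decision:
    outcome, confidence = classify_claim(features)
    if confidence < CONFIDENCE_THRESHOLD:
        # Transition point: automation defers instead of failing silently.
        return Decision("needs_human_review", confidence, "human")
    return Decision(outcome, confidence, "automation")

print(decide({"amount": 250, "complete": True}).handled_by)    # automation
print(decide({"amount": 50000, "complete": False}).handled_by) # human
```

The design choice to watch is the explicit deferral path: the end user (or reviewer) always retains an override, which is exactly the kind of safeguard human-centred design calls for.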
Information and workflow assumptions, complementary information and potential biases, business rules, guidelines and override capability for the end user need to be established, communicated, monitored and adjusted on an ongoing basis. Satisfying human needs through human-centred design will ultimately build trust and deliver a rich user experience, for higher adoption and better work-life balance.

Learn about AI bias with RIC Centre

RIC Centre is hosting an executive primer on artificial intelligence and machine learning for business leaders, executives and strategists on April 4, 2019. Learn how AI can be leveraged to grow your business. Register Here.

About the Author

Colleen Cronin, President, InsightONE

Colleen Cronin is a volunteer advisor at the RIC Centre and the principal of InsightONE, a marketing consultancy that delivers business results in 120 days. Her specialties are differentiation, strategic business and marketing planning, and strategic partnerships. Over the past 10 years, the majority of her clients have been in the technology, healthcare, and media and entertainment sectors. Clients who deliver human design in technology include Third Octet and partners Citrix and Microsoft.