Below is Hardy Stevenson’s review of The Future Computed: Artificial Intelligence and its Role in Society by Brad Smith, President and Chief Legal Officer, and Harry Shum, Executive Vice President of Microsoft AI and Research Group.
It’s not often that global technology corporations – on the leading edge of societal and economic change – willingly choose to soul-search on the implications of their actions. It’s refreshing that Microsoft has laid it bare in its recent online publication, The Future Computed: Artificial Intelligence and its Role in Society.
The implications of Artificial Intelligence (AI) will be massive. We are already seeing huge economic and social gains in business and personal productivity and human advancement. But the gains are also associated with job loss, employee displacement and, for those who can’t keep up with new skill requirements, workforce redundancy.
The 145-page work is in part an apology for the workforce disruption being caused by AI. It’s also a plea for studying and fostering solutions to the disruption that the AI era will create (p. 80). The publication defends the ‘adaptation challenges’ that AI will cause as the price of bringing about a better society. Several times, Microsoft cites the effect of previous industrial revolutions as justification for the pain that society endures in the transition to each new era.
What I found particularly important is that Microsoft shines a light on its values – ‘we believe in the democratization of computing’ – and elaborates on six principles it commits to respecting in its AI work:

- fairness;
- reliability and safety;
- privacy and security;
- inclusiveness;
- transparency; and
- accountability.
“These principles are critical to addressing the societal impacts of AI and building trust as the technology becomes more and more a part of the products and services that people use at work and at home every day.” (p. 56).
Is there more?
What I’d like to see is Microsoft deepen its thinking about how humans can build trust in AI. They state:
“Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values” (p. 56).
In my world, principles are important, but principles mainly mark the sides of the road, delimiting what counts as morally acceptable action. Principles guide actions; they are not themselves morally sound actions. What’s missing are answers to: What are those ‘timeless values’ that Microsoft refers to? (p. 136) What is the world that AI should be creating? What is the AI morality that guides wise and right action?
I’m impressed that Microsoft isn’t afraid to wade into the discussion of some of the goals AI should have, such as eliminating disease, solving income inequality, ending hunger and alleviating poverty; however, AI is not yet envisioned as a technology that will be taking a deep plunge into these waters.
Foreword authors Brad Smith and Harry Shum preach that:
“The more we build a detailed understanding of these or similar principles – and the more technology developers and users can share best practices to implement them – the better served the world will be as we begin to contemplate societal rules to govern AI”.
Thinking about ethics relevant to a new technology means rolling up your sleeves. While Microsoft is off to a good start, more in-depth analysis would be welcome.