Human-Centric Organizations in an Era of Audacious Bots

Written by Neeti Mehta in Changing the world with automation on October 31, 2017

Editor’s note: a condensed version of this blog originally appeared here on VentureBeat.

We live in exciting times of unprecedented change and unparalleled human progress. Automation has played a significant role in propelling this era of remarkable advancement. In just the last 24 months, the worlds of RPA, AI, and cognitive machine-learning technologies have collided, and robotics is now in the real-world hands of business leaders, with companies already working with hundreds of bots as part of their digital workforce.

Digital transformation is the new status quo. But even as we focus on cutting costs, delivering error-free transactions, and creating new service models, there are our human workforces – our people – to consider.

That puts business leadership on the front lines of an urgent discussion about the ethical dilemmas surrounding the automation of work that has, in the past, been done by human workers. This new discussion is about how to deal with human dignity and respect, human jobs, and, well, human-ness.

To talk about the future, we must talk about humanity and human value. History will judge us by the difference we make in the lives of humans. This is something we must talk about – the challenges, benefits, but also, or perhaps more importantly, the responsibility leadership has in creating this human-robot world of work – in embracing the audacity of bots.

This is part II in a continuing series (Part I: Embracing the Audacity of Bots), discussing ethical considerations for the new blended human-robot workforce. In this installment: The question of human value(s).


“I THINK THEREFORE I AM”

“Bot Audacity” is the prevailing fear that robots promise to self-learn, outperform humans in certain behaviors, perform cognitive functions, and think – the most human of skills. Descartes said: Cogito ergo sum. Je pense, donc je suis. I think, therefore I am. Is it any wonder the human workforce is worried? Robots now pose what feels like a very real existential threat.

The discussions continue about jobs and reskilling, and about what achieving progress involves. Most likely you’ve discussed how we are all better off with tractors and steam engines and computers – and automation. That, however, is not the path of this discussion.

I would like to start a dialog on corporate leadership in this era of audacious bots – to spark questions and spur an exploration to which every leader must contribute.


LEADERSHIP’S MORAL MANDATE

Ken Goodpaster, Koch Endowed Chair in Business Ethics, pioneered approaches to corporate responsibility. This is not the “do good” outreach defined by companies’ corporate-social-responsibility efforts today. It is closer to what many mission-driven organizations focus on: building a company that operates at all levels with the mission in mind, holding itself accountable for every input and output, including social impact. Goodpaster espoused a model for bringing the externality of moral value into an organization’s economic formula. Put simply, there are three core elements to this leadership approach:

  1. Orient the company toward important moral values

  2. Embed values in processes and practices

  3. Make these values an enduring part of the firm’s identity

To put this in context of audacious bots:

How will a human-bot workforce evolve the value systems, processes, and practices of your organization?

What questions must we ask in determining the “moral values” toward which the digital enterprise will orient? Do we tie robot success to human success? Do we define a “good” robot as one that better enables a human? Or one that serves an external goal?

As leaders, we are tasked with balancing not only what is best for our customers, employees, and shareholders, but also what is best for society as a whole.


THE ETHICS OF THE FUTURE OF WORK

A robotic, digital enterprise is economically desirable. But opening the moral discussion is not just fair game; it is essential as we face a potential pivot point that will determine the trajectory of our collective digital future.

How do we as leaders weigh in on the ethics of this future of work, this amalgamated human-bot workforce? Is the path we have chosen not just economically viable and desirable, but are our decisions along the journey also morally justified?

The foundation of modern economic and legal approaches, as systematized by Jeremy Bentham, is the concept of maximizing utility. Theoretically, we can measure the amount of collective pleasure/pain that would result from a particular action/decision to determine whether the deed was morally right or wrong. This approach has allowed us – especially with controversial decisions – to justify the ethics of making that decision. Simply put: If a decision maximized pleasure and/or minimized pain not only for ourselves but for others as well, then it was morally right.

Again putting this in the context of our audacious bots, take a moment to assess for yourselves whether we and our organizations serve humans better:

  • By enabling…the human workforce with bots that take over repetitive, mundane tasks?

  • By advancing…human progress by making more things possible?

  • By reskilling…and creating jobs that we didn’t know existed?

Consider these questions along two axes:

  1. Can or will bots deliver a better future for companies? Will a human-bot workforce enable us to better serve our corporate agenda?

And a further step to the wider view of humanity as a whole:

  2. Are we enabling our humans to be better off – collectively, in the long term – with these new ways of work? How would you answer?

My answer to both these questions is “Yes.”

But –

Only if we deliver with a consistent focus on humans.


FUTURE ETHICAL SUSTAINABILITY OF HUMAN VALUE(S)

As we embark on this chapter in the journey of progress, I would like to leave with you a concept. I urge you to keep this at the forefront of your strategic decisions.

The future ethical sustainability of human values.

It means two things:

  1. Continuing to value humans and their contributions.

  2. Sustaining the values that define us – that make us an advanced species: Empathy, Kindness, Joy, Respect, Grit…

Are we able to sustain the value we place on humans and their contributions, embodied by the traits that make us human? Think of the tenacious scientist you employed who never gave up on finding a cure, the employee who stepped up to help their team succeed, or the one who worked through the night to ensure a timely delivery to a customer (after all, they are not 24x7 robots).

Are we able to sustain these two imperatives for us and our workforces?


I encourage leaders to start this dialog in their own organizations, their own ecosystems – to find answers and perhaps more questions. Here are some to get you started:

  • Will bots enable us to better serve our customers, employees and society as a whole?

  • How do we ensure humans are better off collectively in the long-term with these new ways of work?

  • Are we continuing to enable our new human teams to succeed?

  • Are we helping reskill our displaced human workforce?

  • Are we encouraging human behavior and organizational processes that are human-centric and centered on human values?

  • Do our organizational goals and KPIs help us focus on our human-ness? What does this mean for the organization? What metrics are in place that value human contribution? What KPIs measure human-ness? What gets measured will define what we improve and focus on.

If every one of us evaluates and orients our organizations toward this future ethical sustainability, I have no doubt we will succeed in this chapter of human progress, and history will witness a betterment of human life.