Over the last 13 weeks, the class for which I started this blog, ENP-162, has covered a whirlwind of topics under the broad umbrella of human-machine automation.
Some of them (like signal detection and GIS data) I’ve talked about here, and some I never even got around to discussing, like chatbots and the future of AI “friends”, or attention theory and why humans are *terrible* at monitoring tasks. (I may get back to those two here; we shall see!)
But for every question of automation, and really every interaction between humans and machines, I think the issue comes down to another topic we covered: how do you decide how automated a system should be?
We could talk about this all day. For decades, even — and people have. In their June 2005 *Reviews of Human Factors and Ergonomics* article[1], Thomas B. Sheridan and Raja Parasuraman quote positions in a debate over automation that was already decades old:
> Automation engineers—at least many of them—have the attitude that if it is technologically, economically, and legally feasible to automate, then do it; it is an exciting challenge (Bainbridge, 1983).

> To the extent a system is made less vulnerable to operator error, it is made more vulnerable to designer error (Parasuraman & Riley, 1997).
But there are also numerous articles published in the last five years offering advice on the same debate. I like this one because it's funny:
> So should you stop reading this post, go buy a bunch of robots, fire all your employees, and automate your entire business? Probably not. (Mayville, HelloSign Blog)
Are we ever going to settle this debate? Probably not, but we don't really have to settle the question For All Time. We just need structure, so that the level of automation becomes one more lever to tweak when engineering a system.
I really like thinking about it in the context of Sheridan and Parasuraman's Levels of Automation:
TABLE 2.1: A Scale of Degrees of Automation

1. The computer offers no assistance; the human must do it all.
2. The computer suggests alternative ways to do the task.
3. The computer selects one way to do the task and
4. executes that suggestion if the human approves, or
5. allows the human a restricted time to veto before automatic execution, or
6. executes the suggestion automatically, then necessarily informs the human, or
7. executes the suggestion automatically, then informs the human only if asked.
8. The computer selects the method, executes the task, and ignores the human.
I've also seen this model credited to Sheridan with ten levels:
1. human is offered no assistance
2. machine offers a complete set of decision/action options
3. machine narrows selection down to a few options
4. machine suggests one alternative
5. machine executes next task if human approves
6. machine allows human a grace period to veto a decision
7. machine executes automatically and necessarily informs human
8. machine informs human only if asked
9. machine only informs human if it decides to
10. machine decides everything and acts autonomously
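To make the "lever" idea concrete, here's a minimal sketch in Python of how the ten-level version might be encoded in a system. The names (`AutomationLevel`, `requires_human_decision`, `must_inform_human`) are my own hypothetical labels, not anything from Sheridan's papers; treat this as one possible encoding, not a standard.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Sheridan's ten levels, from fully manual (1) to fully autonomous (10)."""
    NO_ASSISTANCE = 1          # human does everything
    FULL_MENU = 2              # machine offers a complete set of options
    NARROWED_MENU = 3          # machine narrows selection to a few options
    SINGLE_SUGGESTION = 4      # machine suggests one alternative
    EXECUTE_ON_APPROVAL = 5    # machine acts only if the human approves
    VETO_WINDOW = 6            # machine acts unless the human vetoes in time
    ACT_THEN_INFORM = 7        # machine acts, then must inform the human
    INFORM_IF_ASKED = 8        # machine acts, informs only if asked
    INFORM_AT_DISCRETION = 9   # machine acts, informs if it decides to
    FULLY_AUTONOMOUS = 10      # machine decides and acts entirely on its own

def requires_human_decision(level: AutomationLevel) -> bool:
    """Through level 5, the human still makes or approves the call."""
    return level <= AutomationLevel.EXECUTE_ON_APPROVAL

def must_inform_human(level: AutomationLevel) -> bool:
    """Level 7 is the last level at which reporting back is mandatory."""
    return level <= AutomationLevel.ACT_THEN_INFORM

# A smart thermostat pinned at level 6, say, would adjust the temperature
# on its own but give you a window to veto the change first.
thermostat = AutomationLevel.VETO_WINDOW
print(requires_human_decision(thermostat))  # False
print(must_inform_human(thermostat))        # True
```

The point isn't these particular helper functions; it's that once the level is a number inside the system, it can be tested, logged, and compared like any other engineering parameter.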
Either way, this scale lets us think qualitatively about what we want the user experience to be, and then turn that into the kind of quantitative data engineers love. It would be pretty cool to have the popular press pick this up.
Will it happen? Will the engineer's enthusiasm for numbers and labeling things convince the average consumer that they need to know the automation level of their kitchen appliances? Will we actually make educated decisions about automation and how to use it, rather than just shouting across the chasm between Team Automation and Team Human?
What do you think? Let me know in the comments!
[1]: Sheridan, T. B., & Parasuraman, R. (2005). Human-automation interaction. *Reviews of Human Factors and Ergonomics*, 1(1), 89–129. https://doi.org/10.1518/155723405783703082
This is some good food for thought! I’m on board with Bainbridge and Mayville: just because you can automate doesn’t mean you should. Automation bias is REAL, so it should at least be mitigated. I can also think of many mundane tasks that we technically can automate, but should we? It reminds me of the conversation we had in class about a cooking robot. While it may be of value to some who don’t care about the art of cooking or who have busy schedules, it may detract from the romance and tradition of cooking for others. Maybe there can be an element of personalization in new human-machine systems that arise? Ranging from not automated at all…