
Can Artificial Intelligence (AI) ever be as creepy as your boss? Well, that depends primarily on how creepy your boss is. We know there are limits to AI’s “creepiness,” but it’s harder to say the same about humans.

Many people, perhaps even the majority of workers in the world, don’t have creepy managers. Some may even be quite pleased with their boss and see them as a prosocial, ethical, caring human being. To those individuals, AI may well seem creepier than their boss, as applications of AI to monitoring and managing workers’ behaviors—where they go, who they speak to, what they say on e-mail, and even what their voice and face do on video calls—appear intrusive at best and creepy at worst. After all, this emerging area of workplace analytics and big data is called “surveillance AI” for a reason.

Not that our bosses should ignore what we do on the job, or that our work behaviors should in any way belong to our personal or private lives. Since the beginnings of modern office work, it has been clear that a manager’s role includes keeping an eye on what their employees do (and fail to do) at work. Still, the general sense about surveillance AI is that it feels a bit like a fascist Orwellian dystopia, or a real-life episode of Black Mirror.

That said, people who are unfortunate enough to work for a creepy boss may wish they could replace that manager with AI or computer-generated algorithms. No matter how creepy AI may seem, we should accept that it is unlikely ever to out-creep human bosses at their creepiest.

AI DOESN’T BENEFIT FROM EXPLOITING VULNERABLE PEOPLE
Consider a simple example to illustrate this point. A traditional female taxi driver has to put up with a sleazy manager who keeps asking her out for a drink when she finishes her late shift every night. Even though she has no interest in him, and is happily married with three kids (to whom she desperately needs to return home when work ends), she is forced to tolerate this to keep getting enough work and avoid losing her job.

Now, imagine that same driver switching to a ridesharing business (e.g., Uber or Lyft), where her performance is monitored exclusively by software. It keeps track of the number of trips she completes, the money she makes, and customers’ reviews, comparing that performance to relevant benchmarks. Importantly, at the end of the day, the software doesn’t pressure her to go out for a drink.
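
To make this concrete, here is a minimal sketch in Python of the kind of bookkeeping such performance-monitoring software does: count trips, sum earnings, average reviews, and compare against benchmarks. The field names and benchmark values are entirely hypothetical, not the actual software of any ridesharing platform. Notably, nothing in this logic has any interest in the driver’s personal life.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trip:
    fare: float    # what the rider paid
    rating: int    # rider review, 1-5 stars

def daily_summary(trips: list[Trip], benchmark_trips: int,
                  benchmark_rating: float) -> dict:
    """Summarize one driver's day against fleet-wide benchmarks."""
    ratings = [t.rating for t in trips]
    return {
        "trips_completed": len(trips),
        "earnings": round(sum(t.fare for t in trips), 2),
        "avg_rating": mean(ratings) if ratings else None,
        "trips_vs_benchmark": len(trips) - benchmark_trips,
        "meets_rating_benchmark": bool(ratings) and mean(ratings) >= benchmark_rating,
    }

# A short shift, compared against (hypothetical) fleet averages.
shift = [Trip(fare=14.50, rating=5), Trip(fare=9.75, rating=4)]
print(daily_summary(shift, benchmark_trips=12, benchmark_rating=4.6))
```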

Even the most invasive and intrusive form of AI is unlikely to engage in perverse or perverted managerial behaviors. AI doesn’t benefit from doing things that cross professional and personal boundaries or abuse managerial power to exploit employees. Say what you will about AI, but you are unlikely to be sexually harassed by an algorithm. It would really take enormous advancements in AI to create a degenerate algorithm that enjoys denigrating vulnerable people for its own sadistic pleasure.

Sadly, there are no logical reasons to expect that we will be able to stop humans from doing that anytime soon, even if organizations have made progress controlling this since #MeToo. It is still more likely that humans end up harassing algorithms (and falling in love with chatbots) than the other way around.

When all of our work behaviors are treated as data and recorded for posterity, and we can access every single interaction between managers and employees, it will be much harder for bosses to get away with illicit or corrupt behaviors. Surveillance does have an upside. It has the potential to make workplaces more ethical, even if this is achieved by sterilizing the culture of an organization.

AI IS BETTER AT PREDICTING PERFORMANCE
Only AI can be trained to attend to all the possible indicators or “signals” that accurately predict an employee’s performance. You don’t need to be an expert in AI to understand why. If there are visible patterns between what employees do at work and the contribution they make to a team or organization, then there’s no better way to spot those patterns than with a computer algorithm. Note that this is exactly what human managers are meant to do already, even without AI: namely, to interpret whether each observable employee behavior is likely to be good or bad for the organization, and to reward or sanction it accordingly.
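
As a rough illustration of this pattern-spotting idea, the sketch below fits a simple statistical model that maps observable work behaviors to a performance outcome. The behavioral signals, toy data, and labels are invented for illustration; real workplace analytics would rest on far richer and more carefully validated data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioral signals per employee:
# [tasks completed per week, peer-review score, missed deadlines]
X = np.array([
    [22, 4.5, 0],
    [18, 4.1, 1],
    [ 9, 3.2, 4],
    [25, 4.8, 0],
    [11, 3.0, 3],
    [20, 4.4, 1],
])
# Toy labels: 1 = made a strong contribution last quarter, 0 = did not.
y = np.array([1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# The fitted weights are the "patterns": which behaviors move the
# prediction, and in which direction.
print(dict(zip(["tasks", "reviews", "missed"], model.coef_[0].round(2))))
print(model.predict([[15, 3.8, 2]]))  # prediction for a new employee
```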

When we talk about data-driven feedback, or constructive feedback from managers that allows employees to improve their performance, what we mean is: “Keep doing X,” “Stop doing Y,” “Don’t worry too much about Z,” and so on. Some managers are very good at this, and may even have a natural talent for it. Unfortunately, we cannot clone or replicate them. The majority of managers struggle with it, especially when asked to do it with large, remote, virtual, diverse teams in an increasingly complex world. So even well-meaning managers may not have the competence to see exactly how the vast range of behaviors in a geographically distributed team of unique and heterogeneous people relates to team performance. And even those who do have the competence will probably do better with the help of technology.

Back to the ridesharing example: can you imagine a human manager monitoring even 10 Uber drivers for 10 days? Of course, it is essential that AI is trained to predict the outcomes that actually matter: not whether someone will get promoted because they sucked up to their boss or belong to the demographic “in-group” (male, middle-class, white, etc.), but whether they will add value to the team or organization. AI, in other words, can be trained to attend to the stuff that matters.

AI CAN BE TRAINED TO IGNORE STUFF THAT DOESN’T MATTER
An algorithm is basically a formula, a recipe: given the right input, it will create the right output. If we teach AI to detect signals of competence, expertise, humility, integrity, people skills, and curiosity, while ignoring signals about gender, race, age, religion, class, or unrelated political events, we won’t just predict performance better; we will also make the system fairer, i.e., less creepy.
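
Here is a minimal sketch of that recipe idea: fairness by exclusion, where protected attributes are dropped before the model ever sees them. The column names and data are hypothetical, and this is only one simple approach; in practice, other features can still act as proxies for the excluded attributes, so exclusion alone is no guarantee of fairness.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

records = pd.DataFrame({
    "competence_score": [4.2, 3.1, 4.8, 2.9],
    "curiosity_score":  [3.9, 3.3, 4.5, 3.0],
    "gender":           ["f", "m", "f", "m"],  # protected: to be excluded
    "age":              [34, 51, 28, 45],      # protected: to be excluded
    "high_performer":   [1, 0, 1, 0],
})

PROTECTED = ["gender", "age"]
X = records.drop(columns=PROTECTED + ["high_performer"])
y = records["high_performer"]

# Trained only on job-relevant signals: the model literally cannot
# weight attributes it never received as input.
model = LogisticRegression().fit(X, y)
print(model.predict(X))
```

The same drop-the-columns step is also how such a system would be made to “unlearn” a signal later: remove the feature and retrain, and the model has no way to keep using it.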

Human intelligence is extremely adept at learning, which is why AI may never catch up with us in the breadth of our knowledge and the generalizability of our expertise and skills. However, humans are also terrible at unlearning, especially compared to AI. AI can forget: it can be taught to ignore information, irrespective of its content.

Conversely, no matter how much conscious or unconscious bias training humans undergo, they will never be able to fully suppress information about a person’s gender, age, ethnicity, class, or attractiveness, and refrain from making unfair and biased inferences about that person’s talent and performance based on those attributes. It would take truly incredible advances to create algorithms that are capable of evidence-based, logical decisions but instead prefer to make irrational ones in order to maintain high levels of self-esteem.

Even the most pessimistic and the most optimistic AI experts must surely agree on one thing: we will never have computer-generated algorithms with fragile self-esteem that need to bring other people down to artificially inflate their weak egos and feel better about themselves. If that ever happened, we would have to conclude that AI has managed to replicate human frailty rather than human intelligence.

Even the creepiest AI will never be as creepy as a creepy boss. We don’t need AI to introduce bias and unfairness to the workplace; they are already there. Even if AI doesn’t reduce or eliminate them, it is unlikely to make things worse, if only because the bar is quite low.


Article originally published on fastcompany.com