A business case for AI to tackle incivility in enterprise messages


Introduction

At ishield.ai, we are building AI SaaS products for inclusive representation in online content. This includes inclusive marketing, advertisements, and inclusive conversations. Before we started building Dost (technology for inclusive conversations), we conducted extensive research covering 50+ global companies and publicly available research publications from world-leading universities and companies. In the article below, we present the key findings that supported the business case for building Dost.

Inclusion & belonging drives growth

Studies of employee engagement consistently find that “belonging” is the #1 driver of employee satisfaction (Qualtrics, Glint). If diversity is the desired outcome, companies need to focus on actual behaviours of inclusion and belonging (Josh Bersin).

Research after research from AIMM, Google, Microsoft, and Facebook has established that inclusive representation in marketing, advertising, and collaboration increases the trust and purchase intent of a brand. Studies from McKinsey, Deloitte, and Catalyst show that diverse boards outperform, diverse teams outperform, and companies with strong DEI brands are more profitable and lead their markets.

The degree of inclusion in a company is reflected in the conversations employees have with each other, in customer-care conversations, and in marketing campaigns and messages.

However, growing incivility and microaggressions in workplace communication, caused by unconscious bias and further accentuated by remote working, create barriers to inclusion.

Growing incivility creates barriers to inclusion

According to a recent McKinsey report, incivility had already doubled in the two decades before COVID-19, and some call it endemic. Now, some 95% of workers say they consistently experience incivility at work, but only 9% report it to management, according to workplace research by management professors Christine Pearson and Christine Porath, authors of the aptly titled book, The Cost of Bad Behavior: How Incivility Is Damaging Your Business and What to Do About It. Among workers who’ve been on the receiving end of incivility:

  • 80% lost work time worrying about the incident.

  • 66% said that their performance declined.

  • 78% said that their commitment to the organization declined.

  • 25% admitted to taking their frustration out on customers.

  • Creativity suffers.

  • Performance and team spirit deteriorate.

  • Customers turn away.

Finally, managing incidents in the aftermath of incivility is very expensive.

Tackling incivility needs to be proactive

According to a study conducted by Accountemps and reported in Fortune, managers and executives at Fortune 1,000 firms spend 13% of their work time—the equivalent of seven weeks a year—mending employee relationships and otherwise dealing with the aftermath of incivility. And costs soar, of course, when consultants or attorneys must be brought in to help settle a situation. Hence, there is a need to proactively put solutions in place to tackle incivility.

Some steps that any organization can take to tackle incivility are elaborated in the HBR article here. In this article, we focus on how technology can help identify incivility and nudge people to correct their behaviour with the utmost discretion.

People are concerned about any form of review of the content they post. However, in an enterprise context, the following applies to all content that is posted:

  1. Users should not post messages that can exclude, isolate or insult others.
  2. After publishing, the content a user posts is owned by the company.
  3. The content a user posts is governed by their company policies.
  4. The content that a user posts on these platforms is already public (at least within the company, and it sometimes makes it to the media).
  5. Some companies have groups to monitor and address problematic content on these platforms.
  6. There is probably a grievance redressal mechanism in place to tackle workplace toxicity. 
  7. If required, a company can audit all the messages a user has posted.

Having said that, any solution needs to win the trust of the people using it. While delivering on the organization’s inclusion goals, the solution should also be embraced by those people. Below, we explore some basic tenets a solution can follow to achieve this.

Core tenets for a technology like Dost to build trust

For people to embrace any technology solution that reviews their messages and promotes inclusion, the key tenets are:

Assume positive intent: Most microaggressions in communication happen due to unconscious bias. Only in the rarest of situations do people consciously display aggressive behaviour against their peers or customers on public platforms. Hence, start with the assumption that people want to be good and do good. Indeed, we observed that once made conscious of the potential harm in their content, users invariably take corrective action. This tenet also has implications for the ones that follow.

Be consistent: An always-on solution that identifies issues against a consistent set of criteria and flags content for what it contains. Hence, a technology-based review as opposed to a human review.

Obsess about people’s privacy: Only the sender should be made aware of potential harm in their content; the flag and the nudge to correct the message should be visible to the sender and no one else.

Protect data: Do not store the messages, do not share the messages with anyone in the company (not even the admin), and do not build any mechanism to trace either the user or their message.

Build a feedback loop: No technology solution is perfect, so build a mechanism for users to voluntarily report false positives and false negatives, allowing the detection and flagging technology to improve.

Proactively audit the solution: Technologies that tackle incivility in workplace communication need to proactively invest in having their claims against the core tenets audited by an independent third party. The technology provider should also be open to sharing the audit findings with organizations willing to use the technology.

User experience considerations

Helping people correct their messages for inclusion as the conversation happens has a far greater impact than other forms of offline intervention. Our research findings on user experience include:

1. Don’t create a barrier to publishing a message: We had to make a very difficult choice about when to review a message, and accept the trade-offs. The options and their criteria:

 

  • Before the message is posted. Pros: avoids embarrassing errors. Cons: induces latency in communication, especially considering that >99% of messages will have no issues.

  • After the message is posted. Pros: does not create any latency barrier in communication, and still gives users an opportunity to correct. Cons: sometimes the damage is already done.

When we posed this choice to our user research group, the overwhelming majority (>74%) chose to “review after the message is posted”. This choice also aligns fully with the core tenet of assuming positive intent. On platforms like Slack, it is also a platform-induced constraint.

2. Let the sender decide the action to be taken: Another very difficult choice we had to make was deciding what action a tool like Dost should take when it detects incivility. We asked people two questions:

  1. Should the tool only flag and nudge the user? OR
  2. Should the tool take any action, such as edit/redact/delete messages?

Again, the overwhelming majority (>90%) agreed that the tool should flag the messages to the user, and the user should decide the course of action.

3. Be there where the conversations happen: For example, integrated into Slack, MS Teams, Zendesk, Salesforce Service Cloud, chatbots, CRM software, etc. The user should not have to navigate anywhere else; the flag and nudge should happen in the conversation.

Measure Impact

The success of adopting a technology like Dost to tackle incivility can be measured in two ways:

 

  1. Trends of reduction in incivility: At an aggregated level, the volume and percentage of messages flagged as uncivil, and their trend over time.

  2. Data-driven intelligence for targeted training: Measure which types of incivility occur most within the organization, so that specific training and communication can be tailored to address them. Example: we observed that “gender microaggression” is twice as frequent as “racial microaggression”. Such data can be used to focus on the issues that actually occur.
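Both measurements can be produced without violating the privacy tenets above. A minimal sketch, under the assumption that the tool emits only anonymous (week, category) events at flag time, never message text or user identity:

```python
# Sketch of privacy-preserving impact metrics: trends of the flag
# rate over time, plus a per-category breakdown to target training.
from collections import Counter

def weekly_flag_rate(flag_events: list[tuple[str, str]], totals: dict[str, int]) -> dict[str, float]:
    """Percentage of messages flagged per week, given anonymous
    (week, category) flag events and total message counts per week."""
    flags_per_week = Counter(week for week, _ in flag_events)
    return {week: 100.0 * flags_per_week[week] / totals[week] for week in totals}

def category_breakdown(flag_events: list[tuple[str, str]]) -> dict[str, int]:
    """How often each incivility category occurs, in aggregate."""
    return dict(Counter(cat for _, cat in flag_events))
```

For example, with events `[("W01", "gender microaggression"), ("W01", "racial microaggression"), ("W02", "gender microaggression")]` and weekly totals `{"W01": 200, "W02": 100}`, the flag rate is 1.0% in both weeks, and the breakdown shows gender microaggressions occurring twice as often as racial ones.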

About Dost

Dost means “a friend”. Dost is an AI-driven bot that detects incivility in enterprise messaging and flags such messages so that the user can take corrective action. Dost is available on the Slack App Store and will soon be available for MS Teams. The Dost API can also be used to integrate with any messaging, CRM, or customer service software.
