The challenge is putting people first in any and all AI projects. AI practitioners provide recommendations for building a people-centric, yet AI-driven culture.

As the people charged with designing, building, and deploying artificial intelligence — from data engineers to developers — recognize, AI is a powerful mechanism for amplifying human knowledge, skills, and efficiency. But how can AI proponents employ AI to fix a moribund or toxic corporate culture? That’s probably the most vexing challenge with AI rollouts.

Entrepreneurs and experts at the front lines of the AI revolution recognize this is a hurdle technology alone can’t solve. “AI cannot solve issues where there are already underlying problems, like a company’s culture or lack of trust from a customer base,” says Stephan Baldwin, founder of Assisted Living Center. “These are fostered by principles that shape the everyday inner and outer workings of a company.”

One of the challenges, Baldwin points out, is that “artificial intelligence models act based on historical data, meaning they’re prone to biases that we humans had when gathering information. Sometimes, an automated process doesn’t take into account the people it governs.”

The challenge, then, is to put people first in any and all AI projects. AI practitioners make the following recommendations for building a people-centric, yet AI-driven culture:

Extend ownership and responsibility for AI beyond the IT department. AI needs to be an enterprise-wide initiative, with all parties involved. “Successful and productive deployment of AI is a cross-functional effort far beyond just data science,” says Dr. Michael Wu, chief AI strategist at PROS. “Extended teams need to range from the technical side, involving IT and cloud operations for security and data governance, to the business side, involving change management and training for education, adoption, and best practices.”

Recognize that AI is simply code. It is not some mysterious dark art capable of outsmarting humans. “AI is no longer magic, and enterprises now seem to understand this,” says Beerud Sheth, co-founder and CEO of Gupshup. “AI is not trying to replace humans but enable a more human-like conversation that has the power of automation and intelligence a machine could have.”

Target AI to areas where it is most impactful. The best parts of the enterprise in which to promote and launch AI vary greatly across industries, Wu points out. “But the common theme is that organizations must have a reliable source of clean and rich data as a by-product of normal business operations,” he says. “For example, companies with large support centers often keep a good operational record of incidents and resolutions. Transaction data in sales organizations tends to be fairly clean, as it’s required for good accounting practices. This data will continue to fuel their AI/ML as it learns. On the other hand, although marketing organizations also have a lot of data, it is often noisier and often requires cleaning before it can be used in production AI and ML.”

Sheth sees the most activity within customer support, product discovery, and employee-facing departments in customer organizations. “Considerable progress on language parsing and machine learning has enabled fast turnaround times for support queries,” he says. “AI-based prediction and context management allow accurate discovery mechanisms to be exposed through simpler interfaces like chat. Machine learning-based cognition engines make query resolution and policy-related support issue resolution accurate and easy to deploy on secure channels like MS Teams and progressive web apps.”

Investigate and push for the most impactful technologies. “Pricing optimization, predictive maintenance, and conversational AI technologies are most impactful because the data required to train and continue to fuel them as they learn tend to be plentiful,” says Wu. “Their deployment also doesn’t require a major change in business operations. Also, since there are many vendors offering these solutions, the total cost of ownership is relatively low compared to the revenue impact these technologies are able to drive.” Sheth sees the most potential in multilingual NLP, machine learning, and predictive AI.

Ensure fairness in AI through greater transparency. To gain acceptance and support for AI across the enterprise, the results delivered need to be as fair and as free of bias as possible. “Transparency and fairness are essential to the success of an AI because they generate trust by informing both employees and customers about how they are being governed,” says Baldwin. “There are many examples of AI not functioning correctly, and as a company, the last thing you want is not being able to explain why a mistake happened.” Still, more needs to be done along these lines, says Wu. “Many industries starting to leverage AI are more focused on getting their AI to work and achieving positive ROI first with the limited data they have. For these industries, fairness is not an immediate priority, even though it’s routinely part of the corporate narratives. Although everyone talks about prioritizing AI ethics and fairness, not everyone takes subsequent action to combat bias.”

Encourage awareness and training for fair and actionable AI among IT managers and staff. IT leaders and staff should also receive more training and awareness to alleviate AI bias, Sheth urges. “AI is only as good as the data we provide to it. Since humans are responsible for the training data, there is a good chance that our AI algorithms can be corrupted with human bias or reflect other unfavorable patterns detected over time. We can build models that help make better, fairer decisions, but alongside this, business leaders should be aware of such challenges and make the right decisions to help eliminate bias in the data.”
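Sheth’s point that “AI is only as good as the data we provide to it” can be made concrete with a simple data audit. The sketch below, using only the Python standard library, checks a labeled training set for skewed outcome rates across a sensitive attribute; the loan-approval records, the `region` attribute, and the 20% audit threshold are all hypothetical, not drawn from any source in this article.

```python
from collections import Counter

def positive_rate_by_group(records, group_key, label_key):
    """Share of positive labels within each group of a sensitive attribute."""
    totals, positives = Counter(), Counter()
    for r in records:
        g = r[group_key]
        totals[g] += 1
        if r[label_key]:
            positives[g] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical loan decisions that would become training data.
history = [
    {"region": "north", "approved": True},
    {"region": "north", "approved": True},
    {"region": "north", "approved": False},
    {"region": "south", "approved": True},
    {"region": "south", "approved": False},
    {"region": "south", "approved": False},
]

rates = positive_rate_by_group(history, "region", "approved")
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # arbitrary audit threshold for this illustration
    print(f"Warning: approval-rate gap of {gap:.0%} across regions")
```

A model trained on such skewed history would learn the skew; surfacing the gap before training is one small, practical way for leaders to “take the right decisions” about the data.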

Encourage awareness and training for fair and actionable AI at all levels of the organization. AI may be an enterprise endeavor, but IT leaders can lead the way in ensuring that AI delivers as it should. “Training and education for IT leaders and staff is a good start, but often not sufficient,” says Wu. “Alleviating AI bias should be everyone’s job, just like data security, as it’s akin to a company’s business ethics.”

At the same time, he adds, “employees often need to have some incentive to motivate them to exhibit new professional behaviors before they become second nature. These incentives don’t always have to be monetary. For example, enterprise gamification can be employed to drive awareness and interest in AI bias mitigation. It can be leveraged within an enterprise to gamify awareness of the AI bias issue, drive positive behaviors that help identify these biases, and even crowdsource potential solutions.”

Regular review of AI results is also mandatory for success, says Sheth. “In fact, this has been one of the hard-learned lessons for AI companies: always have humans in the loop.” He recommends “regular reviews of randomly selected AI results, making sure all strata are adequately represented in random sampling. End users may not always have the time or inclination to give feedback on suboptimal AI results. Actively and regularly evaluate the performance of your models. The feedback from reviewers is automatically fed back to the next round of model training. This practice keeps models from getting stale and irrelevant.”
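Sheth’s recommendation of “regular reviews of randomly selected AI results, making sure all strata are adequately represented” is itself a small algorithm: stratified sampling. One minimal way to sketch it in Python is shown below; the chatbot intents and per-stratum quota are hypothetical placeholders, not details from the article.

```python
import random
from collections import defaultdict

def stratified_review_sample(results, stratum_key, per_stratum, seed=None):
    """Pick up to a fixed number of results from each stratum for human review."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for r in results:
        by_stratum[r[stratum_key]].append(r)
    sample = []
    for items in by_stratum.values():
        k = min(per_stratum, len(items))
        sample.extend(rng.sample(items, k))
    return sample

# Hypothetical chatbot responses tagged by intent; "sales" is a rare stratum
# that plain random sampling might miss entirely.
results = [{"id": i, "intent": intent}
           for i, intent in enumerate(["billing"] * 50 + ["support"] * 30 + ["sales"] * 5)]

review_queue = stratified_review_sample(results, "intent", per_stratum=3, seed=42)
```

Sampling per stratum, rather than uniformly over all results, is what guarantees the rare categories end up in front of human reviewers each cycle.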

This feature originally appeared in ZDNet.
