Profit Motive: Companies Now Using Artificial Intelligence To Increase Sales

Written By BlabberBuzz | Saturday, 09 April 2022 16:45

LinkedIn, a unit of Microsoft Corp., boosted subscription revenue by 8% after arming its sales force with artificial intelligence software that not only predicts which clients are at risk of canceling, but also explains how it arrived at that conclusion.

The approach, first announced in July and outlined in a LinkedIn blog post on Wednesday, is a significant step forward in getting AI to "show its work" in a useful way.

While AI scientists have no trouble building systems that make accurate predictions about all sorts of business outcomes, they are discovering that for those tools to be more useful to human operators, the AI may need to explain itself through another algorithm.

The emerging field of "Explainable AI," or XAI, has sparked significant investment in Silicon Valley as startups and cloud giants compete to make obfuscated software more understandable, as well as debate in Washington and Brussels, where regulators want to ensure automated decision-making is done fairly and transparently.


AI technology has the potential to reinforce societal biases such as those based on race, gender, and culture. Some AI researchers believe that explanations are an important aspect of preventing those undesirable results.

Over the last two years, consumer protection agencies in the United States, particularly the Federal Trade Commission, have cautioned that AI that is not explainable may be probed. The Artificial Intelligence Act, which includes the requirement that users be able to interpret computerized predictions, might be passed by the EU next year.


Proponents of explainable AI say it has improved the effectiveness of AI applications in fields such as healthcare and sales. Google Cloud, for example, sells explainable-AI services that tell clients trying to improve their systems which pixels and which training examples mattered most in predicting the subject of a photo.


However, some argue that explanations of why AI predicted what it did are too unreliable because the technology used to interpret the models isn't good enough.

Each stage in the process - assessing predictions, generating explanations, testing their accuracy, and making them actionable for users - has room for improvement, according to LinkedIn and others building explainable AI.


However, LinkedIn says its technology has generated real value after two years of trial and error in a relatively low-stakes application, pointing to the 8 percent boost in renewal bookings beyond typically expected growth during the current fiscal year. LinkedIn did not give a dollar figure for the benefit but described it as significant.


Previously, LinkedIn salespeople depended on their own judgment and sporadic automated signals concerning clients' service usage.

Now, the AI handles the research and analysis. LinkedIn's CrystalCandle surfaces previously overlooked trends, and its reasoning helps salespeople hone their tactics to keep at-risk clients on board and pitch upgrades to others.

LinkedIn claims that explanation-based suggestions are now available to over 5,000 of its salespeople across recruiting, advertising, marketing, and education.

"It has helped experienced salespeople by arming them with specific insights to navigate conversations with prospects. It has also helped new salespeople get started right away," said Parvez Ahammad, LinkedIn's director of machine learning and head of data science applied research.

LinkedIn first rolled out predictions without explanations in 2020. A score that is about 80% accurate indicates the likelihood that a client up for renewal will upgrade, hold steady, or cancel. Salespeople were not fully won over: the team selling LinkedIn's Talent Solutions recruiting and hiring software was unsure how to adapt its strategy, especially when a client's odds of not renewing were no better than a coin toss. In July, they began receiving a short, auto-generated paragraph highlighting the factors influencing the score.

For example, the AI determined that one client was likely to upgrade because it had grown by 240 employees over the past year and candidates had become 146 percent more responsive over the past month. In addition, an index gauging the client's overall success with LinkedIn's recruiting tools had risen 25% over the last three months. Based on the explanations, sales representatives now direct clients to training, support, and services that improve their experience and keep them spending, said Lekha Doshi, LinkedIn's vice president of global operations.
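An explanation like the one above can be produced by templating a model's top-weighted feature attributions into a readable sentence. The sketch below shows the general idea only; the feature names, templates, and weights are hypothetical illustrations, not LinkedIn's actual CrystalCandle code.

```python
# Hypothetical sketch: render a model's top feature contributions as a
# short narrative explanation, in the spirit of the example above.

TEMPLATES = {
    "headcount_growth": "the company added {value} employees in the past year",
    "candidate_response": "candidates were {value}% more responsive in the past month",
    "success_index": "its recruiting-success index rose {value}% over the last three months",
}

def explain(prediction: str, contributions: dict) -> str:
    """Join the top feature contributions into one readable sentence,
    ordered by descending attribution weight."""
    top = sorted(contributions.items(), key=lambda kv: -kv[1]["weight"])
    reasons = [TEMPLATES[name].format(value=c["value"]) for name, c in top]
    return f"This client is likely to {prediction} because " + "; ".join(reasons) + "."

print(explain("upgrade", {
    "headcount_growth": {"value": 240, "weight": 0.45},
    "candidate_response": {"value": 146, "weight": 0.35},
    "success_index": {"value": 25, "weight": 0.20},
}))
```

In a real system the weights would come from an attribution method run against the underlying model; the templating step itself stays this simple.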

Some AI scientists, however, question whether explanations are necessary. They could even do harm by instilling a false sense of confidence in AI or forcing design compromises that make predictions less accurate, researchers say. Fei-Fei Li, co-director of Stanford University's Institute for Human-Centered Artificial Intelligence, notes that people use products such as Tylenol and Google Maps whose inner workings are not well understood; in such cases, rigorous testing and monitoring have dispelled most doubts about their effectiveness. Similarly, AI systems as a whole could be judged fair even if individual decisions are inscrutable, said Daniel Roy, an associate professor of statistics at the University of Toronto.

LinkedIn counters that an algorithm's integrity cannot be assessed without understanding how it works. It also maintains that tools like its CrystalCandle could help AI users in other fields: doctors could learn why AI predicts that someone is more at risk of a disease, and people could be told why AI recommended they be denied credit. The hope, said Been Kim, an AI researcher at Google, is that explanations reveal whether a system aligns with the ideals and values one wants to promote.

"I view interpretability as ultimately enabling a conversation between machines and humans," she said.
