Artificial intelligence is all the rage, but what might it mean for the experience industry? The first step is understanding what AI is — and isn’t. Here’s what you need to know about AI, machine learning, natural language processing, and much more.
It seems like you can’t watch or read anything nowadays without artificial intelligence or “AI” being mentioned about a million times, conservatively speaking. Sometimes, a wayward “machine learning” or “ML” might pop up in the same media. But what does any of this mean? What is generative AI? What even is artificial intelligence? Supervised? Unsupervised? Semi-supervised? NLP? The vocabulary becomes unwieldy pretty quickly.
We keep hearing how much AI will revolutionize work, healthcare, finance, literature, art, and everything else, including customer experience and employee experience, as if AI were some sort of otherworldly, omniscient technology. Some even proudly declare that AI will replace people in these roles. In reality, AI is a valuable tool that is already enhancing people’s productivity, but it is neither a panacea nor a replacement for humans. It has real limitations, even if the popular portrayal of AI rarely acknowledges them.
I’m writing this blog for you, the non-technical person, to help you understand a little of what’s going on with AI and how it could impact customer experience (CX) and employee experience (EX). No math degree required. I already did that one for you. So, let’s start at the beginning.
AI, Machine Learning, NLP, and Generative Methods – Oh My!
What is AI?
One big thing to note about AI is that it is more of a concept than a fully realized technology. Essentially, AI is the ability of machines or computer programs to think, act, and learn like a person, which includes our ability to adjust how we do things based on new information, create truly novel work, and handle ambiguity. That kind of AI does not exist. The human brain is an incredible thing, and technology that can genuinely do what we do remains a long way off.
What is Machine Learning?
When people use the term “AI,” they’re almost certainly talking about a broad set of machine learning (ML) methods that enable computers to “learn” from patterns surfaced in big data sets. The two terms are used interchangeably and are often confused for one another, even in places where the distinction matters. To say AI doesn’t exist would be as correct as saying that it does. Confusing, right? To simplify things, Microsoft defines machine learning as “the process of using mathematical models of data to help a computer learn without direct instruction [which] enables a computer system to continue learning and improving on its own, based on experience.”
In essence, ML applies complex mathematics and statistics to big data sets to find patterns, mimicking human learning. ML methods are typically split into three classifications: supervised, semi-supervised, and unsupervised. Many ML-powered solutions use a mix of these methods, tuned to the specific tasks they’re given, and even textbook examples of one classification may be handled with a different method depending on the tool you’re using. Quickly and simply defined, they are:
Supervised ML uses labeled datasets – i.e., humans provide labels in the dataset saying how each item should be classified – to train algorithms that predict outcomes (a short sketch follows this list). Examples: email spam filters, text classification, speech recognition
Semi-supervised ML uses a small amount of labeled data and a large amount of unlabeled data to predict outcomes. Examples: web content classification, image or speech classification, identifying anomalies in banking records
Unsupervised ML uses a large amount of unlabeled data, with no human-provided labels, to surface patterns and predict outcomes. Examples: Medallia’s Theme Explorer, recommendation engines, customer segmentation
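To make the supervised case concrete, here is a minimal sketch of a toy spam filter, assuming the open-source scikit-learn library and a handful of made-up, hand-labeled messages; real spam filters are trained on vastly more data.

```python
# A toy supervised-learning example: a spam filter trained on a few
# hand-labeled messages (illustrative data, assuming scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now", "Claim your free reward today",
    "Meeting moved to 3pm", "Please review the attached report",
]
labels = ["spam", "spam", "not spam", "not spam"]  # human-provided labels

# Learn word patterns from the labeled examples, then classify new text.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)
print(model.predict(["Free prize waiting for you"]))  # likely ['spam']
```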
Each of these approaches identifies patterns, mimicking human learning in its own way, with the hope of producing valuable insights for the kind of task and data you’re dealing with. For example, imagine that you’re trying to recommend products to a certain customer. It would make no sense to spend time labeling data for each customer, so unsupervised machine learning is probably the best fit.
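And here is the unsupervised counterpart: a minimal sketch of customer segmentation by clustering, again assuming scikit-learn and a tiny, invented spend-and-visits dataset. No labels are supplied; the algorithm finds the groups on its own.

```python
# A toy unsupervised-learning example: grouping customers into segments
# without any labels (illustrative data, assuming scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer features: [annual spend, visits per month]
customers = np.array([
    [120.0, 1], [135.0, 2],
    [900.0, 8], [950.0, 9],
    [400.0, 4], [420.0, 5],
])

# No labels are provided; KMeans discovers three groups by itself.
model = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = model.fit_predict(customers)
print(segments)  # e.g., [0 0 1 1 2 2] — cluster numbering is arbitrary
```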
The “mimicking” part is very important. ML – and therefore AI – requires a lot of input material in which to find patterns, and those patterns are then used either to provide insights into a dataset or to generate content, often with impressively accurate results. The former is particularly valuable in text and speech analytics, which run on text and audio data.
What is Natural Language Processing?
At Medallia, when you use text analytics, speech analytics, or speech-to-text transcription, you are invariably using AI/ML. We do this using Natural Language Processing (NLP), sometimes referred to in our industry as Natural Language Understanding (NLU). NLP/NLU is a subset of machine learning: a set of machine learning methods applied to natural, human language. Siri, Alexa, Google Home, and likely even your car use NLP methods to transcribe what you say into text. Large language models (LLMs) like OpenAI’s ChatGPT/GPT-3, BERT, Meta’s LLaMA, and Baidu’s Ernie are all NLP models because they learn from natural language in order to generate human-like text.
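To show what an NLP method looks like in practice, here is a minimal sketch of sentiment classification using the open-source Hugging Face transformers library and its default sentiment model; this is an illustrative stand-in, not Medallia’s own NLP stack.

```python
# A minimal NLP sketch: classify the sentiment of a piece of feedback
# using a pre-trained model (illustrative only, not Medallia's stack).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model
result = classifier("The agent resolved my issue quickly and politely.")
print(result)  # e.g., [{'label': 'POSITIVE', 'score': 0.99...}]
```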
What are Generative AI and Generative Models?
Using ML methods to generate “new” or missing information is called generative AI, or generative models. No generated content is truly or completely novel, and generative AI/ML models are not limited to text. Generative AI, as defined by IBM, “refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” Because ML mimics human learning, a generative model is limited by whatever dataset it was trained on when producing or generating its output. These outputs can be pictures, as with DALL-E, audio or text-to-speech, as with Meta Voicebox, as well as analytics dashboards, text, and much more.
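As a small illustration of generative text, here is a minimal sketch using the Hugging Face transformers library and the small, open GPT-2 model; the model choice and prompt are illustrative assumptions, not a recommendation.

```python
# A minimal generative-text sketch using the small open GPT-2 model
# (illustrative choice of model and prompt).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Thank you for contacting support. To summarize,",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
# The continuation is assembled from patterns in GPT-2's training data,
# which is exactly why it reads fluently but is not truly novel.
```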
Should the experience industry be cautious about Generative AI?
The reason for the hype surrounding AI, especially given that generative models have existed for years, is that these tools are finally in your hands, the everyday user, and are genuinely valuable to your daily life.
However, all users of generative AI should also consider the risks of using these models, some of which are inherent in data analysis and AI in general. Specifically, it is vital that you know which data any model has been trained on in order to mitigate bias and other risks. As with all AI/ML models, biased, inaccurate, or malicious training data will result in biased, inaccurate, or malicious outputs. Incomplete data, or data that over-represents certain groups (whether unintentionally or, in malicious cases, deliberately), can lead to generated outputs that reflect those skews. Invisible Women by Caroline Criado Perez and Weapons of Math Destruction by Cathy O’Neil are just two of the many books on bias in data analysis that demonstrate why this can be a major problem for AI of all kinds. Procuring high-quality, well-managed, and well-maintained training data can help minimize some of these problems.
Additionally, generative AI can run into issues with intellectual property and plagiarism if it has been trained on that type of content. AI models do not know whether the images or text they are trained on are copyrighted or require attribution. As a result, these models can end up generating “new” content that lacks the proper open-source attributions or that draws on copyrighted material that was never removed from the training data. As of 2023, multiple lawsuits are pending against generative AI companies over exactly these kinds of concerns. It is therefore vital for every business to understand the data its generative AI is trained on.
How can AI help customer and employee experience?
When people talk about why AI is useful, it’s usually in the context of automation. While automation is a big part of why AI is useful, it is not the automation by itself that is valuable. Automation on its own can be risky when situations change or when the AI makes a mistake, especially since no AI or human being will ever be 100% accurate or successful in their decision-making.
That being said, a lot of businesses with customer experience, contact center, and employee experience teams are being asked to do more with less time and money. AI-powered automation can help your teams work through huge volumes of feedback data, substantially reducing time to insight and increasing employee productivity, all while improving customer experiences. Beyond using NLP to surface insights from your data, the future of AI in customer and employee experience is bright.
For example, automated call summaries could eliminate post-call notes in the contact center. Summarizing customer, employee, and business records could give users a quicker understanding of the collective experiences an individual has had with your business, so they can make decisions on that data. AI could then inform the next best action, personalizing exactly the steps you take to improve an individual’s experience and satisfaction with your business.
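As a rough illustration of what automated call summarization can look like, here is a minimal sketch using the Hugging Face transformers summarization pipeline on an invented transcript; a production contact-center integration would look quite different.

```python
# A minimal call-summarization sketch on an invented transcript
# (illustrative only; not a production contact-center integration).
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default model
transcript = (
    "Customer called about a duplicate charge on their May invoice. "
    "Agent confirmed the error, issued a refund, and offered a credit. "
    "Customer accepted and asked to be notified when the refund posts."
)
summary = summarizer(transcript, max_length=40, min_length=10)
print(summary[0]["summary_text"])  # a short post-call note, generated automatically
```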
AI for the everyday experience practitioner
You do not need to be a data scientist or statistician to leverage AI in your everyday life. Simply by using Siri or a platform like Medallia, you are leveraging AI to enhance your productivity. In the future, generative methods will increasingly become the engine to drive greater productivity and personalization in your workflows and interactions with businesses.
If there’s one message for you to take away from this blog, it should be: AI is not some scary new technology that is going to take your job. It’s a tool that will likely require ongoing human assistance to be as effective as promised. That means people will need to continually monitor and maintain your AI models of any kind, in-house or open-source, to meet your compliance, quality, and regulatory requirements. Maintaining data security, training data transparency, and output quality will require people, from ethicists to data engineers. Ultimately, while usable AI/ML is a genuine breakthrough, we must be circumspect in its use even as we enjoy the convenience it brings to our lives.
Source: Medallia