With rapid advances in AI, people tend to worry, or sometimes panic. The ethical implications of AI in retail can feel daunting and challenging to tackle, and the path forward isn’t always clear for either customers or retailers. But sober-minded retailers need to understand, and work to protect, what most consumers are really concerned about: the security of their data.
While data security is nothing new, the growing use of AI across every industry amplifies the need to protect customer data. And if retailers want to take advantage of the powerful headless commerce model, they need to ensure they’re prioritizing consumers’ top concerns.
Data security will be mission-critical as the use of AI grows more prevalent.
As the number of viable business use cases, and thus benefits, of AI in retail grows, so too do the challenges and risks. Customer concerns become retailer responsibilities, and tight data security tops the list. Avoiding data breaches amid rising cybercrime will be increasingly important to companies that want to keep consumers happy.
Businesses are gathering vast and increasing amounts of data while expanding their capabilities with powerful AI, and they’re mining this data for highly specific insights relevant to almost any use case. This highlights the growing potential of large language models (LLMs), which retailers can take full advantage of, so long as they position themselves properly. Data equals power, but it has to be the right data.
Data breaches have always been a concern for retailers, but AI increases the risk level. Both retailer and third-party vendor LLMs need rigorous safeguards to protect against data breaches. The large amounts of data generated by LLM prompts shouldn’t be shared with outside parties, and that often includes software providers.
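One concrete way to keep prompt data in-house is to strip personally identifiable information before anything leaves the retailer’s systems. Below is a minimal sketch of that idea in Python; the regex patterns and the placeholder tokens are illustrative assumptions, not a vetted redaction scheme or any vendor’s API.

```python
import re

# Illustrative PII patterns only; a production system would use a vetted
# redaction library and cover far more cases (names, addresses, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before the prompt
    is sent to any third-party model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

# Hypothetical usage: only the redacted text crosses the boundary.
raw = "Customer jane.doe@example.com (555-867-5309) wants a return."
print(redact(raw))
# -> "Customer [EMAIL] ([PHONE]) wants a return."
```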
Additionally, retailers need to be prepared for the implications of AI hallucinations, when the technology fabricates information under the guise of truth. AI hallucinations can pose a range of problems, from misleading a customer with poor product recommendations to approving returns and exchanges for transactions that don’t meet a company’s return policy qualifications.
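As a rough illustration of one common safeguard: the sketch below routes any bot-proposed return approval through a deterministic policy check, so a hallucinated approval can’t slip through on its own. The `Transaction` fields and the 30-day window are assumptions made for the example, not any specific retailer’s policy.

```python
from dataclasses import dataclass

RETURN_WINDOW_DAYS = 30  # assumed policy window, for illustration

@dataclass
class Transaction:
    order_id: str
    days_since_purchase: int
    final_sale: bool

def policy_allows_return(tx: Transaction) -> bool:
    """Deterministic policy check, independent of the AI model."""
    return not tx.final_sale and tx.days_since_purchase <= RETURN_WINDOW_DAYS

def approve_return(tx: Transaction, model_says_approve: bool) -> bool:
    """The model can only recommend; policy has the final say.
    This blocks a hallucinated approval."""
    return model_says_approve and policy_allows_return(tx)

# A hallucinating bot approves an out-of-window return; the guard rejects it.
tx = Transaction("ord-9", days_since_purchase=45, final_sale=False)
print(approve_return(tx, model_says_approve=True))  # -> False
```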
Software providers and retailers that rely on AI must both assume full responsibility for how these features impact customers.
What do consumers think about data security in AI?
Contact center agents are armed with the knowledge they need to address customer issues rapidly and keep both customers and retailers happy. But that doesn’t mean consumers are necessarily thinking about fooling bots, accidentally or intentionally, during returns processes.
Most customers understand at least the basics of AI, and around a third consider themselves knowledgeable, according to a Talkdesk retail consumer poll about bias and ethical AI. But one highlight from the survey shows that over 70% of respondents confirmed that product recommendations made them feel like brands were tracking them, either by listening to their in-person conversations or by monitoring their browsing histories. This suggests that brands aren’t doing enough to keep customer data secure, even as AI keeps growing more powerful.
Meanwhile, almost four-fifths of those surveyed said transparent and responsible AI use by retailers would enhance their trust in those retailers, and that they wanted retailers to explicitly seek consent before using customer data to inform AI models. What’s more, almost 90% of respondents wanted this transparency enforced, and nearly half worried that less responsible AI use would make the shopping experience less personalized and inclusive. This is not an AI path retailers want to tread. Instead, retailers should inform customers openly and honestly about their ethical, responsible use of AI, with data security at its center.
It’s not just about what customers expect retailers to do, but about what retailers should be doing.
The landmark California Consumer Privacy Act (CCPA), introduced in 2018, came hot on the heels of the security-focused General Data Protection Regulation (GDPR), originally published by the EU two years earlier, in 2016. While the baseline assumption of security hasn’t changed, the conversation has shifted: now it’s about consumer privacy.
The CCPA requires retailers to confirm, for customers who ask, whether and how their data was used. Unfortunately for retailers, many didn’t initially track this. But they’re now investing in customer data platforms (CDPs) so that they can better follow customer engagements and answer those requests.
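As a toy illustration of what that CDP-backed lookup might involve, here’s a minimal sketch; the `CustomerDataPlatform` class and its schema are hypothetical and don’t reflect any real CDP product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class DataUseRecord:
    """One recorded use of a customer's data (hypothetical schema)."""
    purpose: str    # e.g. "email marketing", "AI recommendations"
    timestamp: str
    shared_with: list[str] = field(default_factory=list)

@dataclass
class CustomerDataPlatform:
    """Toy stand-in for a CDP: maps customer IDs to data-use records."""
    records: dict[str, list[DataUseRecord]] = field(default_factory=dict)

    def log_use(self, customer_id: str, record: DataUseRecord) -> None:
        self.records.setdefault(customer_id, []).append(record)

    def access_request(self, customer_id: str) -> list[DataUseRecord]:
        """Answer a CCPA-style 'was my data used?' request."""
        return self.records.get(customer_id, [])

# Usage: log a use, then answer an access request.
cdp = CustomerDataPlatform()
cdp.log_use("cust-42", DataUseRecord("AI recommendations", "2024-01-15"))
for r in cdp.access_request("cust-42"):
    print(f"{r.timestamp}: {r.purpose} (shared with: {r.shared_with or 'no one'})")
```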
The GDPR and the Talkdesk Privacy Policy both inform Talkdesk’s position on cookies and on using cookie-based data for advertising purposes. These kinds of statements are responses from retailers to customers who don’t want their data used for advertising or who want more transparency about its use. This has had huge consequences for both the tech and retail industries.
Brands are figuring out how to develop programs based on the data they can still collect, so long as it’s strongly secured and kept strictly confidential, in line with customer expectations. Ideally, this should happen before customers ask for it.
AI ethics in retail represents the next iteration of data privacy concerns for retailers.
Most legislation securing data standards is already in place, and conversations about ethics amid advancing AI are an extension of that. Retailers’ requirements will shift, and will vary by location, as more government regulations controlling AI emerge.
Consumers want:
To know what retailers are using their data for.
Access to their own data.
To stop retailers from using their data, if they wish.
Layer on AI and the foundation remains the same, but the question is framed differently. Retailers now have to ask whether customers are okay with their data being used for AI-generated recommendations, which are much more powerful when driven by whole customer databases.
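In code, that question can reduce to an explicit consent flag checked before any customer record feeds a recommendation model. The sketch below assumes a hypothetical `has_ai_consent` field; a real implementation would tie it to recorded, auditable consent events.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """Hypothetical customer record with an explicit AI-consent flag."""
    customer_id: str
    purchase_history: list[str]
    has_ai_consent: bool  # set only when the customer explicitly opts in

def recommendation_training_set(customers: list[Customer]) -> list[list[str]]:
    """Include a customer's data in the AI model only with consent."""
    return [c.purchase_history for c in customers if c.has_ai_consent]

customers = [
    Customer("cust-1", ["sneakers", "socks"], has_ai_consent=True),
    Customer("cust-2", ["jacket"], has_ai_consent=False),
]
print(recommendation_training_set(customers))  # -> [['sneakers', 'socks']]
```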
Retailers had no choice but to protect data when regulations and laws were passed, and they have no choice but to secure data in the age of AI.
Source: Talkdesk