in , ,

The PR nightmare of a parcel delivery company: an AI chatbot that goes rogue and mocks customers

Read Time:2 Minute, 24 Second

Unexpectedly, a well-known UK package delivery company’s venture into artificial intelligence (AI) has turned disastrous for its reputation. The company’s AI-powered chatbot, which was meant to help consumers, insulted a user and made fun of the business, which prompted the company to take immediate action.

The incident started when a system update was done by Geopost (DPD), a well-known postal delivery business in the UK, which resulted in the chatbot acting erratically. This AI error was discovered when a client named Ashley Beauchamp posted about his perplexing conversation with the chatbot on the social media site X, which was formerly known as Twitter.

Beauchamp shared a series of screenshots showing the chatbot disparaging DPD, the company it was meant to represent, in addition to using offensive language in a customer care interaction. Beauchamp’s essay went viral, receiving 800,000 views in a single day, illustrating the potential damage that artificial intelligence errors may do to a company’s brand.

The client revealed how he tricked the chatbot into making up dramatic complaints about DPD by instructing it to write a haiku about “how useless DPD are.” In compliance, the chatbot even went so far as to call DPD the “worst delivery firm in the world” and that it would never suggest the business to anyone.

The DPD responded to the event very quickly. The business, which is renowned for incorporating AI into its customer support offerings, quickly shut the chatbot’s broken component and started system updates to fix the problem. The occurrence was described as a “error” that is presently being investigated by a DPD representative.

See also  X Expands Team: Hiring 100 Content Moderators for Austin Trust and Safety Center

This incident highlights the difficulties businesses encounter when using chatbots driven by artificial intelligence. The warning notice that Snapchat published along with the debut of its chatbot in 2023 shows that DPD is not the only organization dealing with these problems. Responses “may include biased, incorrect, harmful, or misleading content,” according to the notice.

This incident comes after another one in which the chatbot at a vehicle dealership accidentally agreed to sell a Chevrolet for just $1, leading to the chat feature’s deactivation. These incidents highlight the necessity for businesses to proceed with caution when incorporating AI into customer support procedures.

Notwithstanding the difficulties, DPD highlighted that its customer care system’s artificial intelligence component is meant to complement human aid and has been effectively running for a number of years. Users were reassured by the corporation that the AI glitch was quickly fixed, and that the impacted component is presently receiving upgrades and is disabled.

The episode serves as a clear reminder of the significance of thorough testing and ongoing monitoring to avert similar PR catastrophes as firms continue to adopt AI to improve consumer experiences. To preserve client trust and confidence in an ever-changing technology context, businesses must strike a balance between innovation and the requirement for strong safeguards.

What do you think?

Revealing the Enigma: Investigating NASA’s Magnificent Asteroid Bennu Sample in Detailed High-Resolution Images

Police in Norfolk Are Under Investigation for Presumptively Not Responding to a Distress Call