Artificial intelligence is increasingly entering e-business. In today's publication, we focus on identifying the risks associated with providing automated responses to customer feedback in the e-commerce industry. Bearing in mind that the AI sector is growing rapidly, this paper describes the risks current as of the date of publication. AI algorithms learn and adapt to ever higher market requirements every day; even so, we are still able, often without much trouble, to tell what is the product of human labour and what comes from artificial intelligence. There is nothing wrong with that either, provided it is not accompanied by negative feelings and emotions on the part of our customers.

Automated responses to customer feedback or comments - yes, but under certain conditions.

As customers of various types of shops offering products or services, we are positively impressed by an individual approach. What about the e-commerce sector? Here, too, services compete with each other to improve the attractiveness of their e-shops, to increase customer satisfaction, to raise the level of after-sales service, and so on. It is pleasant to receive a courteous reply from the administrator of a given e-shop in which the question we posed is answered substantively. But what happens when we see that, to a similar but slightly different question (because the devil is in the detail), the shop administrator responds in exactly the same way, using the same phrases, the same emoticons and the same polite form of address? In such a situation all the charm is broken, as we realise we have been treated by a machine. There would be nothing wrong with that either, were it not for the fact that the response did not fulfil its primary role - it did not dispel the doubts raised in the query. At such moments the platform's image suffers in the customer's eyes. The effects of using an automated response are then:

  1. an inaccurate answer to a straightforward question (of course, the questioner is responsible for the content of the question, but it must be remembered that a human who does not understand the questioner's intent simply asks for clarification; when AI nevertheless tries to answer a question its algorithm has not fully understood, the results can be alarming - a minimal sketch of such a clarification check follows this list);
  2. templated polite answers that do not differ from one customer to the next (all the charm of the individual approach the e-platform advertises is gone);
  3. loss of the customer's time, because their problem has not been solved (in practice, one encounters a real accumulation of lost customer time precisely where artificial intelligence is applied to customer service - more on this later).
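
The clarification behaviour described in point 1 can be expressed as a simple gate. The sketch below is illustrative only: classify_intent and the CONFIDENCE_THRESHOLD value are assumptions made for the example, not a description of any particular platform. The point is merely that a bot which is not sufficiently confident about the customer's intent should ask rather than guess.

```python
# A minimal sketch, not a production system. classify_intent and
# CONFIDENCE_THRESHOLD are hypothetical, used only for illustration.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # assumed cut-off; would be tuned on real conversation logs


@dataclass
class Intent:
    label: str
    confidence: float


def classify_intent(message: str) -> Intent:
    """Placeholder for a real intent classifier; the heuristic below only makes the sketch runnable."""
    if "refund" in message.lower():
        return Intent("refund_request", 0.90)
    return Intent("unknown", 0.30)


def respond(message: str) -> str:
    intent = classify_intent(message)
    if intent.confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: do not guess - ask for clarification, as a human agent would.
        return "I want to be sure I understand you correctly - could you describe the issue in a little more detail?"
    return f"Handling your request ({intent.label})..."


if __name__ == "__main__":
    print(respond("Where is my refund?"))          # confident, so answered directly
    print(respond("The thing arrived, but well"))  # unclear, so a clarification is asked
```

A follow-up question costs the customer a moment; an answer to a misunderstood question can cost the shop the customer.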

Artificial intelligence responds to customer comments and feedback and answers calls on the helpline.

In such circumstances we can end up with a real accumulation of wasted customer time, which in extreme cases ends with losing the customer altogether. Imagine a situation in which a customer asks in a chat or on a social media profile for a solution to a problem related to a purchase. They quickly receive an answer from a bot which, however, does not resolve all their doubts. Worse, if the answer misleads the customer, this may amount to an infringement of the Act of 16 April 1993 on combating unfair competition (Journal of Laws of 2022, item 1233) - misleading the customer in a way that may influence their future decision to purchase goods or services. The customer then decides to call the shop to receive a full answer to their question about the product. During the phone call they are informed that a bot is on the line to solve their problem. The customer describes the problem once again but receives no answer; instead, they are asked to phrase the question differently, because the bot's algorithms do not understand it. As a result, the customer enters the same data once more, wastes time and, if there is no option to connect with a customer adviser, unfortunately leaves unsatisfied. Worse still, they may leave for good, annoyed by the way customer service is handled.
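
The loop described above can be broken with one defensive rule: cap the number of failed bot attempts and then hand the conversation, together with what the customer has already written, to a human adviser. The sketch below is only an illustration under assumed names - bot_answer, escalate_to_human and MAX_BOT_ATTEMPTS are hypothetical placeholders, not the interface of any real helpline system.

```python
# A minimal sketch of an escalation rule; bot_answer, escalate_to_human and
# MAX_BOT_ATTEMPTS are hypothetical names used only for illustration.

from typing import Optional, Tuple

MAX_BOT_ATTEMPTS = 2  # assumed limit on "please rephrase" round trips


def bot_answer(message: str) -> Optional[str]:
    """Placeholder for the bot's answer pipeline; None means the message was not understood."""
    return None  # simulate the failure scenario described above


def escalate_to_human(message: str) -> str:
    """Placeholder for routing the conversation (with its history) to a human adviser."""
    return "I am connecting you with a customer adviser who already has your question."


def handle_message(message: str, failed_attempts: int) -> Tuple[str, int]:
    """Return the reply and the updated count of failed bot attempts."""
    answer = bot_answer(message)
    if answer is not None:
        return answer, 0
    failed_attempts += 1
    if failed_attempts >= MAX_BOT_ATTEMPTS:
        # Stop looping: the customer should not have to repeat themselves yet again.
        return escalate_to_human(message), failed_attempts
    return "Could you phrase your question a little differently for me?", failed_attempts


if __name__ == "__main__":
    reply, failures = handle_message("Parcel marked as delivered, but nothing arrived", 0)
    print(reply)   # first miss: the bot asks for a rephrase
    reply, failures = handle_message("The courier says delivered, I received nothing", failures)
    print(reply)   # second miss: the conversation is handed over to a human
```

The design choice is simple: the bot may ask for a rephrase once, but it must never become a wall between the customer and a human being.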

A lot of risks, a lot to fix, and yet everything keeps moving in the direction of service automation.

Certainly, AI algorithms are getting better and making fewer errors every day. The question is whether the individual approach that customers value so strongly will be maintained. Will the algorithms develop solutions deceptively similar to the professional service of a salesperson committed to resolving all the customer's concerns? For the full answer we will still have to wait.

Legal basis:

Act of 16 April 1993 on combating unfair competition (Journal of Laws of 2022, item 1233).