Mikhail Solodovnikov

Ethics of AI – Balancing Progress and Responsibility

Mikhail Solodovnikov discusses the ethical issues raised by AI and how companies can balance progress and responsibility when leveraging AI in the workplace.

AI is becoming far more prevalent across many industries, with almost $100 billion in revenue expected to come from AI software by 2025. That growth, however, raises questions about the ethical concerns businesses need to keep in mind when it comes to AI.

Although the use of AI raises many issues and concerns, Mikhail Solodovnikov says that some of the key areas to be mindful of are privacy and how data is used, the development of bias in AI, effects on employment, and the autonomy of these programs, especially when they are used to operate larger systems such as self-driving vehicles.

Learn more about AI and these ethical issues below, as well as how companies can begin to balance progress and responsibility when leveraging AI in the workplace.

What Is Considered AI?

AI (artificial intelligence) refers to any machine or program that can mimic or simulate some aspect of human intelligence. In practice, AI is any system that essentially thinks like a human or performs functions closely associated with human behavior, such as learning and problem-solving.

Where Is AI Use Most Prevalent?

One of the most common functions of AI is machine learning. This is where a machine learns and improves without direct human input, usually through repeated trial and error at a task, or by being fed vast amounts of information related to its intended task.

Some of the most common places people will experience AI are within their own homes or on their own devices. Those who use chatbots and virtual assistants such as Amazon’s Alexa or Apple’s Siri are frequent users of AI, and there is also the ever-growing platform of ChatGPT.

Industries also heavily rely upon AI, with the healthcare sector being the most prominent user of these technologies, where it helps with health record storage, robot-assisted surgeries, and virtual consultations. Other sectors using AI include marketing, logistics, education, and finance.

Current Ethical Issues

Privacy and Data Use

One of the ways AI works most effectively is by using data, but in some instances it may gain access to personal data that was not provided with full knowledge or consent.

Companies using this data and incorporating AI to analyze it need to ensure they have full consent from the consumer, and should build algorithms that protect privacy, such as by decoupling personal information or identifiers from the data being used in order to anonymize it.
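As a rough illustration of what decoupling identifiers can look like, here is a minimal Python sketch of pseudonymization. The field names and salting scheme are illustrative assumptions, not a production-grade anonymization method, and true anonymization generally requires more than this:

```python
import hashlib
import secrets

# A secret salt stored separately from the data; without it, the hashed
# identifiers cannot easily be linked back to real people.
SALT = secrets.token_hex(16)

def pseudonymize(record, id_fields=("name", "email")):
    """Replace direct identifiers with salted hashes, keeping other fields."""
    out = {}
    for key, value in record.items():
        if key in id_fields:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
            out[key] = digest[:12]  # a short token stands in for the identity
        else:
            out[key] = value
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "age": 34}
anonymized = pseudonymize(record)
# Direct identifiers are replaced; analytic fields such as "age" remain usable.
```

The point of the separate salt is that the data alone no longer reveals who each record belongs to, while the non-identifying fields stay available for analysis.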


Bias in AI

AI can also exhibit bias, which can skew the results of AI-based learning and programs. This is usually not a problem with the program itself, but rather a result of the dataset the AI is given to process and learn from.

Stereotypes can become reinforced in the AI’s behavior as it makes assumptions from such data. If the information provided already contains a bias, the AI will carry it forward and perhaps even amplify it.

Businesses using AI need to be sure they are removing any bias from the material given to AI programs, as well as monitoring the datasets the system automatically pulls information from.
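One simple starting point for the monitoring described above is to audit how different groups are represented in a dataset before a model ever learns from it. A minimal Python sketch, where the field names and data are hypothetical:

```python
from collections import Counter

def group_balance(records, group_field):
    """Report each group's share of a dataset, to help spot skewed inputs."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# A tiny illustrative dataset: group "A" dominates, so a model trained on it
# would see far more examples from one group than the other.
applicants = [
    {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0},
    {"group": "B", "outcome": 0},
]
print(group_balance(applicants, "group"))  # {'A': 0.75, 'B': 0.25}
```

A heavily imbalanced share like this does not prove the resulting model will be biased, but it is an early warning that the training material may already carry the skew the article warns about.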



Impact on Employment

While it is clear that AI has benefited productivity, and therefore the economic growth of companies and industries that leverage it, there are still concerns about how advancements and wider use of AI may affect the human workforce.

While AI cannot substitute for every job, it has the widest impact on roles that sit between manual, unskilled work and jobs requiring a high level of skill or qualifications. This means office workers, retail staff, and factory employees are the most likely to be at risk, should automation and AI continue to advance as they have been.


Autonomy and Accountability

As AI becomes more advanced, including the advent of more autonomous vehicles and even weaponry, it raises the question of accountability.

For example, should an autonomously driven vehicle get into an accident, who will be at fault? Would it be the person who programmed the AI, or is it the AI system itself that is in the wrong because of its learned decision-making?

Companies using AI, especially those running highly autonomous systems, need to stay continually aware of their AI programs. Even more than with a human employee, their impact and performance should be monitored to ensure they behave as required and expected, for both general productivity and safety.

By Mikhail Solodovnikov
