Inflated Expectations or Temporary Difficulties?

What hinders the development of artificial intelligence?

Artificial intelligence is one of the most discussed topics of recent years. How much is being invested in the technology? What barriers to AI development exist today, and how do they affect the market?

Since 2015, the world has seen rapid growth in corporate investment in artificial intelligence. By 2023, more than half of companies (54.6%) said they were testing the capabilities of generative AI for their own tasks, and 18.2% had already implemented it in their operations. Yet despite the obvious acceleration of the technology, 2022 saw the first recorded decline in investment: $189.59 billion, which is $86.55 billion less than the year before. Interest among ordinary users has also begun to wane: in June of this year, traffic to the ChatGPT website fell by almost 10% compared with the previous month.

Artificial intelligence is a blessing for many, individuals and companies alike. In the gambling sector, for instance, AI is used both by online betting platforms and by bettors themselves: an AI tool can help you get a better prediction for your favorite team's next game. The perks are many, but not everything is black and white. Let's look at the risks together.

Risks and Ethics

As AI becomes more ubiquitous, so do the risks. First, the systems make mistakes. In the first quarter of 2023 alone, 45 incidents involving AI were recorded; the most common were road accidents involving driverless vehicles and the spread of false information.

Second, decisions made by AI can be difficult to explain, which is especially critical in areas such as medicine and justice. Moreover, it is no secret that algorithms often "hallucinate": some legal cases drafted by AI are so believable that even an experienced lawyer can be taken in, yet they have nothing to do with reality. This creates barriers to transparency and trust in the technology.

There are, of course, many reasons for the slowdown, from general market overheating to technological obstacles. Still, the main barriers to scaling the technology can be identified.

Lack of Data for AI Training

Developing and training AI systems requires large amounts of data, a critical aspect of both building and applying the technology. Experts distinguish two types of information on which algorithms are trained: high-quality and low-quality. The first includes scientific articles, dissertations, research, and world fiction; the second covers less reliable sources such as social media posts and their comments, forum discussions, and entertainment articles in the media. According to experts, an acute shortage of the first type of material will set in by 2026. Moreover, erroneous, outdated, and unreliable data is already significantly reducing the effectiveness of AI.
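In practice, the split into quality tiers described above is often operationalized as a simple filter over document metadata. The sketch below is purely illustrative: the source-type labels and document fields are hypothetical, not taken from any real training pipeline.

```python
# Illustrative sketch: sorting training documents into the two quality
# tiers described above, based on hypothetical source-type metadata.

HIGH_QUALITY_SOURCES = {"journal_article", "dissertation", "book"}
LOW_QUALITY_SOURCES = {"social_post", "comment", "forum_thread", "tabloid"}

def quality_tier(doc):
    """Classify a document as 'high', 'low', or 'unknown' quality."""
    source = doc.get("source_type")
    if source in HIGH_QUALITY_SOURCES:
        return "high"
    if source in LOW_QUALITY_SOURCES:
        return "low"
    return "unknown"

corpus = [
    {"id": 1, "source_type": "journal_article"},
    {"id": 2, "source_type": "comment"},
    {"id": 3, "source_type": "blog"},
]
tiers = [quality_tier(d) for d in corpus]
print(tiers)  # ['high', 'low', 'unknown']
```

Real pipelines use far richer signals (perplexity scores, deduplication, classifier-based filtering), but the principle of tiering sources is the same.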

Much data is also unavailable because of confidentiality, legal restrictions, or the monopolization of information by certain companies. As a result, one side effect of AI development is regular copyright infringement. A massive lawsuit has already been filed against OpenAI, the developer of ChatGPT: the company is accused of violating the privacy of millions of people whose data was used to train the chatbot. Many companies are therefore taking a more legal, but no less controversial, path. In August, Zoom announced that it would use user content without restriction to train and test AI functionality, and users cannot refuse to provide their data to the company.

The shortage of data for AI development therefore remains acute. In some cases, especially in the medical and financial industries, information can be extremely sensitive, and access to it is limited by law and by ethical considerations, which makes it difficult for AI to penetrate these areas.

High Investments and No Guarantee of Payback

Developing advanced AI systems, especially in deep learning and neural networks, requires significant financial and technical resources: hardware, computing power, data scientists, and AI researchers. Investments can be high, but the return on investment and the success of projects are never guaranteed. This is especially true at the initial stage, when the technology is experimental and has no direct market application; such AI projects are typically focused on innovation rather than quick results.

The head of SoftBank, for example, has lost about $150 billion over the entire period of his investments in AI. Another striking example is OpenAI: supporting ChatGPT costs about $700,000 a day, yet the chatbot has not yet brought the desired profit, and in 2022 the company's loss amounted to $540 million. OpenAI nevertheless continues to develop the product thanks to large investments, notably from Microsoft, which has already put $1 billion into the startup and plans to invest another $10 billion.

South African businesses, meanwhile, are still weighing the pros and cons of investing in AI: some domestic companies want to implement the technology within the next five years, while roughly as many have no such plans. In many cases, of course, the reluctance comes down to the considerable investment required. This is why collaboration between business, academia, and government is so important today: it can provide joint funding for research and development and help create incentives for innovation in AI.

A third risk, returning to the ethical concerns above: AI systems can inherit bias from the data they were trained on, and news of discriminatory AI conclusions is frequent. Amazon, for example, shut down a project that was meant to speed up hiring: the algorithm did its job, but its selections consistently favored male candidates over women.

Businesses are therefore forced to work actively on solutions that reduce these risks. OpenAI, for example, created the Hindsight Experience Replay algorithm, which lets a system learn from its failed attempts so as to avoid repeating them. This does not rule out AI miscalculations, but it gives hope that their number will gradually decrease.
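The core idea of Hindsight Experience Replay is goal relabeling: after a failed episode, the agent pretends that the state it actually reached was the goal all along, turning the failure into a useful training example. The sketch below uses a toy bit-vector environment of my own invention, not OpenAI's actual implementation:

```python
# Minimal sketch of hindsight relabeling, the core idea behind
# Hindsight Experience Replay (HER). The environment (3-bit states)
# and transition format are toy simplifications for illustration.

def reward(achieved, goal):
    # Sparse reward: success (0.0) only when the goal is reached exactly,
    # otherwise -1.0 -- the setting where HER helps most.
    return 0.0 if achieved == goal else -1.0

def relabel_with_hindsight(episode):
    """Given a (possibly failed) episode of (state, action, next_state,
    goal) tuples, return copies in which the goal is replaced by the
    state actually reached at the end of the episode, with rewards
    recomputed -- turning a failure into a success example."""
    achieved_goal = episode[-1][2]  # final next_state
    return [
        (state, action, next_state, achieved_goal,
         reward(next_state, achieved_goal))
        for state, action, next_state, goal in episode
    ]

# A toy 3-step episode that failed to reach its original goal (1, 1, 1)
episode = [
    ((0, 0, 0), 0, (1, 0, 0), (1, 1, 1)),
    ((1, 0, 0), 1, (1, 1, 0), (1, 1, 1)),
    ((1, 1, 0), 0, (0, 1, 0), (1, 1, 1)),
]
hindsight = relabel_with_hindsight(episode)
# The final transition is now a success relative to the achieved goal
print(hindsight[-1][-1])  # 0.0
```

In full HER, both the original and the relabeled transitions go into the replay buffer of an off-policy learner such as DQN or DDPG, so the agent gets dense learning signal even when it rarely reaches its assigned goals.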

Control Over AI

In an attempt to ensure AI safety and data privacy, states are seeking control over the development and use of the technology. The US President has called on leading developers to follow the principles set out in the "Blueprint for an AI Bill of Rights" and to protect citizens from the negative consequences of AI's spread. The European Union has also finished its set of rules for the technology, the AI Act, which obliges AI developers to respect copyright and disclose what materials their algorithms are trained on.

Note that South Africa currently has no specific legislation on AI. The Presidential Commission on the Fourth Industrial Revolution has recommended reviewing and creating policies and legislation aimed at enabling stakeholders to use the technology responsibly.

Such actions by states, logical as they are, could significantly slow the development of the technology, and business is extremely concerned. Recently, 160 top managers of the largest IT companies signed an open letter calling for a review of proposed AI legislation; they fear such initiatives will multiply compliance costs and obstruct the spread of the technology. There are already examples: in spring, Italian authorities restricted the use of ChatGPT in the country over non-compliance with privacy rules.

The question now on the agenda, then, is who should be responsible for the safety of smart technologies. The main AI developers propose handing regulatory issues to business itself rather than to the state; to this end, OpenAI, Microsoft, Google, and Anthropic have created their own organization to work on AI safety standards. It is clear, however, that solving these problems requires open cooperation among business, government, and society.

Recession or Calm?

Despite some signs of a downturn, Stanford analysts are confident that AI is now at the deployment stage, and that the dip in performance is a temporary phase needed to stabilize the market after it "overheated." In the near future, businesses and states will increasingly seek compromises to overcome the barriers described above: creating AI ethics commissions, developing legislative frameworks, and running programs to train the public in the new technologies. And by 2025, generative AI will be as popular and commonplace as Microsoft Office products are today.