March 21, 2026

Intellectual Substitution. New bill on artificial intelligence strengthens the role of the state and implies "sovereign" neural networks

Russia has put forward a bill on the regulation of artificial intelligence for public discussion. The document introduces new rules for developers, businesses, and users, and strengthens the state's role in controlling AI. If adopted, the law may come into force on September 1, 2027. More details about the bill can be found in this material from Novaya Gazeta Europe.

Illustration: Novaya Gazeta Europe.

"It is planned that the document will come into force on September 1, 2027, establishing clear rules for developers, businesses, and the state. It will protect citizens from hidden manipulation and discriminatory algorithms," the Ministry of Digital Development stated.

The key idea is to divide AI into several categories: "sovereign" and "national" models developed in Russia, as well as "trusted" systems that can be used in the public sector and critical infrastructure. The state is betting on technological independence: the development, training, and use of such models is expected to take place exclusively within the Russian Federation, and the technologies themselves are to rely on Russian resources and specialists.

Spiritual Values

The bill enshrines the idea of technological independence. To that end, all stages of development and training must be carried out within Russia, and all developers must be Russian. "Favorable conditions" for sovereign AI are to be ensured in the Russian Federation, and specific requirements for such models will be determined by the government. Already among the principles of regulation, a course towards "traditional spiritual and moral values" can be identified.
"Ensuring the development, implementation, and application of artificial intelligence technologies based on values such as life, dignity, human rights and freedoms, patriotism, citizenship, service to the Fatherland and responsibility for its fate, high moral ideals, a strong family, creative labor, the priority of the spiritual over the material, humanism, compassion, justice, collectivism, mutual assistance and mutual respect, historical memory and continuity of generations, the unity of the peoples of Russia," the bill states.

The principle of security must also be taken into account: "...prevention of threats to the constitutional order, defense and security of the state, technological independence of the state, life and health of individuals, business reputation and property of individuals and legal entities, individual entrepreneurs, the environment, as well as ensuring information security and stable functioning of informatization objects using artificial intelligence technologies."

Photo: Grigory Sysoyev / Sputnik / Imago Images / Scanpix / LETA.

Protection of User Rights

If a company or service uses artificial intelligence to sell goods or provide services without human intervention, it is obliged to inform the client honestly. The same applies to more serious situations: if AI makes decisions that affect a person, for example, approving or refusing a service, the user must be warned in advance that it is the system, not a person, making the decision. Moreover, in some cases determined by the government, a citizen will have the right to refuse such an approach and request the service in the usual way, without AI. If a decision made with the help of AI (for example, in a government agency or state-owned company) seems incorrect to a citizen, it can be challenged without going to court, simply by filing a complaint.
According to the bill, developers are obliged to create secure models: exclude the risk of discrimination, assess possible threats in advance, document the system's operation, and warn about inadmissible uses. Operators who implement and use such systems must ensure their safe operation, regularly test them, track incidents, and, if there is a risk of harm, stop their operation immediately. Service owners must establish rules of use, prevent illegal use of the technologies, and must inform users that they are interacting with AI. In addition, they must implement restrictions to prevent the creation of illegal content using such services.

If a company uses AI to provide services or sell goods without human intervention, it is obliged to notify clients about it. The same applies to cases where algorithms make decisions affecting human rights. In some situations, a citizen will be able to refuse the use of AI and request an alternative method of receiving the service. In addition, the bill introduces the right to challenge decisions made with the help of algorithms and to claim compensation for damages.

Content Labeling

Special attention is paid to transparency. Any content created with the help of AI, whether texts, images, or audio, must be accompanied by a special label. Large platforms (more than 100 thousand users per day) are obliged to check for such labels and, in their absence, either add them or delete the material.

Copyright

Works created with the help of AI may be protected by law if they are original. At the same time, the rules for using such materials must be specified in user agreements in advance. Responsibility for copyright infringement, as a rule, lies with the user.

International Cooperation

Russia plans to develop international cooperation in the field of AI, participate in standard setting, and promote its own technologies abroad.
At the same time, control over cross-border technologies and data is maintained: their use may be restricted or prohibited.

Role of the State

The President determines the AI development strategy, the government forms support measures and regulates the industry, and specialized bodies implement technologies in management and control their application.

Infrastructure Development

The document provides for support for the creation of data centers and supercomputers. Benefits, simplified procedures, and state funding may be introduced for them.

In October, Vladimir Putin stated that Russia could create its own AI "only by relying on its culture, worldview, and traditional values," and "not by copying others' solutions, which leads to technological and ideological dependence."

The bill proposes to divide neural networks into several categories and regulate them differently. So-called "sovereign" and "national" models, that is, those developed in Russia, will receive state support. In addition, a separate register of "trusted" neural networks will be created: they can be used in government systems and critical infrastructure only after verification by the FSB, Kommersant notes.

As Alexander Tyulkanov, a member of the artificial intelligence standardization commission of the French Standardization Association AFNOR, explained to the publication, this approach is generally close to international practice: various countries also impose stricter requirements on AI in critically important areas. However, the Russian model is characterized by heavier state involvement. In particular, in the EU, market access is often based on self-declaration or checks by independent organizations, whereas in Russia, state certification plays a key role. In addition, the bill provides for mandatory localization of data and infrastructure within the country for the use of such systems in the public sector.
Tyulkanov also notes that in the European Union, unlike Russia, requirements are set out in technical standards. "In my opinion, this is not a bad thing, because specialists from the subject area know the level of technology best. The key should be not only the protection of AI systems from external influences, but also consumer safety. This is the advantage of the European approach," says the expert.

According to Yuri Borisov, partner at the law firm Digital Analogue Partners, the current version of the bill needs serious refinement. He notes that today there are three main models of AI regulation in the world, the European, the American, and the Chinese, and the Russian document is a combination of them. For example, from the European approach the bill borrows the risk classification system and content labeling requirements. The American approach involves protecting the domestic market and technologies from foreign competition, which may lead to restrictions or strict requirements for foreign models. The Chinese model is manifested in the emphasis on predominantly domestic solutions. As a result, the bill looks like a combination of different, sometimes contradictory approaches, and will likely be refined for greater consistency.

The idea that Russian AI systems should comply with values such as patriotism and mercy raises the most questions from experts. "Certification by the FSB is also envisaged. And how the department will check how patriotic or merciful a neural network is is a very difficult question. Such abstract categories are currently beyond the capabilities of artificial intelligence," says Borisov.


TL;DR

  • Russia plans to regulate AI with a new bill, potentially effective September 1, 2027.
  • The bill promotes "sovereign" and "national" AI models developed within Russia.
  • It emphasizes traditional spiritual and moral values and national security in AI development.
  • Users will have rights regarding AI decisions, including the ability to opt-out and appeal AI-driven outcomes.
  • AI-generated content will require special labeling, with platforms responsible for enforcement.
  • The bill includes provisions for copyright of AI-generated works and international cooperation, but with data control.
  • Experts note the bill combines international regulatory models but questions arise about assessing abstract values like patriotism in AI.
  • State involvement, including FSB certification for trusted AI systems, is a significant feature.
  • Support for domestic AI infrastructure, like data centers and supercomputers, is planned.

