Kaspersky has presented its ethical principles for the development and use of systems employing artificial intelligence (AI) or machine learning (ML), reinforcing its commitment to a transparent and responsible approach to technology development. As AI algorithms play an increasingly prominent role in cybersecurity, the principles set out in Kaspersky’s whitepaper explain how the company ensures its AI-driven technologies are reliable, and provide guidance to other industry players on mitigating the risks associated with the use of AI/ML algorithms. Kaspersky initiated the discussion as part of the UN Internet Governance Forum, currently taking place in Japan, which brings together world-leading experts responsible for internet governance.
Kaspersky has been using ML algorithms, a subset of AI, in its solutions for nearly 20 years. Combining the power of artificial intelligence with human expertise has enabled Kaspersky solutions to effectively detect and counter a variety of new threats every day, with ML playing an important role in automating threat detection, recognizing anomalies, and enhancing the accuracy of malware identification. To help drive innovation, Kaspersky has formulated ethical principles for the development and use of AI/ML, and is openly sharing them with the industry to build impetus for a multilateral dialogue to ensure AI is used to make the world a better place.
According to Kaspersky, the seamless development and use of AI/ML should take into consideration the following six principles:
- Transparency;
- Safety;
- Human control;
- Privacy;
- Commitment to cybersecurity purposes;
- Openness to dialogue.
The transparency principle reflects Kaspersky’s firm belief that companies should inform their customers about the use of AI/ML technologies in their products and services. At Kaspersky, we comply with this principle by developing AI/ML systems that are interpretable to the maximum extent possible and by sharing information with our stakeholders about how our solutions operate and use AI/ML technologies.
Safety considerations are reflected in a wide range of rigorous measures that Kaspersky implements to ensure the quality of its AI/ML systems. These include security audits specific to AI/ML, steps to minimize dependence on third-party datasets when training AI-driven solutions, and a preference for cloud-based ML technologies with the necessary safeguards over models installed on clients’ machines.
The importance of human control stems from the need to calibrate the work of AI/ML systems in the analysis of complex threats, in particular Advanced Persistent Threats (APTs). To provide effective protection against ever-evolving threats, Kaspersky is committed to maintaining human control as an essential element of all its AI/ML systems.
Another crucial principle is ensuring the right to privacy in the ethical use of AI/ML. With big data playing a vital role in the process of training such systems, companies working with AI/ML must take the privacy of individuals into account comprehensively. Committed to respecting the rights of individuals to privacy, Kaspersky applies a number of technical and organizational measures to protect data and systems, and ensures its users’ rights to privacy are meaningfully exercised.
The fifth ethical principle represents Kaspersky’s commitment to utilizing AI/ML systems solely for defensive purposes. By focusing exclusively on defensive technologies, the company is pursuing its mission to build a safer world and demonstrates its commitment to protect users and their data.
Finally, the last principle refers to Kaspersky’s openness to dialogue with all stakeholders in order to share best practices in the ethical use of AI. In this regard, Kaspersky stands ready for discussions with all interested parties, as the company’s stance is that only through ongoing collaboration among all stakeholders can we overcome obstacles, drive innovation and open new horizons.