Why is ChatGPT said to increase cybersecurity risk?


The artificial intelligence (AI) revolution has burst onto the technology scene, radically changing the security threat landscape. The rapid growth of AI-based platforms, such as OpenAI’s ChatGPT, has expanded the reach of this technology, but it has also opened the door to sophisticated new risks.

Emerging Threats in AI

Attackers have begun using AI to improve their phishing and fraud techniques, harnessing the power of AI language models to carry out more convincing and effective attacks. In one alarming case, Meta’s 65-billion-parameter language model, LLaMA, was leaked, a development that will almost certainly lead to more sophisticated phishing attacks.

Compromise of Sensitive Information

Users often feed sensitive data into AI/ML-based services, leaving security teams scrambling to control their use. One troubling example: Samsung engineers pasted proprietary source code into ChatGPT to help debug it, leaking sensitive data in the process.
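One way security teams try to contain this kind of leakage is an outbound filter between users and external AI services. Below is a minimal sketch of that idea in Python; the patterns and the `redact` helper are illustrative assumptions, not any vendor’s actual API.

```python
import re

# Illustrative patterns for common secret formats; a real DLP policy
# would be far more extensive and organization-specific.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key IDs
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM key headers
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern before it leaves the network."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

prompt = "Please debug this: api_key = 'sk-live-abc123'"
print(redact(prompt))  # -> "Please debug this: [REDACTED]"
```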

The Misuse of AI

The misuse of AI is increasingly on the minds of consumers, businesses, and governments. As the AI revolution rapidly advances, four main classes of problems have been identified that come with it.

Asymmetry in Attacker-Defender Dynamics

Attackers are likely to adopt and weaponize AI faster than defenders can, giving them a clear advantage: with AI, they will be able to launch sophisticated attacks at scale and at low cost.

Social engineering attacks will be the first to benefit from AI-generated text, voice, and synthetic images, enabling the automation of phishing attempts that once required considerable manual effort.

Loss of Social Trust

Social trust can be severely affected by the rapid spread of AI-enabled misinformation. Current AI/ML systems based on large language models (LLMs) have inherent limits to their knowledge, and when they do not know how to respond, they invent answers. This phenomenon, often referred to as “hallucination,” produces inaccurate answers that erode users’ trust in AI and can lead to errors with serious consequences.

New Attacks on AI/ML Systems

Over the next decade, we will see a new generation of attacks on AI/ML systems. Attackers will poison the data that feeds the classifiers these systems rely on, skewing models and controlling their outputs. They will also create malicious models that are indistinguishable from legitimate ones and that could cause real damage depending on how they are used.
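To make the poisoning scenario concrete, here is a minimal sketch in Python of a label-flipping attack against a simple classifier. The dataset, model, and 30% flip rate are illustrative assumptions, not a description of any real incident.

```python
# Minimal label-flipping poisoning demo using scikit-learn on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Accuracy on held-out data drops once the training labels are poisoned.
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```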

Large Scale Effects

The costs of building and operating large-scale models can give rise to monopolies and barriers to entry, which could lead to unpredictable externalities. Citizens and consumers will be negatively affected, and misinformation will become rampant.

The Future of AI and Security

We need more innovation and action to ensure that AI is used responsibly and ethically. The same technology also creates opportunities for innovative security approaches: we will see improvements in threat hunting and behavioral analytics, but these innovations will take time and require investment.
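As one concrete illustration of AI-assisted behavioral analytics, the sketch below uses an Isolation Forest to flag anomalous login behavior. The features, values, and contamination rate are illustrative assumptions rather than a production detector.

```python
# Minimal behavioral-analytics demo: flag anomalous logins with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per login event: [hour of day, MB downloaded, failed attempts].
normal_logins = np.column_stack([
    np.random.default_rng(1).normal(10, 2, 500),   # mostly business hours
    np.random.default_rng(2).normal(50, 10, 500),  # typical download volume
    np.random.default_rng(3).poisson(0.2, 500),    # rare failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

suspicious = np.array([[3, 900, 8]])  # 3 a.m., huge download, many failures
print(model.predict(suspicious))      # -1 means flagged as anomalous
```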

We are remarkably ill-prepared for the future of AI. However, it is important not to panic, but to take action now so that security professionals can strategize and react to large-scale problems.

Learn more at venturebeat.com
