AI is increasingly driving cyberattacks

Cybercriminals are winning

As CFO points out, the main reason for the rise in cyberattacks is simple: the criminals are winning. Surprisingly, 47% of respondents said their company has a policy of paying any ransom demanded by cyber hackers, a 13% increase over last year. Last year, 42% of participants said they paid to have stolen data returned, up from 32% the year before.

Ironically, this willingness to accommodate cyber fraudsters guarantees nothing in return: 45% of those who paid a ransom saw their data exposed anyway. It is easy to understand why 62% of the executives surveyed say ransomware is their main concern.

One of the main advantages of generative AI is its extremely fast learning ability, which allows it to evolve and adapt during a cyberattack to evade detection by traditional security systems, notes an analysis from Dataconomy.

Fortunately, these same capabilities can be used to improve cybersecurity by analyzing massive amounts of data for trends and anomalies that could reveal an attack. AI-based protection systems can be designed to respond to cyber threats automatically and systematically, for example by shutting down infected systems or quarantining malicious files.
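To make that idea concrete, here is a minimal, illustrative sketch of an anomaly-driven response loop of the kind described above. It assumes scikit-learn's IsolationForest; the feature layout and the quarantine_file() hook are hypothetical, not drawn from the article:

```python
# Illustrative sketch only: anomaly detection over activity features with an
# automated quarantine step. Assumes scikit-learn; quarantine_file() and the
# feature layout are hypothetical, not from the article.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature vectors from endpoint telemetry, e.g.
# [bytes_written_mb, processes_spawned, outbound_connections].
rng = np.random.default_rng(0)
baseline_activity = rng.normal(loc=10, scale=2, size=(500, 3))

# Train on known-good activity so statistical outliers flag potential threats.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_activity)

def quarantine_file(path):
    # Placeholder response hook; a real system would isolate the file or host.
    print(f"Quarantining suspicious file: {path}")

def respond_to_event(features, file_path):
    """Quarantine automatically if the observed activity looks anomalous."""
    if detector.predict([features])[0] == -1:  # -1 marks an outlier
        quarantine_file(file_path)

respond_to_event([95.0, 40.0, 300.0], "/tmp/suspicious.bin")
```

The design choice mirrors the article's point: the model learns what normal looks like from data, and the response (quarantine, shutdown) is wired in so no human has to be in the loop for the first reaction.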

Widespread risks of AI

However, the risks of generative AI are not limited to cyberattacks, argues a McKinsey study. Other weaknesses include a lack of transparency and explainability, the introduction of bias and discrimination, and insufficient human oversight and privacy protection.

In the case of transparency, a lack of visibility into how a model was developed (for example, how the data sets fed into it were combined), or an inability to explain how a model arrived at a particular result, can cause problems. This can have legal consequences: if a consumer wants to know how their data has been used, the organization using the data needs to know which models the data has been fed into. A minimal sketch of the record-keeping this implies follows below.
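Here is that sketch: a simple in-memory data-lineage registry that records which datasets were fed into which models, so a question like "which models used my data?" can be answered. All dataset and model names are hypothetical:

```python
# Illustrative sketch: a minimal data-lineage registry mapping datasets to the
# models trained on them, so a data-subject request can be answered.
# All names are hypothetical.
from collections import defaultdict

class LineageRegistry:
    def __init__(self):
        self._models_by_dataset = defaultdict(set)

    def record_training(self, model_name, dataset_ids):
        """Log that a model was trained on the given datasets."""
        for ds in dataset_ids:
            self._models_by_dataset[ds].add(model_name)

    def models_using(self, dataset_id):
        """Answer: which models consumed this dataset?"""
        return sorted(self._models_by_dataset[dataset_id])

registry = LineageRegistry()
registry.record_training("churn_model_v2", ["crm_exports_2023", "support_tickets"])
registry.record_training("fraud_model_v1", ["crm_exports_2023"])
print(registry.models_using("crm_exports_2023"))
# -> ['churn_model_v2', 'fraud_model_v1']
```

Even a registry this simple makes the legal question above answerable; in practice, such lineage metadata would typically live in a model-governance or ML-metadata platform rather than in memory.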

AI applications that are not properly implemented and tested can exhibit performance issues that violate contractual guarantees and, in extreme cases, jeopardize personal safety. McKinsey gives the example of a model developed to ensure the timely updating of machinery in the manufacturing or mining industries; a failure of this model could constitute contractual negligence and give rise to a claim for damages.

The threats to privacy are numerous, given that data is the foundation of every AI model. Privacy laws around the world dictate how companies may use data, while consumer expectations set the standard. Failure to comply with these laws and standards can harm consumers. And even when a use of data is legal, improper use can damage a company's reputation and drive customers away.
