AI and Data Security

The world is becoming smaller; are we going to feel more secure?

Advances in technology allow people to interact and connect with anyone, anywhere, at any time. Globalization is accelerating to the point that countries, people, and cultures are more connected than ever.

Among these technological advances, artificial intelligence is transforming things dramatically. The World Intellectual Property Organization reports that the number of registered AI patents tripled from 2013 to 2017, and according to Bloomberg Law's Global Patent Database, the number of AI-related patents rose from 3,267 in 2017 to 18,753 in 2021, a nearly sixfold increase!

This significant growth in patents may lead to an explosion of AI-based products, apps, and technologies that will ultimately transform our daily lives and human-machine interaction. More surprisingly, this is not even a new trend: AI has always been around; it is only the pace of its evolution that has become so fast. The question is:

Do we really want AI to achieve that level of power?

To address this question, we may refer to a Forbes estimate showing that AI, as the largest business opportunity of today's fast-changing economy, contributed a staggering $2 trillion to global GDP in 2018 and could contribute up to $15.7 trillion by 2030.

A very recent AI tool, a machine-learned natural language processing (NLP) model called ChatGPT, passed 1 million users in just 5 days, exceeded 100 million users in January 2023, and reached 1.6 billion visits in March 2023, raising widespread concerns about security, privacy, intellectual property, and legal threats.

AI executives believe it will enhance people's lives, while others are concerned about the hazards of unexpected behavior. The idea of "alignment" is therefore critical to ensuring that AI remains consistent with human values, ethics, and security. When OpenAI examined GPT-4 for harmful applications, it discovered that the model could suggest how to acquire illegal weapons or produce hazardous substances. Despite adjustments before the public launch, it remains hard to identify all potential misuses, since such machine-learned AI systems continue to learn from data.

Jack Clark, co-founder of Anthropic, warns that each new AI system introduces unexpected capabilities and safety risks that are increasingly impossible to forecast. The fundamental procedures used to construct these systems are well established, but other organizations, nations, research institutes, and rogue actors may not take the same precautions. So:

Is there any way to stop it from happening?

Every individual and legal entity is trying to get value out of AI, whether to improve life processes or simply to stay competitive in business. As evidence, Statista reports that investment in artificial intelligence reached $93.5 billion in 2021, roughly eight times more than in 2015. So, as mentioned earlier, AI has already started its journey; it is only the dramatic changes that catch our attention once in a while.

It does not seem to make sense to hold it back, especially in a business context where AI systems may deliver beyond the value predefined at the outset. Massive data-processing capabilities are advancing to the point where one may uncover new answers to prospective subjects that previously had no associated questions!

Navigating Cyber "Insecurity" in a Shrinking World!

The expansion of AI tempts more businesses to embrace it, jump on the AI superhighway, and begin employing products broadly without guaranteeing the accuracy of the outcomes. While individuals and organizations may use these technologies at their own risk, they are nevertheless obliged to keep up with such developments and must make an effort to adjust, or rather balance, their evolution in terms of security, privacy, intellectual property, and legal concerns.

Notably, security is on the front line of the domains most impacted by AI systems, and with it, data security is becoming a heavy responsibility. But who exactly will be in charge of securing that data? Such systems are giant transformers that generate more data than ever, leading to greater environmental complexity, which in turn increases risk.

It is evident that distinct risks increase as something becomes more complex. This may create opacity for security defenders. But, paraphrasing Charlie Bell, VP of security at Microsoft: "AI will empower defenders to see, categorize, and interpret far more data much more quickly than ever, which may not have been feasible so far even with large teams of security experts."

However, according to a workforce study conducted by (ISC)², there are currently an estimated 5 million security professionals globally, yet studies report that 3.5 million more cybersecurity workers are needed to secure assets effectively. This is at least positive news for cybersecurity specialists, since AI developments may support them both through job opportunities and by leveraging AI tools for better security management. But will cybersecurity be enough to protect AI outcomes?
Pamela Gupta, CEO of Trusted AI, an OutSecure Inc. company, believes that cybersecurity and privacy are not enough:
"What we need is trust and a defined framework built on a risk-based approach."
When artificial intelligence takes bigger decisions for industries, and those decisions make the rules, building trust and transparency in AI becomes crucial for businesses to get the right value out of it. Trust relies on security and privacy, but it is important to note that risk analysis is also at the core of cybersecurity.
To do a risk assessment, we first need to understand the business operating environment, and then build a threat model to determine what the risks are, where they come from, and what is required to handle them.
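As a minimal sketch of the threat-modeling step above (the threat names and likelihood/impact scores below are purely hypothetical, chosen for illustration), each identified risk can be given a likelihood and an impact, then ranked so handling effort goes to the biggest risks first:

```python
# Hypothetical, simplified risk register for an AI-enabled business process.
# Each entry: (threat, likelihood 1-5, impact 1-5) -- illustrative values only.
threats = [
    ("Training-data poisoning", 2, 5),
    ("Customer-data leakage", 3, 5),
    ("Biased model decisions", 4, 3),
    ("Prompt-injection misuse", 3, 2),
]

def risk_score(likelihood, impact):
    """Classic qualitative risk formula: likelihood x impact."""
    return likelihood * impact

# Rank threats from highest to lowest risk to prioritize mitigation.
ranked = sorted(threats, key=lambda t: risk_score(t[1], t[2]), reverse=True)

for name, likelihood, impact in ranked:
    print(f"{risk_score(likelihood, impact):>2}  {name}")
```

This is only a first triage; a real threat model would also document where each risk originates and which controls address it.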

Pamela Gupta, the cybersecurity strategist, describes the fundamental pillars of trustworthy AI as security, privacy, ethics, and transparency. To govern the foundations, she adds further pillars: explainability, regulations, audit and auditability, as well as accountability.
We often talk about data security when it comes to cybersecurity in AI, but cybersecurity goes beyond just data poisoning.
To follow this pillar-based approach, it is indispensable to understand how these AI systems function and how they are built. Put simply, these systems learn from the training sets and data provided to them, and that is where bias comes from: you have to feed them correct and accurate data. If you feed them biased data, that is one component that can go wrong. Besides, we all know that erroneous data is publicly available and can be scraped and used for training, not necessarily in a precise and well-evaluated way.
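A deliberately over-simplified sketch can make the point about biased training data concrete (the "model", labels, and sample are hypothetical): a naive learner trained on a skewed sample reproduces that skew in every prediction it makes.

```python
from collections import Counter

def train_majority_model(labels):
    """A toy 'model' that simply learns the most common training label."""
    majority_label, _ = Counter(labels).most_common(1)[0]
    return lambda _sample: majority_label

# A biased training set: 90% of the scraped examples are labeled "low_risk".
biased_training_labels = ["low_risk"] * 9 + ["high_risk"] * 1

model = train_majority_model(biased_training_labels)

# The model now labels *every* new case "low_risk" -- including genuinely
# high-risk ones -- purely because of the skew in the data it was fed.
print(model("invoice 120 days overdue, debtor in insolvency"))
```

Real models are far more sophisticated, but the mechanism is the same: whatever imbalance exists in the training data tends to resurface in the outputs.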

It should also be noted that even training sets generated by algorithms can carry bias into their operations. As previously indicated in the case of ChatGPT's evaluation before its public launch, it is always difficult to discover all potential misuses of these machine-learned AI systems, which are only a subset of AI.

To conclude: the age of AI, security, and data is broad enough to delve into and examine indefinitely, yet one reality remains: we are all profoundly involved in this fast-changing period, whether we as individuals, businesses, and communities are thrilled about it or not. The credit management industry is no exception in facing these immense developments.

At My DSO Manager, while we always aim to stay tuned to new technology developments, we have always placed implementation precautions at the top of our new evolutions, whether related to AI automation, data collection for trade receivables analysis, or simply optimizing algorithms. We believe that our long-standing risk management expertise gives us a leveraging opportunity in this fast-changing period to better apply such frameworks of business risk assessment as we begin to use AI-based solutions.