
Debunking AI Hysteria

Should we be spooked by the current AI hysteria? While it’s true that AI risks are evolving fast, adopting the right approach can leave us better placed for what the future has in store.

The rise of AI has been nothing short of revolutionary. But as headlines shout about job-stealing algorithms and sentient AI plotting to overthrow us all, it’s easy to get caught up in a whirlwind of panic and paranoia. Fortunately, society doesn’t have to pack its bags for the AI apocalypse just yet.

The truth is that AI has been around for decades. Automatic call distributors have been routing calls in call centers to the right agent at the right time since as far back as the 1950s. In the 2010s, chatbots began to transform customer service. Now in the 2020s, AI has impacted most industries to some degree, with new use cases and risks bound to emerge in the future. The question is, how can we prepare for them?

The risks associated with AI
If AI has been around for such a long time, why the current hysteria? It’s all to do with accessibility. It used to be that AI tools were expensive and hard to use. These days, however, anyone can use tools like ChatGPT on the internet for free. This opens up a wealth of use cases, with businesses everywhere looking to AI for a competitive advantage. But to capitalize fully, they also need to address the risks that AI brings.

Nowhere is this more apparent than with generative AI. From GPT-4 to Bard, these tools are often built on large language models that analyze information from different sources, principally the internet. But who owns the content they produce? If a tool produces an image that includes unauthorized use of copyrighted material, who is liable? What if the AI inadvertently infringes on third-party intellectual property (IP)? Or if a professional is given misinformation that leads them to take actions that negatively impact their business or their clients?

In Hollywood, the writers’ strike showed just how much of a double-edged sword AI can be. The ability of AI tools to write scripts in seconds and generate limitless episode ideas is a game-changer for production companies, but it has writers fearing for their livelihoods. Then there’s the matter of built-in bias. If a model trained on UK credit-risk data is deployed in the US, the AI tool will lack accuracy. In the medical world, this can lead to patients not being flagged for health screenings, inaccurate diagnoses and inappropriate treatment.

That’s not all. There are privacy concerns, cyber security issues and a growing regulatory environment to consider. Different countries can take different approaches to regulation, and businesses must comply with all of them at once if they’re to operate across those regions. Suddenly the AI hysteria makes a lot of sense. There are so many use cases, and so many potential risks, that the future feels uncertain. But that doesn’t mean we can’t prepare ourselves for that uncertainty.

A futureproof approach
While AI risks are complex and varied, CFC has been writing them for years. Recently, they covered an AI-powered crop health assessment tool that provides actionable data to farmers in real time, providing a $2 million limit across its technology, errors and omissions, bodily injury, IP and cyber exposures. They provided the same $2 million limit and coverage to a business whose AI tool generates virtual background assets for video games.

Still, in the AI space it’s vital to tread cautiously. Step into a fast-moving market with unknown liability issues and you risk getting swept away by the current. The secret to success is recognizing that AI risk isn’t the same as standard technology risk, and understanding the nuances between them. That’s why more people are working on AI at CFC than at any other provider in the market. They don’t have all the answers, but they do have some of the foremost experts in the industry ready to react wherever exposures arise.

It’s difficult to predict how this space will evolve. The only way of meeting AI risks head on is by broadening our outlook and considering emerging exposures with a cautious eye. This way, we can dampen the hysteria, prepare for the opportunities that lie ahead and help build a society that makes the most of AI. What better time to start than now?

Source: www.cfcunderwriting.com

