Understanding the risks of generative AI for better business outcomes


Any new technology can be an amazing asset to improve or transform business environments if used appropriately. It can also be a material risk to your company if misused. ChatGPT and other generative AI models are no different in this regard. Generative AI models are poised to transform many business areas: they can improve how we engage with customers, streamline internal processes and drive cost savings. But they can also pose significant privacy and security risks if not used properly.

ChatGPT is the best-known of the current generation of generative AIs, but there are several others, like VALL-E, DALL-E 2, Stable Diffusion and Codex. These models are built by feeding them “training data,” which can be drawn from a variety of sources, such as queries generated by businesses and their customers. The data lake that results is the “magic sauce” of generative AI.

In an enterprise environment, generative AI has the potential to revolutionize work processes while creating a closer-than-ever connection with target users. Still, businesses must know what they’re getting into before they begin; as with the adoption of any new technology, generative AI increases an organization’s risk exposure. Proper implementation means understanding — and controlling for — the risks associated with using a tool that feeds on, ferries and stores information that mostly originates from outside company walls.

Chatbots for customer service are an effective use of generative AI

One of the biggest areas for potential material improvement is customer service. Generative AI-based chatbots can be programmed to answer frequently asked questions, provide product information and help customers troubleshoot issues. This can improve customer service in several ways — namely, by providing faster and cheaper round-the-clock “staffing” at scale.
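
As a rough illustration of how such a chatbot can be assembled, the sketch below routes a customer question to a hosted language model behind a restrictive system prompt. It is a minimal sketch only, assuming the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY environment variable; the model name, FAQ text and prompt wording are placeholders, not recommendations.

```python
# Minimal customer-service chatbot sketch. Assumes the OpenAI Python SDK (v1.x);
# the model name, FAQ content and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

FAQ_CONTEXT = (
    "Q: What are your support hours? A: The assistant is available 24/7.\n"
    "Q: How do I reset my password? A: Use the 'Forgot password' link on the sign-in page.\n"
)

def answer_customer(question: str) -> str:
    """Answer a customer question, constrained to approved FAQ content."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        temperature=0.2,        # keep support answers consistent
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a customer-service assistant. Answer only from the FAQ "
                    "below; if the answer is not there, offer to escalate to a human "
                    "agent.\n" + FAQ_CONTEXT
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_customer("How do I reset my password?"))
```

Constraining the assistant to approved FAQ content is one simple way to keep answers on-message, with escalation to a human agent covering everything the FAQ does not.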

Unlike human customer service representatives, AI chatbots can provide assistance and support 24/7 without taking breaks or vacations. They can also process customer inquiries and requests much faster than human representatives can, reducing wait times and improving the overall customer experience. As they require less staffing and can handle a larger volume of inquiries at a lower cost, the cost-effectiveness of using chatbots for this business purpose is clear.

Chatbots use appropriately defined data and machine learning algorithms to personalize interactions with customers and tailor recommendations and solutions to individual preferences and needs. These personalized responses also scale: AI chatbots can handle a large volume of customer inquiries simultaneously, making it easier for businesses to absorb spikes in demand during peak periods.
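
A minimal sketch of that kind of personalization might look like the following. The CustomerProfile record and helper function are hypothetical, not part of any specific product; the point is that only pre-approved, non-sensitive fields are folded into the prompt.

```python
# Hypothetical sketch: tailoring a chatbot prompt from an approved customer profile.
# The profile fields and helper names are illustrative only.
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    name: str               # kept on record but deliberately excluded from the prompt
    plan: str               # e.g. "basic" or "premium"
    preferred_channel: str  # e.g. "email" or "chat"

def build_personalized_prompt(profile: CustomerProfile, question: str) -> str:
    """Combine only pre-approved, non-sensitive profile fields with the question."""
    return (
        f"Customer context: plan={profile.plan}, "
        f"preferred contact channel={profile.preferred_channel}.\n"
        f"Question: {question}\n"
        "Tailor the recommendation to this customer's plan and channel."
    )

prompt = build_personalized_prompt(
    CustomerProfile(name="Alex", plan="premium", preferred_channel="chat"),
    "Which upgrade options do I have?",
)
print(prompt)
```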

To use AI chatbots effectively, businesses should ensure that they have a clear goal in mind, that they use the AI model appropriately, and that they have the necessary resources and expertise to implement the AI chatbot effectively — or consider partnering with a third-party provider that specializes in AI chatbots.

It is also important to design these tools with a customer-centric approach, such as ensuring that they are easy to use, provide clear and accurate information, and are responsive to customer feedback and inquiries. Organizations must also continually monitor the performance of AI chatbots using analytics and customer feedback to identify areas for improvement. By doing so, businesses can improve customer service, increase customer satisfaction and drive long-term growth and success.
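
One lightweight way to close that feedback loop, sketched below with hypothetical helper names and a local JSONL log, is to record each exchange along with a thumbs-up/down rating and then count negative ratings per topic so low-performing areas surface for review.

```python
# Illustrative chatbot-monitoring sketch: log each exchange plus a customer rating,
# then aggregate negative ratings per topic. File name and helpers are hypothetical.
import json
import time
from collections import Counter

LOG_PATH = "chatbot_feedback.jsonl"

def log_interaction(question: str, answer: str, helpful: bool, topic: str) -> None:
    """Append one question/answer pair and its customer rating to the log."""
    record = {
        "ts": time.time(),
        "question": question,
        "answer": answer,
        "helpful": helpful,
        "topic": topic,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def unhelpful_topics(path: str = LOG_PATH) -> Counter:
    """Count negative ratings per topic to flag areas that need better answers."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if not record["helpful"]:
                counts[record["topic"]] += 1
    return counts
```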

Businesses must understand the risks of generative AI

To enable transformation without adding undue risk, businesses must be aware of the risks presented by the use of generative AI systems. These risks will vary based on the business and the proposed use. Regardless of intent, a number of universal risks are present, chief among them information leaks or theft, lack of control over output and lack of compliance with existing regulations.

Companies using generative AI risk having sensitive or confidential data accessed or stolen by unauthorized parties. This could occur through hacking, phishing or other means. Similarly, misuse of data is possible: Generative AIs are able to collect and store large amounts of data about users, including personally identifiable information; if this data falls into the wrong hands, it could be used for malicious purposes such as identity theft or fraud.

All AI models generate text based on training data and the input they receive. Companies may not have complete control over the output, which could potentially expose sensitive or inappropriate content during conversations. Information inadvertently included in a conversation with a generative AI presents a risk of disclosure to unauthorized parties.

Generative AIs may also generate inappropriate or offensive content, which could harm a corporation’s reputation or cause legal issues if shared publicly. This could occur if the AI model is trained on inappropriate data or if it is programmed to generate content that violates laws or regulations. To this end, companies should ensure they are compliant with regulations and standards related to data security and privacy, such as GDPR or HIPAA.
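
One possible guardrail is to screen every generated reply before it reaches a customer. The sketch below assumes the OpenAI Python SDK's moderation endpoint as the classifier; any comparable content filter, combined with company-specific policy checks, could stand in for it.

```python
# Hedged sketch: screening a model's output before sending it to a customer,
# assuming the moderation endpoint exposed by the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()

def safe_to_send(candidate_reply: str) -> bool:
    """Return False if the generated reply trips the moderation classifier."""
    result = client.moderations.create(input=candidate_reply)
    return not result.results[0].flagged

reply = "Here is the troubleshooting guide you asked for..."
if safe_to_send(reply):
    print(reply)
else:
    print("Sorry, I can't help with that. Connecting you to a human agent.")
```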

In extreme cases, a generative AI can produce malicious or inaccurate output if attackers manipulate the underlying training data with the intent of producing harmful or undesirable outcomes, an act known as “data poisoning.” Attacks against the machine learning models that support AI-driven cybersecurity systems can lead to data breaches, disclosure of information and broader brand risk.

Controls can help mitigate risks

To mitigate these risks, companies can take several steps, including limiting the type of data fed into the generative AI, implementing access controls to both the AI and the training data (i.e., limiting who has access), and implementing a continuous monitoring system for content output. Cybersecurity teams will want to consider the use of strong security protocols, including encryption to protect data, and additional training for employees on best practices for data privacy and security.
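
As a concrete example of limiting what is fed into the model, the sketch below redacts obvious PII patterns (email addresses, card-like numbers, US Social Security numbers) before a prompt leaves company systems. The patterns and helper name are illustrative and far from a complete data-loss-prevention control.

```python
# Sketch of one input control: redact obvious PII before a prompt is sent to an
# external generative AI service. Patterns are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD_NUMBER]"),  # 13-16 digit card-like numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # US Social Security numbers
]

def redact(text: str) -> str:
    """Replace common PII patterns with placeholders before calling an external model."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "My card 4111 1111 1111 1111 was double-charged, email me at jane@example.com"
print(redact(prompt))
# -> My card [CARD_NUMBER] was double-charged, email me at [EMAIL]
```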

Emerging technology makes it possible to meet business objectives while improving customer experience. Generative AIs are poised to transform many client-facing lines of business in companies around the world and should be embraced for their cost-effective benefits. However, business owners should be aware of the risks AI introduces to an organization’s operations and reputation — and the potential investment associated with proper risk management. If risks are managed appropriately, there are great opportunities for successful implementations of these AI models in day-to-day operations.

Eric Schmitt is Global Chief Information Security Officer at Sedgwick. 




Source link: https://venturebeat.com/ai/understand-risks-generative-ai-better-business-outcomes/
