Using ChatGPT as an Enabler for Risk and Compliance


Organizations face many cybersecurity challenges, including keeping up with an ever-evolving threat landscape and complying with regulatory requirements. In addition, the cybersecurity skills shortage makes it harder for organizations to adequately staff their risk and compliance functions. According to the (ISC)2 2022 Cybersecurity Workforce Study, the global cybersecurity workforce gap has grown by 26.2%, with 3.4 million more workers needed to secure assets effectively.

Organizations must employ technologies like artificial intelligence (AI), collaboration tools and analytics to cope with the situation efficiently. To that end, ChatGPT can be an enabler for organizational governance, risk and compliance (GRC) functions.

ChatGPT Addresses Typical GRC Use Cases

Significant advancements in natural language processing (NLP) have enabled more accurate and nuanced language analysis, leading to better and more reliable insights. By leveraging NLP, ChatGPT, the AI-based chatbot developed by OpenAI, can generate coherent and relevant responses to a wide range of questions and topics. Some reasons for ChatGPT’s popularity include its ability to understand human language and context, tailor responses based on earlier turns in a conversation and draw on the vast amount of information it was trained on. ChatGPT is powered by GPT-3.5-turbo, one of OpenAI’s most advanced language models. OpenAI has also developed other models with different capabilities and use cases, such as Codex, DALL-E, Ada and Davinci.

With ChatGPT, GRC analysts have a valuable tool to use in navigating the world of risk and compliance. Let’s explore some of the GRC use cases ChatGPT can address. 

Generate Framework, Policy and Procedure Documents

GRC analysts can use ChatGPT to generate draft policy or procedure documents by supplying basic information and guidelines. ChatGPT then applies its NLP capabilities to produce a coherent, well-structured draft that meets the company’s GRC requirements.
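
For analysts who prefer to script this step rather than work through the chat interface, the sketch below shows one way a draft could be requested programmatically. It is a minimal illustration, not a prescribed workflow: the pre-1.0 openai Python package interface, the gpt-3.5-turbo model name, the placeholder company details and the prompt wording are all assumptions.

```python
# A rough sketch of requesting a draft policy through the OpenAI API.
# Assumptions: pre-1.0 "openai" Python package interface, gpt-3.5-turbo,
# an API key in the environment, and placeholder company details.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Basic information and guidelines supplied by the GRC analyst (placeholders).
company = "Example Corp"
policy_topic = "acceptable use of generative AI tools"
guidelines = [
    "Align with ISO/IEC 27001 control themes",
    "Define roles and responsibilities",
    "Include an exception and review process",
]

prompt = (
    f"Draft an information security policy for {company} covering {policy_topic}.\n"
    "Structure it with: Purpose, Scope, Policy Statements, Roles and "
    "Responsibilities, Exceptions, and Review Cycle.\n"
    "Guidelines:\n" + "\n".join(f"- {g}" for g in guidelines)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a GRC analyst drafting internal policy documents."},
        {"role": "user", "content": prompt},
    ],
    temperature=0.2,  # a low temperature keeps the draft consistent between runs
)

draft = response["choices"][0]["message"]["content"]
print(draft)  # a starting point for human review, not a finished policy
```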

Additionally, GRC analysts can use it to evaluate existing policy and procedure documents: by inputting a completed document into the model, they can have it analyze the document, provide feedback on its effectiveness and highlight areas that might require revision.

Using ChatGPT for policy and procedure document creation and evaluation saves GRC analysts time while improving the overall quality of the documents.

But while ChatGPT can create the initial draft, analysts must use human judgment and expertise to evaluate and refine it to ensure that the information is appropriate for the company’s specific needs and complies with all relevant regulations and standards.

Manage Regulatory Compliance

ChatGPT can be a valuable tool to help GRC analysts manage compliance and minimize the risk of fines and penalties. Recently, Meta was hit with $414 million in fines by Ireland’s Data Protection Commission, its lead privacy regulator in the European Union. Traditional methods of managing compliance may still be in use, but they may not be able to keep up with the rapidly changing regulatory landscape. Hence, ChatGPT is valuable for the following use cases:

  • It can help by reviewing and analyzing vast amounts of regulatory data from agencies worldwide. It can filter that data, find the specific requirements applicable to the organization, sector or geography and assess gaps in existing processes to provide recommendations. This systematic approach can give the organization confidence in how it monitors and maintains regulatory compliance. For example, an IT organization subject to environmental, social and governance (ESG) regulations can use the technology for detailed insight into the EU’s Energy Efficiency Directive as it applies to data centers, or into the modern slavery and human trafficking disclosure requirements of the UK’s Modern Slavery Act.
  • It can analyze compliance-related communications, such as articles, regulatory filings, emails and chat messages, for potential compliance risks. The model could be trained to detect keywords that indicate potential compliance issues and flag them for further review. However, ChatGPT is a chatbot, not a regulatory compliance tool. While it can analyze and understand text data related to compliance and regulation, it cannot actively monitor a company’s compliance or regulatory status.
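
As an illustration of the keyword-flagging idea above, the sketch below pairs a simple watch-list check with a model prompt and routes anything flagged to a human reviewer. It is only a sketch under stated assumptions: the pre-1.0 openai Python package interface, the gpt-3.5-turbo model and a hypothetical list of watch terms that a real compliance team would define and tune.

```python
# Illustrative sketch: triage communications that may carry compliance risk.
# Assumptions: pre-1.0 "openai" package interface, gpt-3.5-turbo, and a
# hypothetical watch list that a real compliance team would define and tune.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

WATCH_TERMS = {"kickback", "off the books", "backdate", "delete the records"}


def flag_message(text: str) -> dict:
    """Return a simple triage verdict for one email or chat message."""
    keyword_hit = any(term in text.lower() for term in WATCH_TERMS)

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You support a compliance team. Answer FLAG or CLEAR, "
                        "then give one sentence of reasoning."},
            {"role": "user",
             "content": f"Does this message suggest a potential compliance issue?\n\n{text}"},
        ],
        temperature=0,
    )
    verdict = response["choices"][0]["message"]["content"]
    # Anything flagged here still goes to a human reviewer for a decision.
    return {"keyword_hit": keyword_hit, "model_verdict": verdict}


print(flag_message("Let's backdate the contract so it lands in last quarter."))
```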

Enhance Risk Assessment

ChatGPT’s broad knowledge of various industries and risk data can be leveraged to identify relevant risk factors. Risk managers can share information, such as incident reports, audit reports and regulatory filings, to identify potential risks. Analyzing this information helps the risk manager evaluate a risk’s impact quickly and accurately. For example, ChatGPT could be trained to analyze social media posts related to customer complaints to identify common patterns. It could then assess the likelihood and potential impact of those complaints on the company’s reputation.
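
A rough version of that complaint analysis might look like the sketch below. The sample posts, the model name and the prompt wording are illustrative assumptions; a real assessment would apply the organization’s own risk criteria and feed the output into its existing risk register.

```python
# Sketch: group customer complaints into recurring themes with a rough
# likelihood and reputational-impact note for the risk manager to review.
# The posts, model name and prompt wording are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

posts = [  # placeholder complaints pulled from social media monitoring
    "Third week in a row the mobile app logged me out and lost my settings.",
    "Support ticket open for 12 days with no response. Considering switching providers.",
    "Why was my account charged twice this month? No one answers the phone.",
]

prompt = (
    "Group these customer complaints into common themes. For each theme, note "
    "how likely it is to recur and its potential impact on the company's "
    "reputation (low/medium/high):\n\n" + "\n".join(f"- {p}" for p in posts)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,
)
print(response["choices"][0]["message"]["content"])
```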

The platform can also generate risk assessment reports that identify potential risks and provide recommendations for mitigation.

As an AI language model, ChatGPT can also potentially help identify risks in network architecture. It cannot analyze a diagram itself, as a diagram is an image rather than text. Still, it can analyze textual descriptions and labels drawn from a network architecture diagram to identify potential risks and vulnerabilities.

Improve Fraud Detection

Fraud is a significant risk for most organizations and is difficult to detect, especially when dealing with large volumes of data. ChatGPT can help process text data in search of potential fraud. A few examples include:

  • Emails: By analyzing emails, companies can identify patterns of communication that may indicate fraud, such as employees communicating with known fraudulent actors or discussing fraudulent activities.
  • Social Media: Social media platforms can be a rich data source for detecting potential fraud. By analyzing social media posts and activity, companies can identify individuals or groups that may be engaging in fraudulent activity and patterns of behavior or language that may be associated with fraud.
  • Invoices: Analyzing invoice data helps companies identify suspicious patterns, such as duplicate invoices or invoices from unknown vendors (a simple version of this check is sketched below).
  • Compliance Documentation: Compliance documentation, such as audit reports and certifications, can help identify potential fraud or noncompliance.

However, text data analysis alone is insufficient for comprehensive fraud detection. It is most effective in conjunction with other fraud detection methods, such as data analytics and human expertise.
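
Building on the invoice example above, the sketch below pairs a deterministic duplicate check with a model review, in the spirit of combining text analysis with other detection methods. The invoice records, the model name and the prompt are illustrative assumptions, and neither check is conclusive on its own.

```python
# Sketch: pair a deterministic duplicate check with a model review of invoice
# text. Invoice records, model name and prompt are illustrative assumptions.
import os
from collections import Counter

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

invoices = [  # placeholder invoice records
    {"id": "INV-1001", "vendor": "Acme Supplies", "amount": 4200.00},
    {"id": "INV-1002", "vendor": "Acme Supplies", "amount": 4200.00},
    {"id": "INV-1003", "vendor": "Globex Consulting LLC", "amount": 9875.50},
]

# Deterministic pre-screen: identical vendor/amount pairs suggest duplicates.
pair_counts = Counter((inv["vendor"], inv["amount"]) for inv in invoices)
duplicates = [inv for inv in invoices
              if pair_counts[(inv["vendor"], inv["amount"])] > 1]

# Model review: describe the batch as text and ask for suspicious patterns.
batch_text = "\n".join(
    f'{inv["id"]}: {inv["vendor"]} ${inv["amount"]:.2f}' for inv in invoices
)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "List anything in these invoices that looks suspicious "
                          "(duplicates, unknown vendors, unusual amounts):\n"
                          + batch_text}],
    temperature=0,
)

print("Possible duplicates:", [d["id"] for d in duplicates])
print("Model notes:\n" + response["choices"][0]["message"]["content"])
# Both outputs feed an analyst's review; neither check is conclusive on its own.
```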

Support Third-Party Assessment Program

An analyst can leverage ChatGPT for a range of activities in a third-party assessment program:

  • It can be trained to provide guidance on assessment criteria and assist third-party assessors in understanding specific assessment requirements. It can provide information on industry best practices for security and compliance, including frameworks such as SOC 2, Payment Card Industry Data Security Standard (PCI DSS) and Health Insurance Portability and Accountability Act (HIPAA).
  • The model can analyze assessment data to identify patterns or trends, such as reviewing third-party survey responses or audit findings to surface common themes or areas where the organization may need to improve its compliance program (a minimal version is sketched after this list).
  • It can conduct risk assessments based on specific criteria, such as risk factors related to a particular industry or region. This can help third-party assessors identify potential risks and provide guidance on implementing security controls to mitigate them.
  • It can provide ongoing education and training for third-party assessors.
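
The sketch below shows how survey responses could be distilled into recurring gaps, as mentioned in the second bullet above. The sample responses, the model name and the prompt wording are assumptions made for illustration; actual questionnaire data and follow-up criteria would come from the assessment program itself.

```python
# Sketch: distill third-party questionnaire responses into recurring gaps.
# The responses, model name and prompt wording are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

responses = [  # placeholder answers from vendor security questionnaires
    "Vendor A: MFA for admins only; annual penetration test; no formal vendor risk policy.",
    "Vendor B: MFA for all staff; no penetration test in 24 months; SOC 2 Type II report available.",
    "Vendor C: shared admin accounts; quarterly vulnerability scans; incident response plan untested.",
]

prompt = (
    "Across these third-party questionnaire responses, identify common control "
    "gaps and themes an assessor should follow up on, ordered by how often "
    "they appear:\n\n" + "\n".join(responses)
)

reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)
print(reply["choices"][0]["message"]["content"])
```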

Conduct Training and Awareness

With ChatGPT, companies can give analysts a more engaging training experience personalized to their specific needs. The company could create a ChatGPT-based compliance training chatbot with which employees and analysts can interact using natural language. The chatbot could be trained on the company’s specific compliance policies, procedures and regulatory requirements. This can help improve employee engagement and retention of key compliance concepts while reducing the risk of noncompliance and regulatory violations.
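
One lightweight way to prototype such a chatbot is to supply policy excerpts as context in the system prompt rather than retraining the model, as in the sketch below. The pre-1.0 openai package interface, the gpt-3.5-turbo model and the placeholder policy text are assumptions; a production version would draw on the company’s real documents and add access controls and logging.

```python
# Sketch of a compliance training chatbot: policy excerpts are supplied as
# context in the system prompt rather than by retraining the model.
# Assumptions: pre-1.0 "openai" package, gpt-3.5-turbo, placeholder policy text.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

POLICY_EXCERPTS = """\
Gifts above $50 must be declared to the compliance team within five business days.
Customer data may only be exported through approved, logged channels.
"""  # placeholder text; a real bot would draw on the company's own documents

history = [{
    "role": "system",
    "content": "You are a compliance training assistant. Answer only from these "
               "policy excerpts and say when a question falls outside them:\n"
               + POLICY_EXCERPTS,
}]

while True:
    question = input("Ask a compliance question (or 'quit'): ")
    if question.strip().lower() == "quit":
        break
    history.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    answer = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})  # keep context across turns
    print(answer)
```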

There are many other use cases where GRC professionals can benefit from using ChatGPT. Though the platform provides a wealth of information on security and compliance topics, it is not a substitute for working with qualified professionals, such as auditors, security consultants and legal advisors. Also, using ChatGPT and training other OpenAI models on company data come with data privacy risks. Security experts must properly evaluate and mitigate these risks to take full advantage of ChatGPT.