
ChatGPT is increasing legal and compliance issues in business environments


Introduction
In recent times, global interest in leveraging AI chatbots, including ChatGPT and other generative AI tools, to streamline workplace tasks has surged. A recent survey by a reputable organization found that over 40% of professionals have embraced ChatGPT and similar AI tools to improve task efficiency and the quality of their work.
However, this widespread adoption of AI tools also poses potential threats to companies, especially concerning cybersecurity, data protection, intellectual property, and regulatory compliance.

Cybersecurity and Data Protection:
The increased use of generative AI tools in the workplace heightens the risk of data breaches and other cybersecurity incidents, particularly in jurisdictions such as India, which has enacted the Digital Personal Data Protection Act, 2023. AI tools such as ChatGPT can inadvertently expose confidential information, trade secrets, and personal data, leading to legal and reputational damage for businesses. Moreover, data stored on AI providers' servers is vulnerable to cyber-attacks and unauthorized access, which can result in the leakage of sensitive information.

Intellectual Property Concerns:
The use of copyrighted data by AI systems may constitute copyright infringement, and users may face liability for secondary infringement. Scraping data to train AI models and produce deliverables could therefore carry legal consequences, as it may infringe intellectual property rights, including copyright. Companies also face the challenge of protecting their own creative content from unauthorized use by AI tools, and must navigate the legal complexities surrounding data ownership and usage.

Regulatory Challenges:
The absence of specific regulations governing AI tools creates regulatory uncertainty. Copying and pasting output provided by AI tools may lead to violations of copyright and data protection laws. Responses to legal notices generated by AI chatbots may complicate copyright issues because the same data is reused repeatedly, potentially leading to misunderstandings and legal consequences for companies. In addition, the lack of explicit consent for the use of personal data in AI training threatens data protection compliance for companies that rely on AI tools.

Data Reliability Issues:
Generative AI applications, including ChatGPT, draw on information scraped from the internet to answer user queries, raising concerns about the reliability and accuracy of their output. Unlike humans, these applications may not verify or assess the credibility of the data they present, which can result in the dissemination of misinformation. This lack of human oversight makes it difficult to ensure the authenticity and reliability of information generated by AI tools.

Conclusion:
While AI chatbots like ChatGPT offer valuable support in streamlining tasks and enhancing creativity, the legal risks associated with data security, intellectual property, and regulatory compliance cannot be ignored. The absence of specific regulations governing AI tools exacerbates these risks, underscoring the need for a regulatory framework to guide companies in their use of AI. Until such regulations are in place, companies and users of AI tools must exercise caution to mitigate legal exposure and ensure responsible, compliant use of AI-generated data in the workplace.

The author is the Head – Compliance Advisory Practices, Core Integra.
