Increasingly, scientific authors have been including disclaimers in publications about their use of Large Language Models (LLMs) such as ChatGPT, Grok, DeepSeek and Copilot.1 These disclaimers generally do not concern the substance of the paper but rather note assistance in “proofreading” or “increasing readability.” On its face, having an LLM improve a paper’s readability for the public and for other scientists and engineers, particularly in jargon-heavy fields, appears to be a valuable use of this technology. However, authors should be wary of the potential legal implications of such use.
Any disclosure to the public of an invention before the patent application's filing date can jeopardize the validity of the resulting patent.
The terms of use for some of the most popular LLMs often make no reference to confidentiality and may explicitly state that certain information resulting from user input will be used to train the LLM or for the company’s “own business purposes.”2 Thus, the ability of companies that provide LLM services to use or disclose user inputs is often unclear. Even if disclosure to an LLM does not itself constitute public disclosure, the company behind the LLM may nevertheless have access to the input and the ability to disclose it. Further, even if the company behind the LLM has no intention of accessing or disclosing the inputted confidential information, a data leak could still inadvertently reveal this information to malicious actors. This vulnerability is not unique to AI tools: any software, website or service that receives confidential information from an innovator could store that information on its networked systems.
Some LLM providers attempt to address these risks through paid “business” and “enterprise” subscriptions, which typically promise more robust security features. For example, some premium LLM offerings exclude user inputs, outputs and metadata from model training by default, while others allow users to opt out of model training for free.3 Although these measures may reduce the risk of data leaks, they do not change the fact that, if an inventor uses the service to proofread a manuscript, pre-publication information about the invention will still be transmitted to, and stored on, the LLM provider’s servers. It is questionable, however, whether this would be considered disclosure to the public.
Other providers offer so-called “air-gapped” LLMs, which are deployed entirely within a company’s or government’s own computing environment and disconnected from outside servers. These solutions appear to provide a stronger safeguard, but they are generally available only under custom contracts with government or defense agencies and remain out of reach for most innovators.
Given the opaque functioning of LLMs, submitting a manuscript that contains confidential information or details of potential inventions may amount to a public disclosure of that information or those inventions.
While the topic of public disclosure may seem self-evident given that a manuscript is ultimately intended for publication, the editorial and publication process associated with many journals can take years. Accordingly, if a manuscript is disclosed to an LLM, and such disclosure is considered “public disclosure,” it could be fatal to the novelty of an invention.
Public disclosure rules vary by country
Novelty, as a fundamental requirement for patentability, demands that an invention not be disclosed to the public anywhere in the world before the relevant filing date. However, the extent to which disclosures made by inventors themselves affect an invention’s novelty varies widely across jurisdictions. For example, Canada and the United States each provide a relatively permissive twelve-month grace period: inventor-originated disclosures, such as an early conference presentation or circulation of a manuscript, will not destroy novelty provided a patent application is filed within one year. By contrast, the European Patent Office takes a strict approach: almost any pre-filing disclosure destroys novelty, subject only to narrow exceptions for unauthorized disclosures and disclosures at recognized exhibitions. China offers only a six-month grace period, and only in limited circumstances, such as display at government-recognized exhibitions, presentation at certain academic conferences, or unauthorized disclosure.
Until courts around the world address whether using an LLM constitutes public disclosure of an invention, or new legislation provides clarity, this issue will remain uncertain. And, as with other emerging questions at the intersection of AI and intellectual property law, the answer may well vary across jurisdictions.
Best practices
Although there is presently no clear judicial or legislative guidance relating to the use of LLMs and public disclosure of inventions, inventors and applicants should take care to maintain strict confidentiality practices regarding potential inventions. More particularly, users of LLMs should only input information and data that are:
- not subject to a confidentiality agreement or duty restricting their disclosure; and
- unrelated to a potential invention.
For example, applicants and inventors should avoid inputting invention descriptions, test data, research notes and other data related to a potential invention into an LLM. Any information that will be input into an LLM should be scrutinized for content related to the potential invention, especially in view of any information about the invention that was previously input into the LLM. This will help reduce the risk that the LLM accumulates a description of the potential invention over multiple uses, whether by the same user or by multiple users.
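As an illustrative sketch only, the “scrutinize before submitting” step described above could be partly automated with a simple screen that flags text matching a project-maintained list of sensitive terms before that text is sent to an external LLM. The term list, function names and example terms below are entirely hypothetical; a real policy would need a list maintained by the people who actually know the invention, and no keyword screen can replace human review.

```python
# Hypothetical pre-submission screen for a workplace LLM-use policy.
# SENSITIVE_TERMS would be maintained per project by the research team;
# the entries here are invented for illustration.

SENSITIVE_TERMS = {"catalyst x-17", "electrode coating", "batch 42 yield"}

def flag_sensitive(text: str, terms: set = SENSITIVE_TERMS) -> list:
    """Return, sorted, the sensitive terms that appear in `text`
    (case-insensitive substring match)."""
    lowered = text.lower()
    return sorted(t for t in terms if t in lowered)

def safe_to_submit(text: str) -> bool:
    """True only if no sensitive terms were found in the text."""
    return not flag_sensitive(text)

# Example: a draft sentence mentioning two sensitive terms is flagged,
# while a sentence about prior literature passes the screen.
draft = "Our electrode coating improved Batch 42 yield."
print(flag_sensitive(draft))
print(safe_to_submit("This paragraph discusses prior literature only."))
```

A screen like this supports, rather than replaces, the multi-stage human review of LLM input prompts mentioned below: it catches known terms cheaply, but only a reviewer can judge whether a paraphrase still describes the invention.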
Recognizing the reality of large corporate applicants and individual inventors, workplace policies regulating the use of LLMs may also form part of effective confidentiality strategies. These policies may include documentation requirements, multi-stage review of LLM input prompts, and auditing of LLM and other AI tool usage. Some policies may also require the cycling of LLMs and AI tools used, to avoid providing too much potentially sensitive data to a single tool. Applicants and inventors should also carefully consider the data policies of the LLMs and AI tools they use.
In summary, for IP-minded researchers and inventors, the risk-reward balance of using LLMs in contexts involving potential public disclosure and patentability appears to favour either avoiding their use altogether or, more practically, strategically limiting such use to aspects that could not constitute public disclosure of potential inventions.
The preceding is intended as a timely update on Canadian intellectual property and technology law. The content is informational only and does not constitute legal or professional advice. To obtain such advice, please communicate with our offices directly.
References
1. For example, see the “Declaration of generative AI and AI-assisted technologies in the writing process” in the paper: https://doi.org/10.1016/j.tifs.2024.104562.
2. For example, ChatGPT and OpenAI (see the subheading “Services for individuals, such as ChatGPT, Codex, and Sora” in the article: https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance); Anthropic (see the update: https://www.anthropic.com/news/updates-to-our-consumer-terms); and Gemini (see the guidance: https://cloud.google.com/gemini/docs/discover/data-governance).
3. For example, ChatGPT and OpenAI (see the subheading “Services for businesses, such as ChatGPT Business, ChatGPT Enterprise, and our API Platform” in the article: https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance).