
AI: Are You In Or Are You Out?


A significant minority of large enterprises (those with 1,000+ employees) have chosen to completely block access to public large language models (LLMs) on work systems. Recent surveys indicate that roughly one-quarter to one-third of such companies instituted outright bans on tools like OpenAI’s ChatGPT by late 2023 newsroom.cisco.com, securitymagazine.com.


For example, Cisco’s January 2024 Data Privacy Benchmark study (covering 2,600 organizations worldwide) found 27% of organizations had at least temporarily banned generative AI use in the workplace newsroom.cisco.com. Similarly, an October 2023 security leadership survey by ExtraHop reported 32% of respondents’ organizations banned employees from using generative AI tools securitymagazine.com.


This trend was already building in mid-2023: BlackBerry’s global poll of IT decision-makers showed 75% of organizations were implementing or considering bans on ChatGPT and similar AI apps darkreading.com (with 61% of those expecting the ban to be long-term). In absolute terms, given the number of large firms globally, these percentages suggest thousands of companies have put blanket prohibitions in place.



Rationale for bans: 


The drivers behind full blocks are primarily data privacy and security concerns. Companies worry that employees might inadvertently feed confidential information into external AI systems, which could then be exposed or used to train the AI. In Cisco’s survey, 69% of businesses cited threats to legal and intellectual property (IP) rights, and 68% cited risk of sensitive information disclosure (e.g. data leaking to the public or competitors) as top concerns with generative AI newsroom.cisco.com.


Other common worries include compliance with privacy laws, potential data breaches, and even reputational risk if AI outputs are incorrect or inappropriate. Some firms also fear productivity and quality issues – for instance, bankers in one industry survey expressed concern that AI’s sometimes inaccurate “hallucinations” could mislead staff or clients americanbanker.com, and that over-reliance on AI might erode employees’ skills over time.


Privacy laws like GDPR (in Europe) and sector-specific regulations (such as HIPAA in healthcare or FINRA rules in finance) amplify these concerns by prohibiting the sharing of personal or sensitive data with unvetted third parties. In short, many large enterprises felt a temporary ban was the safest immediate step while they evaluated long-term governance for AI.


Survey data from 2024 on employee and consumer concerns about generative AI tells the same story: protection of intellectual property and prevention of data leaks to the public or competitors are the top issues cited by organizations, with nearly 68–69% highlighting these risks newsroom.cisco.com. Accuracy of AI outputs and broader ethical implications are also significant concerns.

It’s worth noting that these bans were often proactive or precautionary. In several cases, companies instituted them right after ChatGPT’s release gained popularity, due to specific incidents (e.g. an employee exposing confidential code) or simply to get policies in place. Some of the bans are explicitly temporary. For instance, certain U.S. government agencies blocked AI tools “for the time being” to develop guidelines for future use fedscoop.com. Overall, roughly 25–30% of large firms globally have opted for a full block on public LLMs in 2023–2024, primarily to sidestep data privacy, security, and compliance risks newsroom.cisco.com.


Partial and Controlled Access Policies


While outright bans make headlines, most large companies have chosen a middle ground – allowing employees to use generative AI tools in a limited or controlled manner. Surveys show that a majority of organizations have established guidelines or technical restrictions rather than total prohibition. In Cisco’s global study, 63% of companies set rules on what data can be entered into AI (for example, forbidding any confidential or personal data in prompts), and 61% restrict which AI tools employees may use newsroom.cisco.com. In fact, almost all organizations surveyed had at least one such control in place by the end of 2023, indicating that very few are taking a laissez-faire approach newsroom.cisco.com. Another poll (Kong’s 2024 API Impact Report) likewise found 80% of organizations have guidelines or restrictions on AI use for their staff konghq.com. These partial measures span a spectrum of approaches:


  • Acceptable Use Policies: Companies are issuing clear policies detailing how generative AI may (or may not) be used. Common rules include banning input of sensitive data, prohibiting use of AI-generated content in official documents without review, or requiring anonymization of any real customer or employee information used in prompts. For example, the U.S. Department of Veterans Affairs allows AI experimentation but forbids inputting any private data into public AI systems fedscoop.com. Many firms require employees to attend training or acknowledge guidelines before using tools like ChatGPT. (Nonetheless, only about 42% of organizations had formally trained users on safe AI use as of late 2023 securitymagazine.com, indicating a gap between policy and education.)


  • Tool Selection and Whitelisting: Rather than permitting any AI app, companies often whitelist approved tools. For instance, some organizations allow Microsoft’s Bing Chat Enterprise or OpenAI’s ChatGPT Enterprise (which offers data encryption and no data retention for training) but block the free public ChatGPT site. According to Cisco, 61% of companies limit which generative AI platforms can be accessed newsroom.cisco.com. This might mean only company-vetted, enterprise-grade AI systems are enabled on the corporate network. We also see firms deploying in-house LLMs or sandboxed AI environments – letting employees use AI with company data internally so that nothing is sent to an external provider. For example, the Department of Energy (DOE) in the U.S. set up a secure generative AI testing sandbox for employees after blocking external AI, allowing experimentation in a controlled setting fedscoop.com.


  • Network and IT Controls: Many large enterprises use technical means to enforce AI restrictions. Company IT departments have added ChatGPT and similar domains to blocked-site lists on work networks or devices (especially at firms with strict bans). Others implement data loss prevention (DLP) software that detects and blocks certain information from being sent to AI services; a minimal sketch of such a filter follows this list. In highly sensitive environments, some organizations route employee internet traffic through VPNs or secure web gateways that filter requests to AI APIs – effectively acting as a guardrail. These measures align with the concept of unified endpoint management, which more than 80% of IT managers agree organizations are within their rights to apply darkreading.com. The downside is that tech-savvy employees can sometimes find workarounds.


  • Role-Based or Contextual Access: Another flavor of partial access is limiting AI usage to specific roles, departments, or tasks. A March 2024 survey of banks illustrates this: while 15% of banks banned AI for all employees, another 20% allowed it only for certain staff or use cases (e.g. data scientists or non-customer-facing work) americanbanker.com. Companies in consulting or software development might permit generative AI for coding help in sandbox projects, but not for client data analysis. Some organizations require management approval for each intended AI use case, effectively throttling unregulated usage.
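The controls above – data-input rules, DLP filtering, and role-based access – are typically enforced in software at the point where a prompt leaves the corporate environment. The sketch below is a minimal, illustrative Python example of such a prompt gate, assuming a hypothetical role allowlist and a handful of pattern-based detectors; the names (check_prompt, SENSITIVE_PATTERNS, ALLOWED_ROLES) and the patterns themselves are our own illustrations, not any vendor's product, and real DLP tools use far richer classification.

```python
import re

# Illustrative patterns a DLP-style filter might flag before a prompt leaves
# the corporate network; real deployments use far richer detection rules.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

# Hypothetical role allowlist: only these roles may reach approved AI tools.
ALLOWED_ROLES = {"data_scientist", "marketing_drafting"}


def check_prompt(prompt: str, user_role: str):
    """Return (allowed, possibly redacted prompt) for a single AI request."""
    if user_role not in ALLOWED_ROLES:
        return False, ""  # role-based block: this user may not use the tool
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED-{label.upper()}]", redacted)
    # Policy choice: redact and allow; a stricter policy could reject the
    # request outright whenever any pattern matches.
    return True, redacted


if __name__ == "__main__":
    ok, safe = check_prompt(
        "Summarise the account of jane.doe@example.com, SSN 123-45-6789",
        user_role="data_scientist",
    )
    print(ok, safe)  # True, with the email and SSN masked
```

In practice a gate like this would sit inside a secure web gateway, proxy, or browser plug-in, and whether flagged prompts are redacted, blocked, or merely logged is itself a policy decision.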


These controlled approaches show that enterprises are trying to balance innovation with oversight. They acknowledge the productivity benefits of tools like ChatGPT – using them for tasks such as code generation, drafting content, or research – but under conditions that minimize risk. Notably, even with guidelines in place, enforcement remains a challenge: surveys reveal that 60% of employees admit to ignoring or circumventing AI usage rules in their workplace konghq.com. Additionally, ExtraHop’s study found that despite 32% of companies banning AI, only 5% reported that employees never use these tools, implying many employees still find ways to use AI illicitly securitymagazine.com. This “shadow AI” usage is prompting companies to improve monitoring and perhaps loosen bans in favor of safer alternatives (since outright prohibition can be impractical to police).


In summary, partial restrictions, not outright bans, are the norm at most large firms. Roughly two-thirds of organizations allow some form of generative AI usage but with safeguards newsroom.cisco.com.


Common controls include limiting data inputs, restricting accessible AI tools, monitoring usage, and training employees on compliance. This approach reflects a recognition that generative AI can drive efficiency, so the goal is to manage the risk rather than eliminate the technology entirely.


Global Breakdown by Region


Adoption of AI restrictions varies across regions, influenced by regulatory climates and cultural attitudes toward data security. While quantitative data by region is limited, we can discern several patterns for complete bans vs. controlled use in major areas:


  • North America: The United States and Canada saw early corporate moves to restrict ChatGPT, especially in data-sensitive industries. Numerous Fortune 500 companies in the U.S. imposed bans or tight rules in 2023 – for instance, major banks like JPMorgan Chase and Bank of America, and tech firms like Apple and Verizon, all barred internal ChatGPT use over data-leak fears zoomrx.com. U.S. government agencies also reacted quickly: by early 2024, the Departments of Energy, Agriculture, Veterans Affairs, and others had temporarily blocked generative AI tools on federal networks fedscoop.com. These blocks were driven by concerns about safeguarding classified and sensitive information. In Canada, over a quarter of companies similarly banned GenAI outright, mirroring global stats hrreporter.com. Despite these restrictions, North America also has high AI adoption in many workplaces. By the end of 2024, about 72% of U.S. offices had integrated ChatGPT into workflows techradar.com – slightly below the 76% global average, possibly due to cautious policies in some firms. The gap in adoption has been attributed in part to corporate restrictions: many U.S. businesses without prior AI exposure have been hesitant or have policies slowing down use techradar.com.


    In essence, North America shows a dichotomy: a strong push to leverage AI (especially in the tech sector) tempered by strict compliance measures in sectors like finance, healthcare, and government.


  • Europe: European companies generally exhibit a more guarded approach to workplace AI, largely owing to strict data protection laws and regulatory scrutiny. The EU’s General Data Protection Regulation (GDPR) imposes heavy responsibilities on handling personal data, which makes many European employers wary of tools that could potentially mishandle such data. A notable event was Italy’s nationwide ban on ChatGPT in March 2023 due to privacy concerns, which was lifted after OpenAI implemented age checks and greater transparency agileblue.com. This episode alerted companies across Europe to the compliance risks of generative AI. As a result, many EU-based firms (especially in Germany, France, and Italy) temporarily suspended employee use of ChatGPT until they could verify GDPR compliance or until OpenAI offered assurances. By 2024, surveys suggested European organizations were implementing policies at rates similar to North American firms – e.g., about one in four had outright bans (in line with the global 27% finding newsroom.cisco.com). However, partial restrictions are more prevalent, with many EU companies opting for strict guidelines (what data can be input, requiring anonymity, etc.) instead of blanket bans. European financial institutions and public-sector bodies have been among the most restrictive (concerned with privacy and sovereignty of data), whereas sectors like manufacturing or retail in Europe may be comparatively more open with proper guidance.


    Upcoming regulations (the EU AI Act being formulated) also influence behavior: companies anticipate future compliance requirements and thus are proactively curbing AI use to avoid legal pitfalls. In summary, European enterprises are likely to restrict GenAI usage unless they are confident in compliance – leading to widespread controlled use and a fair share of outright blocks in 2023–2024.


  • Asia-Pacific: The APAC region displays high variability in LLM access policies, reflecting diverse regulatory regimes and tech landscapes. In East Asia, concerns over data leaks led several prominent companies to ban ChatGPT early. For instance, South Korea’s Samsung banned employee use of ChatGPT in May 2023 after an incident where engineers accidentally uploaded sensitive code to the AI businessinsider.com.


    Many Japanese corporations initially issued guidelines warning staff about inputting confidential information into AI; only a few outright banned it, as Japan’s culture leans toward finding a pragmatic balance (the government there has generally encouraged AI innovation while emphasizing data responsibility). A late-2023 survey by Japan’s business federations indicated nearly 50% of Japanese firms were drafting policies on ChatGPT use, often limiting usage to non-confidential contexts.


    In China, the situation is unique: the Chinese government blocks ChatGPT entirely as part of internet censorship and concerns over foreign AI influence. Consequently, multinational companies operating in China have no access to ChatGPT from corporate networks, effectively a government-mandated ban.


    Chinese firms use domestic generative AI alternatives (like Baidu’s ERNIE Bot), and even those are heavily monitored. In contrast, India has emerged as a very enthusiastic adopter of AI in the workplace. One global survey found 92% of surveyed workplaces in India were using ChatGPT by late 2024 techradar.com – the highest rate globally. This implies relatively fewer restrictions in Indian companies, possibly due to a burgeoning tech sector eager to boost productivity and a regulatory environment that, until recently, was less strict on data privacy (India’s Digital Personal Data Protection Act was only enacted in late 2023).


    Other regions like Southeast Asia and Oceania fall somewhere in between: for example, Australian firms are mindful of data protection but generally allow controlled use of AI, while Singapore’s finance regulators have advised caution, leading banks there to adopt internal AI solutions rather than public tools. Broadly, Asia-Pacific spans extremes – from very high adoption in regions like India (indicating minimal internal bans) to sweeping blocks in places like China – with many countries adopting pragmatic partial-use policies that emphasize security (especially in tech-forward economies like Japan, South Korea, Singapore).


  • Other Regions: In the Middle East, interest in AI is growing, but governments and large enterprises (especially in sectors like banking) tend to follow a conservative approach similar to Europe’s – although this is changing, and quickly. For instance, Gulf banks have issued advisories against using public AI with client data, yet the region is quick to embrace new technology that addresses privacy, data-loss, and security concerns, especially when those concerns are addressed as part of an "overall single app approach". Some countries are also influenced by religious or cultural considerations about AI-generated content, though data privacy remains the core concern.


    In Latin America, fewer large firms were early adopters of ChatGPT, and thus fewer formal bans have been reported; however, as AI interest rises, larger Brazilian or Mexican companies are starting to craft usage policies (often borrowing from U.S./European multinationals’ guidelines). We are likely to see those regions’ numbers align with global trends as awareness increases.


Regional differences in LLM restrictions tend to mirror regulatory stringency and local risk perception. Europe and North America have a high prevalence of both full and partial restrictions driven by laws and litigation fears. Asia-Pacific is split, with some of the highest AI usage (e.g. India) but also notable country-level bans (China) and company bans (South Korea). All regions share a common theme: organizations act in accordance with how much they trust the legal/regulatory framework to manage AI risks, often erring on the side of caution where uncertainty exists.


Industry Context: Sectors with High Restriction Rates


The degree to which companies restrict generative AI often depends on the industry. Sectors handling very sensitive data or operating under strict compliance requirements have been far more likely to ban or tightly limit AI access than those in less-regulated environments. Below, we examine key industries highlighted for their cautious stance – banking/finance, healthcare (pharma/biotech), government, and technology, including which types of organizations are most restrictive and why.


Banking and Finance


Financial institutions have been at the forefront of imposing AI restrictions, given the high stakes of data confidentiality and regulatory compliance in this sector. Surveys in early 2024 showed that banks are widely enacting controls on generative AI. In an American Banker survey of finance professionals, 30% of banks had some form of ban on gen AI tools for employees americanbanker.com.


Notably, 15% of banks reported a complete ban for all staff, and another 20% allowed AI use only for specific employees or use cases (with everyone else barred) americanbanker.com. This indicates nearly one-third have firm prohibitions, and many others apply selective limits. Indeed, in 2023 many Wall Street institutions reacted swiftly to ChatGPT’s rise: JPMorgan Chase, Bank of America, Goldman Sachs, Citigroup, Deutsche Bank and others all banned employees from using ChatGPT at work zoomrx.com. These banks cited reasons such as the risk of exposing private financial data, client information, or proprietary trading algorithms by typing them into an external system.


Regulatory drivers are strong in finance. Banks must comply with privacy laws (like GDPR or GLBA), but also regulations on outsourcing technology and data residency. Inputting data into ChatGPT could be seen as sharing with a third-party processor without proper due diligence, which compliance officers flagged as unacceptable. Moreover, cybersecurity departments in banks are wary of “unsecured apps” – BlackBerry’s 2023 research noted that 83% of IT leaders feared unmonitored generative AI apps pose a cybersecurity threat to the IT environment darkreading.com.


There’s also a client trust and legal liability aspect: if a financial advisor or analyst used ChatGPT and it produced an inaccurate answer that was relayed to a client, the bank could face significant legal risk. This concern about AI’s tendency to generate plausible-sounding but wrong information (hallucinations) is frequently cited by bankers americanbanker.com.


Given these factors, finance sector companies are among the most likely to enforce strict AI usage policies. Many started with outright bans in 2023. As we move into the latter part of 2025, some large banks have begun cautiously exploring AI in controlled ways – for example, using private LLM instances or vendor solutions explicitly designed for banking that don’t expose data. However, until those solutions mature, the default stance for most big banks and insurance firms is “block first, then slowly enable with guardrails.” Financial firms also often coordinate with each other on best practices (through industry groups), and currently the conservative approach is prevalent.


Healthcare and Pharmaceuticals


The healthcare, biotech, and pharmaceutical industries handle extremely sensitive information, from patient health records to clinical trial data, making them similarly cautious about generative AI. A survey reported in April 2024 found that most pharma and biotech companies had banned or restricted ChatGPT usage by employees zoomrx.com. About 50% of pharma/biotech firms overall disallowed or tightly limited the tool, and among the largest 20 pharmaceutical companies, this jumped to 65% having restrictions in place zoomrx.com.


This high rate reflects deep concerns about patient privacy (protected by laws like HIPAA in the US and similar regulations globally) and protection of valuable research and IP. Pharmaceutical companies fear scenarios such as a researcher inadvertently inputting a confidential drug formula or trial results into ChatGPT – effectively leaking proprietary science that could later surface elsewhere. Indeed, the primary reason cited by nearly all respondents in the pharma survey was “security and the potential to leak internal data” zoomrx.com.


Hospitals and healthcare providers are also wary. Many hospital IT departments have disabled access to ChatGPT on work devices unless through a vetted interface. The concern is that a doctor or administrator might include personally identifiable patient details in a query (say, to draft a letter or summarize a case), which would violate privacy regulations.


There’s also uncertainty about how AI services store input data; until it’s guaranteed that no patient data is retained or shared, many healthcare organizations prefer to block these tools outright. Some large healthcare networks in the U.S. and Europe announced bans in 2023 pending further review by their compliance boards.


However, like in finance, the healthcare sector sees great potential in AI if handled correctly; for example, assisting with medical record summaries or research synthesis. Therefore, some companies are pivoting to controlled solutions: a few big pharma firms have reportedly built internal GPT-like models trained on publicly available scientific literature (avoiding any patient data) so that employees can use AI without risking leaks. Others allow ChatGPT for general non-sensitive tasks but implement strict training and monitoring. Notably, 80% of pharma employees surveyed believed AI is overrated and only 25% used ChatGPT more than once a week zoomrx.com – indicating that, at least in 2023, adoption was slow either due to bans or skepticism.


This might change as comfort grows. For now, healthcare and pharma remain among the top industries to prohibit generative AI on work systems, with data privacy and intellectual property protection being the driving forces.


Government and Public Sector


Government agencies worldwide have shown some of the strongest aversions to unrestricted AI usage. Because governments handle classified information, sensitive citizen data, and critical infrastructure, the introduction of tools like ChatGPT has been met with intense scrutiny. In the United States, by the end of 2023, a number of federal agencies had explicitly banned or blocked access to ChatGPT on government-issued devices and networks. For example, the Department of Defense (Pentagon) and some intelligence agencies reportedly restricted AI use early on. In civilian agencies, the Department of Energy’s CIO temporarily blocked ChatGPT for all employees in January 2024 fedscoop.com. The Department of Veterans Affairs likewise confirmed that neither ChatGPT nor similar AI services were accessible on the VA network fedscoop.com. The Department of Agriculture went as far as banning the use of ChatGPT and other external generative AI tools on any government equipment fedscoop.com, after assessing the risk level as “high.” Even the Social Security Administration issued a temporary block fedscoop.com. The rationale in all these cases is consistent: until proper safeguards and approved use cases are established, the default is to prevent any chance of sensitive governmental data leaking or being inadvertently shared with an AI that’s not under government control.


Different government bodies have slightly different approaches. Some, like the DOE, are simultaneously working on approved AI solutions (e.g., via Microsoft Azure’s OpenAI service or other cloud providers) to allow some generative AI use internally under tight oversight fedscoop.com. The Department of Homeland Security, for instance, conditionally approved a few AI tools (including ChatGPT and Bing Chat Enterprise) for limited pilot programs with employee training and case-by-case approval of use cases fedscoop.com. This indicates that the public sector is not monolithic – some agencies are testing the waters in secure environments, even as others implement broad bans. Agency heads are forming task forces (e.g., the “ChatGPT Taskforce” in some European governments) to study how AI can be used responsibly in government operations without exposing data or violating procurement rules.


Outside the U.S., other governments reacted in 2023: Italy’s national ban on ChatGPT (though temporary) effectively forced public-sector offices in Italy to block the tool until compliance fixes were in place agileblue.com. In France, the data protection authority (CNIL) issued guidelines discouraging use of services like ChatGPT with personal data, which influenced many French ministries to restrict employee usage. Some UK departments initially banned staff from using ChatGPT, though one large department (DWP – Department for Work and Pensions) later reversed its ban and moved to an “explore with caution” stance in 2024, showing the evolving mindset.


Overall, the government sector tends toward the stricter end of the spectrum: likely a majority of large government agencies (especially at federal/national levels) have either full bans or very stringent partial restrictions on generative AI. The motivations are clear – national security, privacy of citizens, and regulatory compliance. Governments also have unique considerations like preservation of records (e.g., concerns that AI-generated content or queries could complicate record-keeping and transparency requirements). Until robust governance frameworks for AI are established (some governments are actively drafting these), public sector entities will continue to default to a highly cautious approach, generally allowing AI only in controlled pilot projects if at all.


Technology Sector


It might seem counterintuitive, but even many leading tech companies – the very creators and power-users of AI – implemented restrictions on generative AI for their employees. The reasoning here is mostly about protecting intellectual property and sensitive code. Tech firms often have vast troves of proprietary data (source code, product roadmaps, client information in cloud services, etc.), and an accidental leak via an AI prompt could be disastrous competitively.

 

Apple is among several tech giants that internally banned or restricted ChatGPT and similar AI tools, citing the risk of confidential data leaks businessinsider.com. Apple’s example is illuminating: in May 2023, the company prohibited employees from using ChatGPT, GitHub Copilot, and other external AI tools after an internal review (reported by The Wall Street Journal) raised concerns that employees might divulge confidential information to these tools businessinsider.com.


This came on the heels of incidents at other companies; notably, Samsung in South Korea had multiple instances where engineers input proprietary source code into ChatGPT (perhaps to get debugging help), and that data could then reside on OpenAI’s servers businessinsider.com.


Samsung swiftly banned generative AI use on company networks and devices to stop further leaks. Similarly, Amazon advised its developers not to share any Amazon code with ChatGPT and reportedly restricted its use, given that Amazon has its own competing AI initiatives and a trove of sensitive data. Google, which has its own LLMs (Bard), informally discouraged use of competing AI like ChatGPT and had strict policies on internal use of any AI (employees were warned about data entry just as at other firms).


Other big names like Microsoft (an investor in OpenAI) allowed some internal ChatGPT use but through monitored channels, whereas Meta (Facebook) allegedly blocked access to external LLM tools as it was developing its own models (and to avoid leaks). Enterprise tech companies like Oracle and Verizon also banned or limited ChatGPT, with Verizon citing the need to protect customer data and network security zoomrx.com. Even Spotify and Walmart were reported to have placed restrictions on AI usage at various points, largely to ensure no sensitive business info was shared. In total, at least a dozen major tech-oriented companies publicly announced AI use restrictions in 2023 businessinsider.com, and many more did so behind closed doors.


The tech industry’s stance is nuanced. On one hand, these companies recognize the immense productivity and innovation potential of generative AI. In fact, tech firms are often building their own AI tools: for example, GitHub (owned by Microsoft) offers Copilot for code, and many companies encourage using internal AI-assisted coding tools. On the other hand, until they have AI solutions that guarantee data stays in-house, they are wary of employees using third-party services. This is why we see a trend of tech companies moving to enterprise subscriptions or self-hosted models. For instance, some companies that banned ChatGPT early have since adopted ChatGPT Enterprise, which promises not to use customer prompts for training and offers enterprise-grade encryption.


By late 2024, reports suggested over 80% of Fortune 500 enterprises were exploring or using ChatGPT Enterprise or similar LLMs in some capacity masterofcode.com – implying that tech firms might relax outright bans when a safer version is available.


Another driver in tech is the competitive aspect: AI models themselves are a competitive asset. Companies like Google, Meta, and OpenAI (with Microsoft) are in a race; thus, a company might ban an external AI partly to encourage use of its own platforms or to prevent inadvertently strengthening a rival’s AI with its data.


The technology sector initially responded to generative AI with heavy internal restrictions – emblematic cases being Apple and Samsung blocking ChatGPT – primarily to secure intellectual property and user data. Over 2024–2025, some of these firms are transitioning to a more permissive model under controlled conditions, such as using secured/enterprise AI solutions and embedding AI into their own products.


Tech companies understand AI well, so rather than an enduring blanket ban, we’re seeing them implement nuanced policies: e.g. “You can use AI, but only our approved AI tools or only on our private cloud.” The result is that tech sector employees often have access to powerful AI, but within a walled garden. Firms in this sector continue to refine their policies as both the threat landscape and AI capabilities evolve.
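As a concrete illustration of that "walled garden" pattern, the sketch below shows how an internal helper might route all employee prompts to a company-controlled, OpenAI-compatible endpoint (for example an Azure OpenAI deployment or a self-hosted model behind the corporate gateway) rather than a public chatbot. The endpoint URL, environment variables, and model name are placeholders invented for illustration; the request body follows the widely used chat-completions format, but this is a sketch under those assumptions, not any specific vendor's documented integration.

```python
import os

import requests

# Placeholder for a company-controlled, OpenAI-compatible endpoint (e.g. an
# Azure OpenAI deployment or a self-hosted model behind the corporate gateway).
INTERNAL_AI_URL = os.environ.get(
    "INTERNAL_AI_URL",
    "https://ai-gateway.example.internal/v1/chat/completions",
)


def ask_approved_model(prompt: str) -> str:
    """Send a prompt to the approved internal endpoint instead of a public tool."""
    response = requests.post(
        INTERNAL_AI_URL,
        headers={"Authorization": f"Bearer {os.environ['INTERNAL_AI_TOKEN']}"},
        json={
            "model": "approved-internal-model",  # placeholder deployment name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

The design point is that employees keep the productivity benefit while prompts, outputs, and logs stay inside infrastructure the company controls and can audit.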


Key Drivers and Trends Shaping AI Access Policies


Several key forces are shaping corporate policies on LLM access in 2024–2025, cutting across regions and industries. Understanding these drivers helps explain why restrictions are in place and how they might change over time:


  • Data Privacy and Compliance: Protecting sensitive data is the number-one driver of AI use restrictions. Companies must comply with privacy regulations (GDPR, CCPA, HIPAA, financial privacy laws, etc.), which often means they cannot legally upload personal or confidential data to external systems without safeguards. Generative AI tools create a gray area: is inputting data into ChatGPT a data transfer to a third-party processor?


    Many legal teams treated it as such, thus forbidding use with any personal data. The GDPR’s hefty fines for mishandling EU residents’ data, for example, made EU companies quick to curtail AI use until compliance could be assured. Additionally, concerns over intellectual property rights fall under this umbrella: businesses worry that if proprietary data or code is fed into an AI, they might lose exclusive control or even violate export control laws (in defense sectors). Data residency requirements (keeping data within certain jurisdictions) also conflict with using global AI services. These compliance concerns create a strong incentive to ban or lock down AI until solutions (like on-premise LLMs or contractual agreements with AI providers) are in place. In Cisco’s survey, 92% of privacy professionals agreed that GenAI “requires new techniques to manage data and risk”, underscoring that existing compliance frameworks weren’t enough hrreporter.com. Companies are thus developing AI-specific data governance policies as part of their privacy programs.


  • Security and Cyber-Threats: Cybersecurity teams view unregulated AI apps as potential threats. One issue is that if employees paste internal information into a public AI, that information might be compromised (as discussed).


    Another angle is that generative AI tools themselves could be hijacked or used maliciously – for instance, an employee might retrieve code or documents from an AI that contain malware or biased content. In a late-2023 security survey, 83% of IT leaders voiced concern that unsecured generative AI apps pose a cybersecurity risk to their environment darkreading.com. Furthermore, threat actors can use tools like ChatGPT to generate convincing phishing emails or malware, which has led some companies to worry that allowing AI access could inadvertently aid attackers (this was more a concern for use of AI by malicious insiders or external attackers, rather than employees, but it factors into security postures).


    Some organizations have cited corporate reputation damage as a risk – e.g., if an AI were to produce inappropriate outputs that became public or if a company’s data leaked via AI, it would erode customer trust. Indeed, 91% of businesses in Cisco’s study said they need to do more to assure customers about responsible AI data use newsroom.cisco.com. All these security considerations push companies toward tight control of AI access, at least until they can deploy monitoring tools. We see startups and products emerging (so-called “AI firewalls” or gateways) which promise to monitor and secure AI usage;


"Adoption of those could allow more open usage in the future. But in 2024, security-first thinking has meant restrict first, then allow gradually". - Darren Wray Co-Founder Contextul

  • AI Governance and Regulations: Beyond existing privacy laws, the broader regulatory environment around AI is a major driver. Governments and standards bodies are beginning to formulate AI governance frameworks (e.g., the EU AI Act, US NIST AI Risk Management Framework, etc.).


    Companies are keeping a close eye on these. In anticipation, many have set up internal AI ethics committees or task forces to draft interim rules.


    According to one report, 90% of business leaders want government involvement in regulating AI, with 60% favoring mandatory regulations for AI use securitymagazine.com. This somewhat counterintuitive finding (businesses asking for regulation) stems from the desire for clear rules of the road: many firms prefer not to be the first to take risks with AI without guidance. In 2023, we saw industry associations publish guidelines (e.g., Britain’s ICO released guidance on LLMs and data protection).


    These nascent governance efforts cause companies to be cautious now so they won’t run afoul of likely upcoming rules. Also, sector-specific guidance influences policies: financial regulators in various countries issued warnings in 2023 about using AI without proper oversight, and healthcare regulators did similarly for patient data. Companies that are regulated tend to implement whatever the strictest guidance is, to be safe. All this results in a trend: corporations establishing AI use policies (“AI governance playbooks”) in 2024 in line with emerging best practices – typically meaning restrictions on use, risk assessments for any AI project, and top-level approval for exceptions. As these governance frameworks mature, they might enable more consistent and possibly more permissive use of AI, but for now they reinforce a controlled approach.


  • Productivity, Accuracy, and Quality Control: Interestingly, while boosting productivity is a key reason companies want to use AI, concerns about productivity and quality also motivate some restrictions. Managers have observed that generative AI can sometimes produce incorrect or low-quality outputs that employees might take at face value.


    In fields like law, finance, or medicine, an incorrect answer could be catastrophic. Thus, companies impose rules to ensure human oversight and verification of AI outputs (some have policies that AI-generated content must be reviewed by a subject matter expert before use).


    A number of companies expressed concern that easy AI answers might make employees less adept at critical thinking or original writing – e.g., a banker worried “we will lose creative and unique thinking” if people accept AI’s first answer americanbanker.com.


    There’s also the issue of time wasting or misuse: without guidelines, employees might use ChatGPT for non-work-related queries or over-rely on it for tasks they should do themselves, potentially impacting productivity negatively. However, it’s worth noting surveys still show more optimism about AI’s positive impact; the concerns about “diminished skills” are present but less quantifiable. Some firms have tackled this driver by encouraging use in certain areas (to enhance productivity) but banning it in others (where accuracy is paramount). For example, a company might allow AI to help in brainstorming or first drafts of marketing copy, but ban its use for any client-ready financial analysis. Quality control concerns thus shape fine-grained policy: they often underlie partial restrictions (allowed in some workflows, not in others) rather than full bans, unless the quality risk intersects with a compliance risk. In sum, to ensure high-quality outputs and preserve employee expertise, companies are imposing checks on how AI can be used, not just whether it can be used.


  • Balancing Innovation with Risk (Future Trends): A clear emerging trend is the attempt to re-balance policies as organizations learn more about AI. Many companies that slapped on broad bans in 2023 did so as a knee-jerk risk response. By 2024, some realized this could hamper innovation or put them behind competitors. Indeed, despite restrictive instincts, over 80% of IT leaders still favor using generative AI for benefits like efficiency and innovation darkreading.com. We are starting to see a nuanced, risk-based approach: companies maintaining strict rules on sensitive data, but beginning to experiment with AI in low-risk areas. The introduction of enterprise-grade AI offerings (like OpenAI’s business-focused services, Microsoft’s Azure OpenAI with data isolation, Anthropic’s Claude for enterprises, etc.) is a game changer. These tools address many data security concerns by keeping an organization’s prompts and outputs segregated.


    As a result, some firms that had bans are lifting them for these vetted tools. For example, several Wall Street banks that banned ChatGPT in early 2023 were reported in 2024 to be testing private LLMs internally, indicating a shift from “no AI” to “controlled AI”. The trajectory likely will follow past tech adoption curves (similar to how some companies once banned cloud storage or USB drives, then adopted secure versions).


    We can expect that as regulations clarify and trust in AI platforms grows, many current restrictions will be revised or relaxed. The trend is toward “trust but verify”, integrating AI with robust monitoring.


    Another trend is industry collaboration: companies are sharing frameworks for “responsible AI use” within industry consortia. In 2025, we anticipate more standardized approaches, possibly reducing how many firms feel the need for total blocks. Nonetheless, events like any major AI-related breach or highly publicized misuse could swing the pendulum back toward caution. So this balance is dynamic, with policies being constantly refined.
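As referenced in the security discussion above, the emerging "AI firewall" or gateway products generally start from a simple idea: outbound AI traffic is only permitted to vetted endpoints. The sketch below shows that host-allowlist check in minimal form, with hypothetical host names of our own invention; production gateways layer TLS inspection, DLP scanning, and logging on top of such a check.

```python
from urllib.parse import urlparse

# Hypothetical allowlist an AI gateway might enforce: only vetted,
# enterprise-grade AI endpoints are reachable from the corporate network.
APPROVED_AI_HOSTS = {
    "ai-gateway.example.internal",         # self-hosted / sandboxed model
    "approved-enterprise-ai.example.com",  # vetted enterprise AI service
}


def is_request_allowed(url: str) -> bool:
    """Allow an outbound AI request only if it targets an approved host."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS


# A public chatbot endpoint is blocked; the internal deployment is allowed.
print(is_request_allowed("https://chat.public-ai.example.com/api"))       # False
print(is_request_allowed("https://ai-gateway.example.internal/v1/chat"))  # True
```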


Overall, the key forces – legal compliance, security, governance standards, and productivity/quality considerations – all currently lean toward caution and control, which explains the widespread restrictions. But counter-forces – competitive pressure to innovate with AI and the gradual mitigation of risks through enterprise solutions – are encouraging a careful opening-up. The state of play in 2025 is a result of companies trying to reconcile these factors: harness AI’s value without incurring its risks.


Challenges and Gaps in the Data on AI Restrictions


It is important to note that quantifying how many companies have restricted LLMs comes with significant challenges and data gaps. The figures and trends reported above are the best available estimates, but they should be interpreted with caution due to the following issues:


  • Limited Public Disclosure: There is no global registry or requirement for companies to announce their AI usage policies. Many organizations implement restrictions quietly, via internal IT policy updates or memos, without making any public statement. We typically learn of corporate bans either through media reports (often leaks to journalists) or surveys where companies self-report policies. This can lead to undercounting or bias – e.g., smaller firms or those in regions with less media coverage might also have restrictions that simply weren’t reported. The data skews toward known cases (mostly large, high-profile companies) and those willing to respond to surveys.


  • Survey Methodology and Definitions: Different studies measure different things, which can be confusing. Some surveys ask “Have you banned generative AI?” and count yes/no (yielding the ~27% figure globally newsroom.cisco.com). Others include wording like “implementing or considering bans” (like BlackBerry’s 75% figure darkreading.com), which inflates the number by including tentative plans. There’s also variance in what constitutes a “ban” – e.g., does restricting ChatGPT but allowing another AI count as a ban? Does a temporary block pending policy count as a full ban? These nuances mean one survey’s “ban” might be another’s “partial restriction.” For instance, ExtraHop found 32% banned AI use while also noting only 46% had any AI policy at all securitymagazine.com.


    This implies some organizations without formal policy still had an informal ban, or respondents interpreted questions differently. Ambiguity in definitions makes it hard to pin down exact numbers. We have tried to use consistent categories (full ban vs partial control), but not all sources separate them cleanly.


  • Underreporting of Non-Compliance: Just because a company has a restriction doesn’t mean it’s effective. As noted, employees often circumvent rules. Thus an organization might report “Yes, we ban ChatGPT,” while in practice many employees still use it in secret. The survey finding that only 5% of companies with bans believed no one was using AI on the sly highlights this gap securitymagazine.com.


    Likewise, Kong’s finding that 60% of workers bypass AI rules means a lot of “banned” companies still have AI usage konghq.com. This complicates answering “how many companies have implemented restrictions” – many have implemented them on paper, but actual enforcement varies. Our focus has been on policies on the books, not whether they’re airtight in practice.


  • Dynamic, Fast-Changing Situation: The timeframe here is very tight. ChatGPT only came out in late 2022; by 2023, companies scrambled to respond. The data from 2024 may already be outdated by mid-2025 as companies adjust policies. For example, some firms that banned AI in 2023 could have reversed or eased those bans in 2024 after adopting safer AI tools – but those changes might not be captured unless a new survey is done. Conversely, some that allowed AI might have tightened policy after a scare. The “most recent data available in 2025” is somewhat fragmentary; we have the DeskTime study through end of 2024 techradar.com and a few early 2025 commentaries, but comprehensive 2025 surveys haven’t fully come out yet.


    Therefore, there is a data gap for 2025 proper, and we rely on late 2024 as a proxy. Trends suggest increasing adoption of controlled AI (so possibly fewer absolute full bans than in 2023), but we lack hard numbers for that yet.


  • Regional and Sector Gaps: While we provided regional and industry breakdowns qualitatively, precise data per region/sector is often lacking. Surveys like Cisco’s cover multiple countries and industries and report overall averages.


    We rarely get a cut like “X% of European companies vs Y% of Asian companies ban AI” in publicly available form. Exceptions include the ZoomRx survey focusing on pharma zoomrx.com and the American Banker survey for finance americanbanker.com.


    Other sectors (e.g., legal firms, education institutions, etc.) might have their own patterns, but data is sparse or anecdotal. Thus, our analysis leans on combining different sources and news reports to fill these gaps, which introduces some uncertainty.


  • Bias and Credibility of Sources: It’s worth noting that some data comes from organizations with a potential bias. For instance, BlackBerry (a cybersecurity vendor) highlighting “75% considering bans” could be seen as emphasizing the need for their security solutions. We included multiple sources (Cisco, a respected tech firm’s study; academic/industry surveys; media reports) to balance this. Whenever possible, we cited primary research or well-regarded publications. Still, one should consider that not all companies would publicly admit to using or not using AI, depending on the image they want to project. Some might overstate bans to appear careful, or understate them to appear innovative. We assume the survey data is truthful, but these subtleties are a backdrop to interpreting the numbers.


In light of these challenges, our analysis combined quantitative survey data (for percentages) with qualitative insights and examples to present a comprehensive picture. The numbers (e.g., ~25–30% with full bans, ~60–70% with partial restrictions) are approximations that appear consistently across multiple sources newsroom.cisco.com.


The real situation is complex: virtually no large company today has zero policy – most have something in place – but the strictness of those policies exists on a continuum. The data available give us strong confidence in the overall trend: a large majority of big companies worldwide have implemented at least some restrictions or guidelines on generative AI use (on the order of 80% or more konghq.com), and a notable minority (roughly one-quarter) have gone as far as completely blocking such tools for the time being newsroom.cisco.com. The exact counts will continue to evolve as more data emerges and as companies refine their stance.


Table: Adoption of LLM Usage Policies in Large Enterprises (2023–2024)

Cisco Data Privacy Benchmark (Jan 2024)
Scope: 2,600 organizations across 12 countries, mixed industries newsroom.cisco.com
Full ban: 27% banned GenAI tools, at least temporarily newsroom.cisco.com
Partial restrictions: ~63% limit data inputs; 61% restrict which AI tools may be used newsroom.cisco.com (many apply multiple controls)

ExtraHop “GenAI Security” Survey (Oct 2023)
Scope: ~500 IT/security leaders, global securitymagazine.com
Full ban: 32% banned employee use of generative AI securitymagazine.com
Partial restrictions: only 46% had any AI usage policy at all securitymagazine.com (implying ~14% had non-ban policies; the remainder were ungoverned)

BlackBerry Survey (Aug 2023)
Scope: 2,000 IT decision-makers, global darkreading.com
Full ban: 75% implementing or considering bans darkreading.com – note this includes planned bans, not just enacted ones
Partial restrictions: not explicitly quantified; implied ~25% not considering bans, many likely using interim guidelines

American Banker – Finance Sector (Mar 2024)
Scope: finance organizations, U.S.-centric americanbanker.com
Full ban: 15% banned AI for all employees americanbanker.com (an additional 15% reported no usage at all, whether by policy or lack of adoption)
Partial restrictions: 20% allowed AI for limited roles only americanbanker.com; 26% had no ban yet but were considering a policy change


Sources: Cisco 2024 Privacy Benchmark newsroom.cisco.com; ExtraHop 2023 report securitymagazine.com; BlackBerry/PrNewswire 2023 darkreading.com; American Banker 2024 (Arizent survey) americanbanker.com. Percentages are rounded. “Partial restrictions” encompass any formal limitations short of a total ban.


Conclusion


By 2025, the landscape of corporate generative AI use is one of cautious adoption under guarded conditions. The data shows that a majority of large companies worldwide have introduced restrictions – whether through strict bans (for roughly one-quarter of firms) or via detailed usage policies and technical controls (covering most of the rest) – to mitigate the risks of tools like ChatGPT newsroom.cisco.com. These measures are especially pronounced in industries such as finance, healthcare, government, and tech, where data sensitivity and regulatory oversight demand vigilance. Regionally, organizations in North America and Europe have led in imposing guardrails (driven by privacy laws and security concerns), while Asia-Pacific companies present a mixed picture, from very high AI adoption in some locales to stringent blocks in others techradar.com, businessinsider.com.


Crucially, these restrictions are not static. They reflect the nascent state of AI governance: firms are waiting for clearer rules and developing trust in enterprise AI solutions. Key trends indicate that some initial bans may soften as safer AI offerings and best practices emerge, allowing companies to reap AI’s benefits in a controlled manner. In the interim, however, the prevailing corporate stance is prudence. Data privacy fears, security risks, compliance uncertainty, and concerns over misinformation have understandably made organizations hit the brakes on unfettered AI access, even as they recognize the technology’s transformative potential darkreading.com.


As of 2025, the majority of large global companies have implemented restrictions on generative AI access on work devices – with estimates ranging from about 25–30% instituting total bans newsroom.cisco.com, securitymagazine.com, and another 50–60% enforcing partial or conditional use policies newsroom.cisco.com.


Only a small fraction have no controls at all. These numbers will likely continue to evolve, but they underscore a clear reality: generative AI in the workplace is being embraced under watchful eyes, with companies striving to strike the right balance between innovation and responsibility in this fast-moving domain.


Sources: Recent industry and security surveys and credible media reports were used to compile this analysis. Key references include Cisco’s 2024 Data Privacy Benchmark Study newsroom.cisco.com, BlackBerry’s 2023 global poll on ChatGPT risks darkreading.com, an ExtraHop cybersecurity survey securitymagazine.com, sector-specific findings for pharma zoomrx.com and banking americanbanker.com, the DeskTime 2024 global AI adoption report techradar.com, and numerous reports of company-specific policies (e.g., Apple, Samsung, JPMorgan) zoomrx.com, businessinsider.com. These sources are cited in-line to support each factual claim made.

 
 
 



©2025 Contextul Holdings Limited
