Thinking about Forcepoint

I went to work for Forcepoint in 2020 and spent a few weeks shy of three years there. I really liked the products and the people I worked with in the Technical Learning Services group. They were a smart, capable, and passionate bunch. I thought the company had a good vision for where it wanted to be in the market, but I always had doubts about leadership’s ability to execute on that vision. That skepticism was repeatedly reinforced by changes in senior leadership and sudden shifts in strategy. In fact, the month that I (and my manager and much of my team) were laid off, the very training and enablement work I was doing was called out in a company meeting as essential to the company’s plans for the next year. I wasn’t named, but the specific work I and some of my teammates were doing was. In short order, I went from “man, they really see how important this is to adoption” to “WTF?”. Within a year or so, many of the colleagues who hadn’t been laid off had left for competitors, both startups and established ones.

Every once in a while, I feel like taking a look at whether Forcepoint has been able to execute on that vision, or whether it even still holds a similar one. Looking at where we are now, the contrast against the 2023–2024 period is stark. Back then, the industry was undergoing a massive consolidation. Analysts like Gartner were essentially forcing the market into the Single-Vendor SASE box. If you weren’t “converged,” you were legacy.

The 2023–2024 Hype Cycle vs. Reality

In 2023, the buzz was all about the “Death of the Point Product.” The 2023 Gartner® Magic Quadrant™ for Single-Vendor SASE had everyone scrambling to prove they could handle both the networking and the security in one go. At the time, Forcepoint was labeled a Visionary.

Checking back in now, I’m looking to see how they survived that transition. By the time the 2024 Gartner SASE analysis rolled around, the market began to consolidate around “Leaders” like Palo Alto and Zscaler, while Forcepoint shifted toward the Niche Player category. The analysts were right: having deep, complex DLP is great, but if the customer experience and adoption lag, it’s a hard climb against the platform giants.


Risk-Adaptive Protection in the Age of Agentic AI

In late 2023, the pitch was: “Trust the platform and its behavioral analytics to dynamically adjust the friction.” It could be a tough sell. Security teams were, and still are, notoriously allergic to “black box” automation that might block a valid transaction by important personnel or clients based on an “Indicator of Behavior.”

Fast forward to 2026. The market has moved from simple human users to Agentic AI—autonomous bots moving data between SaaS apps without a human ever clicking “send.” This makes the 2023 versions of “User Behavior Analytics” look almost quaint. Forcepoint has pivoted to what they’re calling AI-native Data Security, integrating DSPM (Data Security Posture Management) to find “Dark Data” in AI pipelines.

I don’t have much doubt that they’re capable of delivering the product. I’m still skeptical about their ability to catch fire in the market. Do they know how to sell it? Do they know how to get the customers trained and supported after they do?


The Specialist’s Dilemma

The Forrester Wave™: Data Security Platforms, Q1 2023 highlighted a shift toward “holistic” data suites. Forcepoint was always a specialist here, but specialists often get eaten by generalists.

Checking the Forrester Wave™ for Q1 2025, they remain a Strong Performer, receiving top scores in Data Classification and DLP. Forrester’s note was telling: “Organizations requiring mature DLP… should consider Forcepoint.” It’s a polite way of saying they are still the best at the “old-school” heavy lifting, even if they aren’t leading the pack in the new platform-consolidated world.

I keep an eye on them mostly because I worked there and I really thought they were doing some cool things with the technology while I did. I am aware that they also represent a specific philosophy: that data context may be the only thing that really matters. After all, what is it that bad actors are looking for when they breach an organization’s defenses? It’s data, whether they plan to use it for nefarious purposes or just hold it for ransom. Forcepoint’s platform matters, and I sometimes wish I still had the insider view I had when I worked there. Forcepoint’s current stance and position in the market is a litmus test for two things: whether a refined, data-centric approach can still win in a world where AI is just a new way to lose data, and whether the company’s leadership can articulate, sell the value of, and support that approach in a competitive market.


2026 Data Loss Prevention Course Idea

This is just an exercise to keep my own skills alive while I’m not in a training and enablement role. This is a course in Data Loss Prevention and Data Security Posture Management created using publicly available materials such as YouTube videos, industry analyst reports, and vendor-provided information. While I did work for Forcepoint as a Learning Program Manager from 2020 to 2023, I haven’t been focused on cybersecurity work since then. This hasn’t been vetted by any of my former colleagues or any cybersecurity SMEs as I would do with a real course.

DLP 2026: Autonomous Data Security

Strategic Training for the Era of Agentic AI and Cloud Governance

Hour 1

Module 1: The Modern Threat Landscape

1.1 The Death of the Perimeter

In 2026, the traditional network perimeter is dead. Data is “liquid,” flowing through autonomous agents, personal LLM prompts, and hybrid cloud silos. Historically, we guarded the network gate; today, the gate is gone. We must move from guarding gates to protecting the Data Identity: the data itself, regardless of where it travels. We call this Data-Centric Security.

1.2 Shadow AI & Agentic Risk

Understanding “Agentic AI”—autonomous systems that execute tools (APIs, databases) rather than just chatting. Shadow AI occurs when employees “paste” sensitive IP into unvetted AI tools. In 2026, this volume has increased 6x since 2023. Agentic Risk involves AI agents (like Microsoft 365 Copilot) that have permission to read everything and might inadvertently leak it to the wrong person during a chat.

1.3 Case Study: The “Accidental” Insider

We focus on the “tired employee,” not just the “hacker.” In 2026, 90% of data loss is caused by the “Accidental Insider.” This requires Risk-Adaptive Protection: security that changes based on user behavior scores.

Video Resource ▶ Watch: Microsoft Purview Foundations — Securing the Data Estate
Quick Quiz 1

Why is “Shadow AI” the primary leak vector in 2026?

Data entered into public LLMs becomes part of an unmanaged training corpus.
It significantly increases local hardware electricity costs.
Most AI tools automatically delete corporate data after 24 hours.
Hour 2

Module 2: Discovery & Classification 2.0

2.1 Finding “Dark Data” via DSPM

You cannot protect what you cannot see. Most companies have “Dark Data”—files sitting in old cloud buckets that no one remembers. Data Security Posture Management (DSPM) tools like Cyera and Wiz provide “agentless” discovery. They connect via API to map sensitive PII across AWS, Azure, and Snowflake in minutes.
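The agentless pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: the inventory and content samples below are hypothetical stand-ins for what a DSPM tool would pull from a cloud provider's listing and sampling APIs.

```python
import re

# Hypothetical inventory, standing in for a cloud provider's object-listing API.
# A real DSPM tool would page through buckets via API calls, with no agent installed.
INVENTORY = {
    "s3://legacy-exports": ["report_2019.csv", "customers.csv"],
    "az://hr-archive": ["payroll_old.txt"],
}

# A content sample per object (a real scanner samples bytes over the same APIs).
SAMPLES = {
    "customers.csv": "alice@example.com,123-45-6789",
    "report_2019.csv": "quarterly revenue figures",
    "payroll_old.txt": "bob@example.com salary 90000",
}

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_dark_data(inventory, samples):
    """Return (bucket, object, pii_type) findings for forgotten data stores."""
    findings = []
    for bucket, objects in inventory.items():
        for obj in objects:
            text = samples.get(obj, "")
            for pii_type, pattern in PII_PATTERNS.items():
                if pattern.search(text):
                    findings.append((bucket, obj, pii_type))
    return findings
```

The point of the sketch is the shape of the workflow: enumerate via API, sample, classify. No endpoint agent ever touches the data store.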

2.2 Semantic AI vs. Legacy Regex

Legacy DLP (2020) used pattern matching (e.g., 16 digits for a credit card). 2026 DLP uses Semantic AI to understand intent and meaning. It knows a document describes a trade secret even if specific patterns aren’t present.
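The gap between the two approaches is easy to demonstrate. Below, a toy bag-of-words cosine similarity stands in for the learned embeddings a real semantic engine would use; the seed description and threshold are made up for illustration. A trade-secret document with no card numbers sails past the legacy regex but trips the semantic check.

```python
import math
import re
from collections import Counter

CARD_RE = re.compile(r"\b(?:\d[ -]?){16}\b")  # legacy pattern: 16 digits

def tokens(text):
    """Bag-of-words token counts (a crude stand-in for an embedding)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical seed describing a trade secret; real systems learn this.
SEED = tokens("proprietary alloy formula process temperature tolerances confidential")

doc = "Our proprietary alloy process holds temperature tolerances within 2 degrees."

regex_hit = bool(CARD_RE.search(doc))           # no 16-digit pattern present
semantic_hit = cosine(tokens(doc), SEED) > 0.3  # meaning overlaps the seed
```

The legacy engine sees nothing to match; the similarity check flags the document because its vocabulary overlaps the sensitive topic, which is the essence of intent-and-meaning classification.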

2.3 Data Lineage: The Provenance Powerhouse

Tracking the “DNA” of a file. Cyberhaven and Netskope One utilize Data Lineage to watch the data’s journey. If data starts in a “Top Secret” record and is copied into a new file, the new file inherits the “Top Secret” protection automatically.
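The inheritance rule can be modeled as a tiny provenance graph. This is an assumed model, not Cyberhaven's or Netskope's actual implementation: each derived artifact records its parents and inherits the strictest classification anywhere in its ancestry.

```python
# Classification levels, ordered from least to most restrictive.
LEVELS = {"Public": 0, "Internal": 1, "Top Secret": 2}

class Artifact:
    """A file or record whose label follows its content, not its container."""

    def __init__(self, name, label="Public", parents=()):
        self.name = name
        self.parents = list(parents)
        # Inherit the highest classification found among the parents.
        inherited = max((p.label for p in parents), key=LEVELS.get, default=label)
        self.label = max(label, inherited, key=LEVELS.get)

record = Artifact("crm_record.db", label="Top Secret")
excerpt = Artifact("notes.txt", parents=[record])    # copy/paste from the record
deck = Artifact("pitch.pptx", parents=[excerpt])     # derived from the derivative
```

Even the second-generation copy carries the original label, because protection travels with provenance rather than with any one file.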

Vendor Demo ▶ Watch: Cyera Multi-Cloud Discovery — Finding Dark Data
Quick Quiz 2

What is the main benefit of Data Lineage over static sensitivity labels?

It tracks the provenance of the data content itself, ensuring protections persist regardless of format.
It prevents users from taking screenshots of sensitive data.
It automatically encrypts the physical server hardware.
Hour 3

Module 3: Mechanisms of Control

3.1 The “AI Firewall” & Real-Time Redaction

Intercepting the “Prompt” and cleaning it before the AI processes it. Tools like Nightfall AI act as a proxy. When a user clicks “Submit” on an AI prompt, the tool redacts PII or secrets in milliseconds, cleaning the data “in-flight.”

3.2 Risk-Adaptive Protection (RAP)

Security is no longer binary. Forcepoint and CrowdStrike assign users a dynamic “Risk Score.” If behavior becomes suspicious—such as a user visiting job boards then downloading unusual volumes of data—the system escalates from “Audit” to “Block” automatically.

3.3 Cloud-Native Enforcement

Securing the “SaaS Side-Door.” Tools like Wiz monitor app-to-app integrations to detect when an unvetted third-party app has gained access to sensitive repositories through an over-privileged OAuth token.
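Detecting that side-door amounts to auditing grants against a sensitive-scope list. The scope names and grant records below are made up for illustration; a real tool would pull grants from each SaaS provider's admin API.

```python
# Hypothetical scopes considered sensitive (not a real provider's names).
SENSITIVE_SCOPES = {"files.read_all", "mail.read_all", "admin.directory"}

# Hypothetical third-party OAuth grants pulled from a SaaS admin API.
GRANTS = [
    {"app": "calendar-helper", "scopes": {"calendar.read"}},
    {"app": "pdf-exporter", "scopes": {"files.read_all", "profile"}},
]

def overprivileged(grants, sensitive=SENSITIVE_SCOPES):
    """Flag apps whose granted scopes intersect the sensitive set."""
    return [g["app"] for g in grants if g["scopes"] & sensitive]
```

The calendar helper passes; the PDF exporter is flagged because one broad scope gives it standing access to every file in the tenant.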

Technical Tutorial ▶ Watch: Nightfall AI — Redaction Policies for GenAI
Quick Quiz 3

What is the primary role of an AI Firewall?

It redacts sensitive entities (PII, secrets) from prompts before they are received by an LLM.
It blocks all internet access on a corporate laptop.
It makes AI responses generated by LLMs 100% accurate.
Hour 4

Module 4: Governance, Ethics & Response

4.1 Global Regulations: EU AI Act & GDPR 2.0

Compliance in 2026 is a living system. You must provide an “audit trail” for AI training data. Varonis and OneTrust automate this by mapping data sovereignty—ensuring specific citizen data stays on local servers.

4.2 The “Coaching” Framework

Security as an Educator, not a Gatekeeper. We move from “Hard Blocks” to Adaptive Coaching. Instead of “Access Denied,” we use prompts like: “You are sharing financials with an unmanaged AI. Would you like to use our Secure Copilot instead?”

4.3 Surgical Remediation

When a leak occurs, we no longer “shut down the system.” Using Microsoft Purview, security teams can perform Surgical Remediation—revoking access to a single leaked file across all shared instances (Email, Teams, OneDrive) globally with one click.
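The "one click, every instance" idea can be sketched generically. The share index below is a hypothetical data structure, not Microsoft Purview's actual API; the point is revoking a single file's instances while leaving everything else running.

```python
# Hypothetical index of where each file has been shared.
SHARE_INDEX = {
    "file-123": [
        ("email", "attachment://msg-9"),
        ("teams", "chat://eng-channel"),
        ("onedrive", "link://abc"),
    ],
    "file-456": [("onedrive", "link://xyz")],
}

def revoke_everywhere(file_id, index):
    """Remove every shared instance of one file; touch nothing else."""
    revoked = index.pop(file_id, [])
    return [f"revoked {channel}:{ref}" for channel, ref in revoked]

actions = revoke_everywhere("file-123", SHARE_INDEX)
```

The leaked file disappears from email, Teams, and OneDrive in one operation, while the unrelated file's shares survive untouched; that selectivity is what makes the remediation "surgical."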

Capstone Case Study ▶ Watch: Microsoft Purview — 2026 Incident Response Simulation
Quick Quiz 4

How does “Adaptive Coaching” improve security culture?

It offers users safe alternatives instead of simply blocking their work.
It identifies which employees should be fired for making mistakes.

Vocabulary Flashcards

Key terms and their 2026 definitions.

Dark Data
Sensitive data residing in forgotten or unmanaged cloud silos (AWS, Azure buckets).
Data Lineage
Tracking the ‘DNA’ of data so protection follows it even if content is copied/pasted.
Prompt Redaction
The real-time cleaning of PII/Secrets from an AI prompt before it reaches the LLM.
DSPM
Data Security Posture Management: Continuous agentless discovery of cloud data risks.

Tool-to-Task Match

Match the security tool to its primary functional strength.

Cyberhaven → Data Lineage: tracking a file’s provenance as content is copied between apps.
Nightfall AI → Prompt Redaction: cleaning PII and secrets from GenAI prompts in-flight.
Cyera → DSPM: agentless discovery of sensitive data across multi-cloud stores.