AI security today rests on three pillars: human oversight, agent identity, and operational isolation. Amazon's experience shows that, as AI grows more autonomous, companies must cut response times to seconds, trace every agent action, and require human authorization for critical operations to contain risks and attacks.
The New Paradigm of AI Security
At the HUMAN X Conference, Steve Schmidt explained a crucial point: AI security is no longer a theoretical issue, but an operational and urgent one.
The most tangible evolution is this:
- less experienced attackers become more effective thanks to AI
- advanced actors scale attacks on multiple fronts simultaneously
- response time shifts from hours to minutes or seconds
In summary: AI accelerates everything — attacks, defense, complexity.
This means that companies can no longer rely on traditional security models.
How AI is Changing the Threat Landscape
What Does It Really Mean?
AI democratizes offensive capabilities.
Previously, there was a clear hierarchy:
- “script kiddies” with limited skills
- highly sophisticated state actors
Today this difference is narrowing.
Concrete Impact
- More frequent and large-scale attacks
- Larger attack surface
- Need for near real-time response
The key point: it is not only the quality of attacks that is rising, but also their speed and volume.
The Dual Risk of AI Security
Question: which is more dangerous, AI-powered attacks or the internal use of AI?
Answer: both.
According to Schmidt, the risk is twofold:
- More Powerful Attackers Thanks to AI
- New Internal Risks (shadow AI)
Many employees use AI tools without central control, creating:
- unmonitored access
- exposure of sensitive data
- loss of governance
This phenomenon is known as shadow AI.
Agent Identity: The Foundation of AI Security
Key Definition
An AI agent is an autonomous software entity that acts on behalf of a user or system.
Amazon has introduced an innovative concept:
assigning a unique identity to AI agents
Why It Is Fundamental
- Full traceability of actions
- Audit and forensics
- Regulatory compliance
In practice:
“This human caused this agent to perform this action on this data.”
Strategic Implication
Companies must treat agents as:
- privileged users
- auditable systems
- regulated entities
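The traceability chain described above ("this human caused this agent to perform this action on this data") can be sketched as a structured audit record. This is a minimal illustration, not Amazon's actual schema; all field and class names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass(frozen=True)
class AgentAction:
    """One fully attributed agent action: human -> agent -> action -> data."""
    human_id: str    # the person the agent acts on behalf of
    agent_id: str    # unique identity assigned to this agent
    action: str      # what the agent did
    data_ref: str    # which data it touched
    action_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def audit_line(a: AgentAction) -> str:
    """Render the record as the sentence auditors need to reconstruct."""
    return (f"Human {a.human_id} caused agent {a.agent_id} "
            f"to perform {a.action} on {a.data_ref}")

record = AgentAction("alice", "agent-report-42", "read", "s3://finance/q3.csv")
print(audit_line(record))
```

Because each record carries a unique `action_id` and timestamp, forensics can reconstruct exactly who triggered what, when, and on which data.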
Containerization: Controlling AI Instead of Trusting It
Problem
An AI agent with full system access can:
- read all data
- modify infrastructures
- cause critical damage
Amazon Solution
Agents must operate in isolated environments (containers or VMs)
How it works
- every action that “exits” the container is tracked
- permissions are temporary and limited
- requests are validated by other models
In Summary
Modern AI security is based on isolation + observability.
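The "isolation + observability" model can be sketched as a boundary object: every action that tries to leave the sandbox is logged, and permissions are both scoped and time-limited. This is an illustrative sketch, not a real container runtime; the class and method names are assumptions.

```python
import time

class SandboxBoundary:
    """Sketch of 'isolation + observability': every action crossing the
    sandbox boundary is logged, and permissions are scoped and expire."""

    def __init__(self, allowed: set, ttl_seconds: float):
        self._allowed = allowed
        self._expires = time.monotonic() + ttl_seconds  # temporary grant
        self.log = []                                   # boundary audit trail

    def request(self, action: str) -> bool:
        self.log.append(action)              # track every exit attempt
        if time.monotonic() > self._expires:
            return False                     # grant has expired
        return action in self._allowed       # limited permissions

box = SandboxBoundary({"read:reports"}, ttl_seconds=60)
print(box.request("read:reports"))   # True: allowed and within TTL
print(box.request("write:infra"))    # False: outside the granted scope
print(box.log)                       # both attempts were recorded
```

Note that the denied action is still logged: observability covers attempts, not just successes.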
Guardrail and Risk of Manipulation
What are guardrails?
Guardrails are rules and constraints that guide the behavior of AI agents.
The Problem
If compromised, they can:
- push the agent beyond its limits
- cause harmful actions
- become an attack vector
👉 This is a new type of attack surface.
Strategic Insight
Companies must:
- protect guardrails as critical assets
- ensure integrity and tamper-proofing
- continuously validate the behavior of agents
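Treating guardrails as critical assets means, at minimum, detecting tampering. A minimal sketch of integrity checking via a content hash, assuming guardrails are stored as a JSON-serializable config (the rule names are illustrative):

```python
import hashlib
import json

def fingerprint(guardrails: dict) -> str:
    """Hash a canonical serialization of the guardrail config
    so any modification is detectable."""
    canonical = json.dumps(guardrails, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

rules = {"max_spend_usd": 100, "forbidden_actions": ["delete_db"]}
baseline = fingerprint(rules)  # stored out-of-band at deploy time

# Before every agent run, re-check integrity against the baseline:
tampered = dict(rules, max_spend_usd=1_000_000)
print(fingerprint(rules) == baseline)      # True: untouched config passes
print(fingerprint(tampered) == baseline)   # False: modification is caught
```

The baseline hash must live outside the agent's reach; otherwise an attacker who can rewrite the guardrails can rewrite the fingerprint too.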
Human-in-the-loop: Why Human Oversight Remains Central
Definition
Human-in-the-loop means that a human must approve critical actions executed by AI systems.
Amazon Case
Contingent authorization system:
- two people must approve sensitive operations
- hardware authentication (FIDO2)
- automatic block without approval
Application to AI
The same principle is applied to agents:
- checkpoints external to the agent
- independent validation
- inability to self-authorize
The most important thing is: AI must never have full control over critical decisions.
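The two-person rule with no self-authorization can be sketched as follows. This is an illustrative model of the principle, not Amazon's system; class and parameter names are assumptions.

```python
class CriticalAction:
    """Two-person rule: a sensitive action proceeds only after two
    distinct approvers sign off, and the requester cannot be one of them."""

    def __init__(self, requested_by: str, required_approvals: int = 2):
        self.requested_by = requested_by
        self.required = required_approvals
        self.approvers = set()

    def approve(self, approver: str) -> None:
        if approver == self.requested_by:
            raise PermissionError("requester cannot self-authorize")
        self.approvers.add(approver)  # a set: the same person counts once

    def authorized(self) -> bool:
        return len(self.approvers) >= self.required

action = CriticalAction(requested_by="agent-payments-7")
action.approve("alice")
print(action.authorized())   # False: only one approval so far
action.approve("bob")
print(action.authorized())   # True: two independent humans approved
```

Without the required approvals, the default is an automatic block: `authorized()` stays `False`, mirroring the "automatic block without approval" rule above.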
AI Security for Startups: Immediate Actions to Take
Question: Where to start?
Operational Response
According to Schmidt, the priorities are:
1. Agent Inventory
- what agents exist
- where they are installed
- what they do
2. Data Verification
- where the data resides
- who uses it
- how it is transferred
3. Data Labeling
- sensitive data vs non-sensitive data
- classification from the start
This is crucial because:
labeling data afterward is extremely costly
4. Isolation of Agents
- container or VM
- limited access
- auditability
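"Classification from the start" means labeling data at the moment it is written, not in a later cleanup project. A minimal sketch, assuming a simple two-level policy; the field names and sensitivity rules are illustrative, not a real standard:

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    SENSITIVE = "sensitive"

@dataclass
class Record:
    """Data is stored together with its label from day one."""
    payload: dict
    label: Sensitivity

# Illustrative policy: which field names taint a record as sensitive.
SENSITIVE_FIELDS = {"email", "ssn", "card_number"}

def ingest(payload: dict) -> Record:
    """Label at the point of ingestion: any sensitive field
    marks the whole record as sensitive."""
    label = (Sensitivity.SENSITIVE
             if SENSITIVE_FIELDS & payload.keys()
             else Sensitivity.PUBLIC)
    return Record(payload, label)

print(ingest({"email": "a@b.com"}).label)   # Sensitivity.SENSITIVE
print(ingest({"page_views": 12}).label)     # Sensitivity.PUBLIC
```

Once every record carries a label, downstream controls (agent access, isolation, auditing) can key off it, which is exactly why retrofitting labels later is so costly.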
The Real Mistake to Avoid
Many companies think:
“We need to block AI”
Schmidt is clear:
this approach is wrong
The Correct Strategy
- accept the use of AI
- make it visible
- monitor it
- govern it
In summary:
do not stop AI, but make it safe.
Future Trend: AI Security = Continuous Feedback
A key insight emerged from the HUMAN X Conference:
👉 security is no longer applied after the fact
New Model
- immediate feedback
- real-time validation
- continuous improvement
Each action becomes:
- a security signal
- training data
- an opportunity for improvement
Strategic Implications for Companies
1. Security Becomes Distributed
It’s no longer just the CISO’s responsibility:
- each team must be accountable
2. AI Increases Operational Complexity
This requires:
- visibility
- governance
- automation
3. Human Oversight Remains Central
Despite the growing autonomy:
- critical decisions must remain human
FAQ – AI Security
1. What is AI security?
AI security is the set of practices, technologies, and processes that protect artificial intelligence-based systems from abuse, attacks, and operational errors.
2. Why does AI increase security risks?
Because it amplifies attackers’ capabilities, increases the speed of attacks, and introduces new internal vectors such as autonomous agents and shadow AI.
3. How can AI agents be controlled?
Through:
- unique identities
- containerization
- auditing of actions
- human-in-the-loop
4. What is the priority for companies today?
Understand:
- which agents they are using
- where the data is located
- what types of access exist
5. Should startups have a CISO?
Not necessarily. It is more important that everyone in the company is responsible for security, especially in handling sensitive data.
Conclusion
AI security is no longer optional.
It is a system that combines:
- technology (containers, identity, auditing)
- processes (authorizations, governance)
- culture (distributed responsibility)
👉 This means that the future is not “AI vs human”, but:
AI under human control.

