AUTOPILOT: Yes/No
Protect us from what we want
Date: Thursday, October 2 and Friday, October 3, 2025
Venue: Hilton Hotel, Apollolaan 138, 1077 BG, Amsterdam

Overview
– How much agency should we entrust to algorithms?
– When do we engage autopilot, and when do we take back control?
– How do we balance efficiency and authenticity, automation and accountability?
We are living in an age of hyper-automation, where decisions once made by humans are increasingly delegated to intelligent systems. AI is no longer a futuristic buzzword; it is becoming a cornerstone of modern business. As organizations adopt AI-driven tools, we must examine their impact on trust, control, and the essence of decision-making.
Why This Matters Now
– Are some decisions too important to leave to machines?
– How do we balance machine reasoning with human judgment?
– What does an Agentic Augmented Organization look like?
The rise of AI mirrors historic transformations during industrialization, reshaping power dynamics between individuals, organizations, and communities. Today, we face similar stakes as AI systems grow smarter, faster, and more pervasive. This isn’t just about technology—it’s about preserving humanity and maintaining meaningful human connections in a world increasingly dominated by automation.
Previous Executive Summits
Check out the videos, presentations and recordings from our previous summits.
The Discussion Ahead
1. Trust vs. Control: How can we ensure algorithms serve us and not the other way around?
2. Legal Agency of AI: Could intelligent systems gain legal personhood, and what would that mean?
3. Balancing Efficiency and Authenticity: How can organizations sustain human connections in a hyper-automated world?
Why You Should Attend
AUTOPILOT: Yes or No?
The choice is yours. Let’s explore the consequences together.
State of the Art Sessions
To RPA, or Not to RPA: Agentic AI and the Next Wave of Automation
Bot scripts still click buttons, but agentic AI weighs options, argues constructively, and fixes ugly edge cases RPA can’t handle. In this rapid-fire showdown we map where old-school RPA still shines and where autonomous agents crush it on cost, speed, and resilience. Watch an agent swarm run through a real-world scenario—policy checks, exception handling, ethics rules—with minimal human intervention. Leave with a simple decision matrix and guardrails for swapping brittle bots for self-directed doers when it really matters.
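For flavor, here is a minimal, purely illustrative Python sketch of the kind of routing rule such a decision matrix might encode; the criteria, thresholds, and example tasks are assumptions made for this page, not the matrix presented in the session.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    is_rule_based: bool    # deterministic steps against a stable UI or API
    exception_rate: float  # share of cases the scripted happy path cannot handle
    needs_judgment: bool   # requires weighing options or interpreting policy

def route(task: Task) -> str:
    # Coarse recommendation: keep classic RPA, hand off to an agent, or mix both.
    if task.is_rule_based and task.exception_rate < 0.05 and not task.needs_judgment:
        return "RPA: cheap, predictable, easy to audit"
    if task.needs_judgment or task.exception_rate >= 0.20:
        return "agent, with human-in-the-loop guardrails"
    return "hybrid: RPA for the happy path, an agent for the exceptions"

print(route(Task("invoice matching", True, 0.02, False)))
print(route(Task("claims triage", False, 0.30, True)))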
Inside the Agentic Squad: Humans x AI Teaming
Code reviews at 2 a.m.? Your AI teammate already handled them—and filed the Jira tickets. Step into an agentic squad where product owners, engineers, and autonomous agents plan, code, test, and deploy side-by-side. We’ll demo how bots mine backlogs for hidden dependencies, refactor legacy services to cloud-native, and guardrail every merge with self-tuning quality gates. Expect battle-tested patterns for spinning up AI-augmented feature teams, measuring uplift, and keeping human creativity in the driver’s seat while the agents grind through the grunt work.
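As a taste of what a self-tuning quality gate can mean in practice, here is a small, purely illustrative Python sketch: the gate compares a merge candidate's test coverage against a rolling baseline of recent builds. The metric, window size, and tolerance are assumptions for illustration, not the patterns demonstrated in the session.

from statistics import mean

def quality_gate(recent_coverage: list[float], candidate: float, slack: float = 0.02) -> bool:
    # Pass the merge only if coverage does not fall more than `slack`
    # below the rolling baseline of the last ten builds.
    baseline = mean(recent_coverage[-10:]) if recent_coverage else 0.0
    return candidate >= baseline - slack

history = [0.81, 0.82, 0.80, 0.83]
print(quality_gate(history, 0.82))  # True: within tolerance, the merge proceeds
print(quality_gate(history, 0.70))  # False: regression, the agent blocks the merge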
Trust in Motion: Generative / Agentic AI for Testing the “Untestable”
Why write brittle test cases when agents generate near-perfect ones—faster, cheaper, and with sharper intent? In this session, we show how QA teams are shifting from static scripts to agentic workflows, using Gen AI not as a tool, but as a teammate. Requirements, test cases, and defects flow through chains of smart agents—boosting clarity, coverage, and delivery speed. Then comes the twist: once AI is building your software, how do you test? We share how “tester-agents” detect drift, surface confabulations, flag prompt hacks, and trigger synthetic user flows—scoring outputs with probabilistic oracles and real-time risk meters. All wired into your CI/CD, all happening in motion. Walk away with a playbook—patterns, guardrails, and metrics—to turn stochastic chaos into board-level confidence.
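To make "probabilistic oracle" concrete, here is a tiny, purely illustrative Python sketch of one way a tester-agent could score a non-deterministic system: sample it several times and measure how often the runs agree. The generate_answer() stand-in, the sample size, and the 0.8 threshold are assumptions made for this page, not the session's actual tooling.

import random
from collections import Counter

def generate_answer(prompt: str) -> str:
    # Stand-in for the stochastic (LLM-driven) system under test.
    return random.choice(["42", "42", "42", "forty-two"])

def consistency_score(prompt: str, n: int = 10) -> float:
    # Sample the system n times and return the share of runs that agree
    # with the most common answer; low scores signal drift or confabulation.
    answers = [generate_answer(prompt) for _ in range(n)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / n

score = consistency_score("What is 6 * 7?")
print(f"consistency = {score:.2f}")
if score < 0.8:
    print("risk meter: output too unstable, flag for human review")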
Speakers

Moran Cerf

In his recent work, he helps leaders (notably the U.S. government) apply key lessons from decision science and neuroscience to critical choices, such as the nuclear launch protocols.
Previously, Prof. Cerf’s main work involved studying patients implanted with neural devices during brain surgery in order to decode decisions and dreams. That work gave rise to contemporary advances in neuroscience and its applications, such as brain-machine interfaces.
Cerf spent a decade working in the Israeli cybersecurity space as a hacker and had an extensive career in the tech industry.
Cerf has published papers in academic journals such as Nature and Science, as well as in popular science outlets such as Scientific American Mind, Wired, and New Scientist. He has written several books, most recently “Brain Imaging: An Illustrated Guide to the Future of Neuroscience”, and his research has been covered by numerous media and cultural outlets (BBC, Bloomberg, NPR, Time, CNN, Fox, Netflix Explained, NY Times, and dozens of others). He has been featured in venues such as the Venice Art Biennial and China’s Art, Science and Technology Association, and has contributed to magazines such as Forbes, The Atlantic, and Inc. He has made much of his research accessible to the public through talks at PopTech, TED, and TEDx (13 TED/TEDx talks, billed as the “most TEDx talks”), as well as Google Zeitgeist, DLD, and others, gathering millions of views and a large following.

James McQuivey

Leading in times of “Scarcity in Abundance”

Andrew Keen

Reserve Your Spot Now
Join us in defining the future of decision-making and innovation at the Sogeti Executive Summit.
If you have any questions, please contact:

Margo Langeweg
Executive Event Planner, Sogeti
Phone: +31622546440