Agentic Game Production with Unity
Course Overview
Games are traditionally hardcoded loops. Agentic Game Production turns development into a living universe where characters and systems make autonomous decisions. In this course, you will learn the Agentic Steps and Setup used in the AMNIA Learning Portal (Cloud Magic Technology's internal project and learning portal dedicated to advanced game development and AI integration), mastering the Unity ML-Agents Toolkit (Unity Machine Learning Agents, a toolkit that enables games and simulations to serve as environments for training intelligent agents) and the Orchestrator Pattern to build intelligent, adaptive game worlds.
⚡ Instant QuickStart: The AI Brain in 60s
Get an agent moving autonomously in Unity.
# 1. Install ML-Agents (Unity Editor)
# Go to Window > Package Manager > Add package by name: com.unity.ml-agents
# 2. Add the Brain Component
# Add 'Agent' and 'Behavior Parameters' components to your 3D cube.
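Once those two components are on the cube, a minimal `Agent` subclass is enough to close the loop. This is a hedged sketch, not the course's reference script: the class name is illustrative, and it assumes the standard `Unity.MLAgents` namespaces from the `com.unity.ml-agents` package.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

// Minimal autonomous mover: attach to the cube alongside Behavior Parameters.
public class QuickStartAgent : Agent
{
    public override void OnEpisodeBegin()
    {
        // Start each episode from the origin.
        transform.localPosition = Vector3.zero;
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Two continuous actions drive movement on the X/Z plane.
        var move = new Vector3(actions.ContinuousActions[0], 0f,
                               actions.ContinuousActions[1]);
        transform.localPosition += move * Time.deltaTime;
    }
}
```

Set the Behavior Parameters component to 2 continuous actions to match this script.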
Learning Objectives
- Design Goal-Oriented AI using Reinforcement Learning (RL).
- Master the Agentic Workflow: Integrating LLMs for procedural narration and level generation.
- Implement AMNIA-standard Observation Layers (what the agent “sees” in 3D space).
- Automate Balance Testing using agents that play your game millions of times.
Prerequisite Rituals
Verify your circle before starting
Technical Deep Dive: The Reward Ritual
In Agentic Game Dev, you don’t code how an agent moves; you code why it should move.
- Observations: Data the brain receives (Raycasts, Velocity, Position).
- Actions: Forces the brain applies (Move Forward, Jump, Rotate).
- Rewards: The “Dopamine” of the AI. Positive rewards for reaching goals, negative for falling off edges.
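Before training anything, you can sanity-check this triad by hand: ML-Agents provides a `Heuristic` hook that lets you supply the actions yourself from the keyboard instead of a trained brain, so you can verify that actions move the body and rewards fire when expected. A sketch with an illustrative class name and axis mapping:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

// Debugging aid: drive the agent manually to test the action/reward loop.
public class KeyboardGladiator : Agent
{
    public override void Heuristic(in ActionBuffers actionsOut)
    {
        var continuous = actionsOut.ContinuousActions;
        continuous[0] = Input.GetAxis("Horizontal"); // agent's X action
        continuous[1] = Input.GetAxis("Vertical");   // agent's Z action
    }
}
```

In the Behavior Parameters component, set Behavior Type to “Heuristic Only” to use this instead of a model.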
Walkthrough: The “Infinite Gladiator” Agent
Step 1: The Observation Ritual
Tell the agent about its surroundings using CollectObservations.
public override void CollectObservations(VectorSensor sensor) {
    sensor.AddObservation(this.transform.localPosition); // 3 values
    sensor.AddObservation(target.localPosition);         // 3 values
    sensor.AddObservation(rBody.velocity.x);             // 1 value
    sensor.AddObservation(rBody.velocity.z);             // 1 value
}
Step 2: The Action & Reward Bridge
Turn AI decisions into physical movements and reward success.
public override void OnActionReceived(ActionBuffers actionBuffers) {
    // Convert AI thinking into 3D forces
    Vector3 controlSignal = Vector3.zero;
    controlSignal.x = actionBuffers.ContinuousActions[0];
    controlSignal.z = actionBuffers.ContinuousActions[1];
    rBody.AddForce(controlSignal * forceMultiplier);

    // Reward: Distance check
    float distanceToTarget = Vector3.Distance(this.transform.localPosition, target.localPosition);
    if (distanceToTarget < 1.42f) {
        SetReward(1.0f); // Massive success!
        EndEpisode();    // Reset for next iteration
    }

    // Fell off the platform
    if (this.transform.localPosition.y < 0) {
        EndEpisode(); // Failure
    }
}
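`EndEpisode()` only matters if something resets the arena afterwards. A sketch of the companion `OnEpisodeBegin` override, assuming the same `rBody` and `target` fields used above (the spawn coordinates are illustrative):

```csharp
public override void OnEpisodeBegin()
{
    // If the agent fell off, zero its momentum and put it back on the platform.
    if (this.transform.localPosition.y < 0)
    {
        rBody.angularVelocity = Vector3.zero;
        rBody.velocity = Vector3.zero;
        this.transform.localPosition = new Vector3(0, 0.5f, 0);
    }

    // Move the target to a new random spot so the brain generalizes
    // instead of memorizing one path.
    target.localPosition = new Vector3(Random.value * 8 - 4, 0.5f,
                                       Random.value * 8 - 4);
}
```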
Step 3: Training the Soul
Use the command line to start the learning process.
mlagents-learn config/trainer_config.yaml --run-id=GladiatorTest_01
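That command needs a matching config file. A minimal `trainer_config.yaml` sketch using PPO: the behavior name must match the Behavior Name field in the Unity Inspector, and every hyperparameter here is an illustrative starting point, not a tuned value.

```yaml
behaviors:
  GladiatorBehavior:        # must match Behavior Parameters > Behavior Name
    trainer_type: ppo
    hyperparameters:
      batch_size: 64
      buffer_size: 2048
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 500000
```

Press Play in the Unity Editor when the terminal prompts you, and training begins.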
The Orchestrator Pattern: Complex Worlds
For a full game, one “Brain” isn’t enough.
- Micro-Agents: Handle specific tasks (e.g., “Navigating to a door”).
- The Orchestrator: A high-level C# script that monitors the game state and swaps between different trained AI models (Brains) depending on the situation.
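A hedged sketch of such an Orchestrator: `Agent.SetModel` is the ML-Agents API for hot-swapping a trained model at runtime, but the field names, behavior name, and the distance-based state check are all illustrative assumptions.

```csharp
using Unity.Barracuda;
using Unity.MLAgents;
using UnityEngine;

// High-level controller: monitors game state and swaps trained brains.
public class Orchestrator : MonoBehaviour
{
    public Agent enemy;
    public NNModel patrolBrain;   // micro-agent trained for navigation
    public NNModel combatBrain;   // micro-agent trained for fighting
    public Transform player;
    public float aggroRange = 10f;

    NNModel current;

    void Update()
    {
        bool playerNear =
            Vector3.Distance(enemy.transform.position, player.position) < aggroRange;
        Swap(playerNear ? combatBrain : patrolBrain);
    }

    void Swap(NNModel next)
    {
        if (current == next) return; // only reinitialize on an actual change
        current = next;
        enemy.SetModel("EnemyBehavior", next);
    }
}
```

Guarding the swap avoids reloading the model every frame; each brain is trained separately and dropped into the Inspector as an asset.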
Capstone Project: The Adaptive Enemy
Build a functional combat arena.
- Create an enemy Agent that uses Raycasts to find the player.
- Train the agent to “kite” the player—staying within range but avoiding getting hit.
- Implement a Balance Agent that plays through your level and reports the “Success Rate” based on different player health settings.
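For the third requirement, one simple approach is a reporter that tallies automated playthroughs. This is a sketch with illustrative names: your agent would call `RecordEpisode` wherever it currently calls `EndEpisode()`.

```csharp
using UnityEngine;

// Tallies win/loss outcomes across automated playthroughs
// and periodically logs the Success Rate.
public class BalanceReporter : MonoBehaviour
{
    int wins;
    int episodes;

    public void RecordEpisode(bool won)
    {
        episodes++;
        if (won) wins++;
        if (episodes % 100 == 0)
        {
            Debug.Log($"Success rate after {episodes} episodes: " +
                      $"{100f * wins / episodes:F1}%");
        }
    }
}
```

Run it once per player-health setting to compare how difficulty shifts the rate.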
The world is no longer yours to script; it is yours to observe. Your game is alive.