Saturday, December 21, 2013

Agent Movement: Movement Manager

Surprise! I return from my deep slumber to bring you another edition of Agents of Video Games. These last two weeks were, to put it modestly, hectic. I had final projects and tests out the wazoo. And my wazoo still hurts.

If I remember correctly, the last thing we talked about was pursuit and evasion, which are smart extensions to the simpler seek and flee behaviors. Just slap these movement behaviors onto agents that you want to be either "melee" or "prey" type creatures. They work quite well for that.

This week, I bring to you what I believe to be the culmination of the gamedevtuts+ tutorial series.

Agent Movement/Steering: Movement Manager

First off, take a look at the original post. I always say that the original authors do the best job explaining these topics, and this week is definitely no exception.

Second off, try out my code from my most glorious Github. Make sure that it works. You know, the usual.

The Movement Manager is the thing that brings all of these basic movements together. It can combine several different movements, such as seeking in one direction and wandering in another, into a single steering vector that gets added to the current velocity. Further, instead of having to code out each individual type of movement, the Movement Manager will take care of it for you.

When coding an agent that uses the Movement Manager, there are very few things that need to be done:
  • Add logic for deciding what to do.
  • Make calls to seek, flee, etc.
  • Update the Movement Manager.
The code up on the Github gives an appropriate demonstration of this. All that the coder has to do is decide what to move towards/away from.
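To make that concrete, here's a minimal Python sketch of the idea (my actual repo is XNA/C#, so the class name `MovementManager`, the constants, and the dictionary-based agent here are my own illustration, not the repo's API). Each frame, the behaviors accumulate into one steering vector, and `update()` applies it:

```python
import math

MAX_FORCE = 0.5   # limit on the combined steering per frame (my own constant)
MAX_SPEED = 3.0   # speed cap for the agent (my own constant)

def truncate(v, limit):
    """Clamp a 2D vector's length to `limit`."""
    x, y = v
    length = math.hypot(x, y)
    if length > limit and length > 0:
        scale = limit / length
        return (x * scale, y * scale)
    return v

class MovementManager:
    def __init__(self, agent):
        self.agent = agent
        self.steering = (0.0, 0.0)   # accumulated steering for this frame

    def _accumulate(self, desired):
        # steering += desired - current velocity
        vx, vy = self.agent["vel"]
        sx, sy = self.steering
        self.steering = (sx + desired[0] - vx, sy + desired[1] - vy)

    def seek(self, target):
        ax, ay = self.agent["pos"]
        self._accumulate(truncate((target[0] - ax, target[1] - ay), MAX_SPEED))

    def flee(self, target):
        ax, ay = self.agent["pos"]
        self._accumulate(truncate((ax - target[0], ay - target[1]), MAX_SPEED))

    def update(self):
        # Limit the combined steering, apply it to the velocity, then move.
        steering = truncate(self.steering, MAX_FORCE)
        vx, vy = self.agent["vel"]
        vel = truncate((vx + steering[0], vy + steering[1]), MAX_SPEED)
        px, py = self.agent["pos"]
        self.agent["vel"] = vel
        self.agent["pos"] = (px + vel[0], py + vel[1])
        self.steering = (0.0, 0.0)   # reset for the next frame

# Per-frame usage: decide what to do, call the behaviors, update once.
agent = {"pos": (0.0, 0.0), "vel": (0.0, 0.0)}
mm = MovementManager(agent)
mm.seek((100.0, 0.0))   # pull toward one point...
mm.flee((0.0, 50.0))    # ...while pushing away from another
mm.update()
```

The agent ends up moving along a blend of both pulls, which is exactly the "single steering vector" idea.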

This system is set up exactly like I always like to code things. It's compartmentalized, which means I can mix and match movements whenever I want. I can't say much more than that, really... Try it out yourself!

Saturday, November 30, 2013

Agent Movement: Pursuing and Evading

Aloha, readers! Welcome back to Agents of Video Games!

Last week we added a new complex behavior to our arsenal called wandering. It's pretty cool, and can be used along with other behaviors to make AI's seem more random than they would with just seeking or fleeing behavior alone. Go ahead and check that post out here if you haven't read it already.

Now that we have that little bit of business out of the way, ahem: BEHOLD! THE GREAT TOPIC AWAITS:

Agent Movement/Steering: Pursuing and Evading Behaviors

You can, and should, read the original article here.

Agents that pursue or evade are more responsive and accurate at catching a target than agents that seek or flee. This is because they steer towards a point that is slightly in front of their target, which allows them to predict where the target will be in the future. As you might remember from your days playing tag at recess, this is pretty much what real children do automatically. They look to where the target will be in the future, then they either seek or flee that future position.

Like wandering, pursuing and evading behaviors make a few calculations before applying the simple seek or flee movement. In this case, they calculate where the target agent will be after T game updates. The value of T can be either fixed or fluid, but fluid works better, as I'll explain in a bit. To get this future position, simply add the target's velocity multiplied by T to the target's position.
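In code, that calculation is one line. Here's a tiny Python sketch (my repo version is XNA; the function name and the fixed T of 10 updates are my own choices for illustration):

```python
T = 10  # number of updates to look ahead; fixed here for simplicity

def future_position(target_pos, target_vel, t=T):
    """future = target position + target velocity * T"""
    return (target_pos[0] + target_vel[0] * t,
            target_pos[1] + target_vel[1] * t)

# A target at (5, 5) moving right at 2 units per update:
print(future_position((5.0, 5.0), (2.0, 0.0)))  # (25.0, 5.0)
```

A pursuer then seeks `(25.0, 5.0)` instead of `(5.0, 5.0)`, and an evader flees it.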

A pursuing agent will seek towards this future position, and an evading agent will flee from this future position. It's a simple yet effective method of creating responsive agent movement towards a moving target. I would definitely use this for agents that need to harass the player in either direction, chasing or escaping, because players usually like to feel that a game's AI is effective and challenging.

Rather than using a fixed T value, an agent can be made even more effective by making T depend on the distance between the target's position and the agent's position. The agent does not need to predict where the target will be when they are close, so as the distance between the two decreases, the number of updates T should also decrease.
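One common way to get a fluid T is to divide the distance by the agent's max speed, so T roughly means "updates until I could reach the target." That exact formula is my assumption here, not something pinned down in this post; a Python sketch:

```python
import math

MAX_SPEED = 3.0  # my own constant for illustration

def dynamic_t(agent_pos, target_pos, max_speed=MAX_SPEED):
    """Fewer look-ahead updates the closer the agent gets to the target."""
    distance = math.hypot(target_pos[0] - agent_pos[0],
                          target_pos[1] - agent_pos[1])
    return distance / max_speed

# Far away: look well ahead.  Up close: barely predict at all.
print(dynamic_t((0, 0), (30, 0)))  # 10.0
print(dynamic_t((0, 0), (3, 0)))   # 1.0
```

Feed this T into the future-position calculation each frame and the prediction shrinks naturally as the chase closes in.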

This is something that I wish our team had thought about when programming Dot Wars, specifically for the game-type Survival. True, the Suicide Dot's AI is already pretty effective, but adding a pursuing behavior like this to them would have helped convince the player that they were under more of a threat.

Go ahead and take a look at the Github that has the new updated code for pursuit and evasion. You may notice that if you spawn a pursue agent and an evade agent, they both move in exactly the same way and never catch each other. Do not be alarmed by this; technically, that means that the base code is working just as it should be!

Saturday, November 23, 2013

Agent Movement: Wandering

Hello everybody!

Hopefully by now you all have skipped ahead of my painstakingly slow updates and just read all of gamedevtuts+'s series on agent movement. That way, you know, you might have actually gotten to the good part of the series with all the interesting movement types! We've already covered seeking, fleeing, and arriving behaviors and how they relate to the overarching steering principles. Now, this week, we cover the magnificent:

Agent Movement/Steering: Wandering Behaviors

Wandering is like the seeking and fleeing behaviors that we first studied because all it does is calculate a steering vector. In this case, the steering vector points to somewhere on the perimeter of a circle a certain distance away. This point is shifted every frame, so that when the calculated steering vector is added to the velocity, the result is not so different from the current velocity. This produces a fairly smooth side-to-side wandering motion.

To build the steering vector, a circle first needs to be "drawn" mathematically a certain distance away from the agent in the direction it is facing. The farther out this distance is from the agent, the stronger the wandering force will be. The same goes for the radius of this circle.

Once the circle is constructed, the agent needs to determine where to point to. This is where the wander angle comes in. This value is persistent from frame to frame, and is slightly modified each time to always make the agent change directions. Using the wander angle and the circle, it's easy to use maths and stuff to find a new point to gravitate towards.
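Here's roughly what that "maths and stuff" looks like in a Python sketch (my repo is XNA; the constant names and values below are my own tuning, not the article's):

```python
import math
import random

CIRCLE_DISTANCE = 20.0   # how far ahead of the agent the circle sits
CIRCLE_RADIUS = 8.0      # circle size; bigger = stronger wander force
ANGLE_CHANGE = 0.3       # max wander-angle jitter per frame (radians)

def wander_force(velocity, wander_angle):
    """Return (steering, new_wander_angle) for one frame of wandering."""
    vx, vy = velocity
    speed = math.hypot(vx, vy) or 1.0
    # Center of the circle, projected ahead along the current heading.
    cx = vx / speed * CIRCLE_DISTANCE
    cy = vy / speed * CIRCLE_DISTANCE
    # Displacement to a point on the circle's perimeter at the wander angle.
    dx = math.cos(wander_angle) * CIRCLE_RADIUS
    dy = math.sin(wander_angle) * CIRCLE_RADIUS
    # Nudge the persistent angle so next frame's target shifts slightly.
    new_angle = wander_angle + (random.random() - 0.5) * ANGLE_CHANGE
    return (cx + dx, cy + dy), new_angle

# An agent heading right at speed 2, wander angle starting at 0:
steer, angle = wander_force((2.0, 0.0), 0.0)
print(steer)  # (28.0, 0.0) -- circle center (20, 0) plus perimeter point (8, 0)
```

Because the angle only drifts a little each frame, consecutive steering vectors stay close together, which is exactly what keeps the path smooth instead of twitchy.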

Again, the article does a good job explaining the exact mathematics, as does my Github repository if you're the type that likes to read code.

This method of creating wandering behavior is really nice. I know I say that about pretty much all of these methods so far, but I gotta say it:

It's super nice.

I love adding randomness to my AI's whenever possible/practical, and this code is a great way to get it done. The best part about it is that I can stack this wandering on top of other behaviors, like seeking or fleeing, to create more complex behavior. It's brilliant.

Go ahead and run my code and you'll see what I mean. Mess around with the constants and see what sort of cool things you can get the agents to do. Maybe even try combining wandering with some of our other movements! You probably won't regret it.

Saturday, November 16, 2013

Agent Movement: Fleeing and Arriving

Welcome back, my loyal follower(s). Take a seat around the campfire.

It's cozy here.

This week, I'll continue talking about simple agent movement. As you may remember from last post, a good way to improve the movement of your AI is to switch from a simple discrete movement pattern to a seeking pattern. You should go check that out if you haven't already.

Today, we will take a look at the next tutorial in the gamedevtuts+ series on steering behaviors which has two sections:

Agent Movement/Steering: Fleeing and Arrival Behaviors

I'll talk about each section separately because they both showcase different aspects of fluid agent movement. As before, I have my own implementation in my brand-new super-secret public Github, so go check that out, too!

Fleeing is basically the same thing as seeking. This is nice, because it uses almost the same code, so... yeah, I copied and pasted it from one to the other. Not going to lie. The difference, thanks to vector mathematics, is simply in how you calculate the desired vector.

With a seeking agent, we had it calculate its desired vector by subtracting its position from its target position. Magically, if we subtract its target position from its position, we get fleeing behavior! It's really that simple, and my code demonstrates this nicely.
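The whole difference fits in two functions. A Python sketch (the repo code is XNA, and these function names are just mine), showing that only the subtraction order changes:

```python
def seek_desired(agent_pos, target_pos):
    """Desired vector toward the target: target - agent."""
    return (target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1])

def flee_desired(agent_pos, target_pos):
    """Desired vector away from the target: agent - target."""
    return (agent_pos[0] - target_pos[0], agent_pos[1] - target_pos[1])

print(seek_desired((0, 0), (10, 5)))  # (10, 5) -- points at the target
print(flee_desired((0, 0), (10, 5)))  # (-10, -5) -- points directly away
```

Same magnitude, opposite direction, which is why the copy-paste works so well.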

Fortunately for us, arrival behavior is also easy to add. It's not the same as fleeing or seeking behavior, though; it doesn't calculate a new steering vector for the agent to use. Instead, the code limits the desired velocity when the agent is within a certain distance of the target position. This limit scales linearly with how close the agent is to the target position, which makes it appear that the agent intelligently halts itself right on top of the target position.
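The linear scaling looks like this in a Python sketch (the slowing-radius and speed constants are my own picks for illustration, not from the repo):

```python
import math

MAX_SPEED = 4.0
SLOWING_RADIUS = 100.0  # inside this distance, the agent starts braking

def arrival_desired(agent_pos, target_pos):
    """Desired velocity that ramps down linearly near the target."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    distance = math.hypot(dx, dy)
    if distance == 0:
        return (0.0, 0.0)
    if distance < SLOWING_RADIUS:
        speed = MAX_SPEED * (distance / SLOWING_RADIUS)  # linear ramp-down
    else:
        speed = MAX_SPEED
    return (dx / distance * speed, dy / distance * speed)

print(arrival_desired((0, 0), (200, 0)))  # (4.0, 0.0) -- full speed, still far
print(arrival_desired((0, 0), (50, 0)))   # (2.0, 0.0) -- half speed inside radius
```

As the distance goes to zero, so does the desired speed, so the agent settles onto the target instead of orbiting it.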

This arrival behavior is the first step towards demonstrating some complex additions to the basic seek behavior. In the future, we will take a look at some more elegant ways of mixing and matching movements using a little thing called the movement manager. But more on that later! For now, just go make sure these two new behaviors work for you!

Today's post was a short one, I understand, but it's not too surprising when you think about how simple each new behavior is compared to the initial seek behavior. Next time though, things will get much weirder when we start to look at wandering behavior.

Friday, November 8, 2013

Agent Movement: Seeking

You have returned! Unless this is your first week, in which case... hi.

Last week, I introduced you to a set of tutorials about basic AI movement. Hopefully you took the time to look at them yourself so you can see how awesome they are. If not... well, it's too late for you. You are doomed.

Now we will examine the first tutorial in the series. You can find it here.

Agent Movement/Steering: Seeking Behaviors

Seeking is the most basic fluid movement that this series looks at. By seeking instead of simply moving in a direction, you can make agents move more realistically. From the top-down gaming perspective of this tutorial series, fluid movements in any direction are pretty important.

The article first highlights the importance of designing a steering-based movement system by showing you what it looks like when you don't use seeking movement. It causes the agent to move in discrete chunks rather than smooth curves. Take a look at the cute little Flash demo they've got there to see what I'm talking about.

As far as how to code in this seeking behavior goes, it's pretty simple. Just code up a basic "Agent" class with a few attributes:
  • Position: Coordinates on screen.
  • Velocity: The change in position for a unit time.
  • Mass: Used for regulating the effect of steering on the agent.
These are just the base attributes for an agent class. By setting the velocity to be non-zero, you can create the boring, simple movement described before by adding the velocity to the position every update call. To achieve the fluid seek movement that we want, we need to calculate a steering vector. Within an update call, calculate the following:

  • Desired Velocity: The direction you want the agent to move. In the tutorial, they move towards the mouse cursor.
  • Steering Vector: The vector to be added to the current velocity to move it towards the desired velocity. In a seeking movement, this is equal to the desired velocity minus the velocity.
It's really that easy. The tutorial itself has all the code you need to get this to work aside from the underlying rendering and agent management code.
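Putting those pieces together looks something like this Python sketch (my actual version is XNA, and the speed/force constants and dictionary-based agent here are my own illustration of the tutorial's recipe):

```python
import math

MAX_SPEED = 3.0   # cap on how fast the agent can move
MAX_FORCE = 0.4   # cap on how hard steering can turn the agent per frame

def truncate(v, limit):
    """Clamp a 2D vector's length to `limit`."""
    x, y = v
    length = math.hypot(x, y)
    if length > limit and length > 0:
        return (x * limit / length, y * limit / length)
    return v

def seek_update(agent, target):
    px, py = agent["pos"]
    # Desired velocity: straight at the target, at max speed.
    desired = truncate((target[0] - px, target[1] - py), MAX_SPEED)
    vx, vy = agent["vel"]
    # Steering vector: desired velocity minus current velocity,
    # limited and then scaled down by the agent's mass.
    sx, sy = truncate((desired[0] - vx, desired[1] - vy), MAX_FORCE)
    sx, sy = sx / agent["mass"], sy / agent["mass"]
    agent["vel"] = truncate((vx + sx, vy + sy), MAX_SPEED)
    agent["pos"] = (px + agent["vel"][0], py + agent["vel"][1])

agent = {"pos": (0.0, 0.0), "vel": (0.0, 0.0), "mass": 1.0}
for _ in range(5):
    seek_update(agent, (100.0, 0.0))
print(agent["pos"])  # a few units along the x-axis, accelerating toward the target
```

Because the steering force is capped, the velocity bends toward the target over several frames instead of snapping, which is where the smooth curves come from.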

I have re-created the seek movement using good ol' trustworthy XNA. It's what we used to make Dot Wars, so, I mean... it technically works. If you'd like to take a look at the code, just visit my brand spanking new Github for it.

I was going to take screenshots, but then you can't see what's going on too well... And my screen recorder was wack. So just run the code yourself.

I must confess that it was hard for me to get this method at first. Explicitly changing the velocity felt vulnerable to me. When we coded the movement for Dot Wars, all changes to the velocity were done by adding accelerations to the dots instead. This allowed us to move the dots around as well as apply outside forces onto them. It was kind of nice and Newtonian.

So I first tried to translate this system into an entirely acceleration-based movement system. It didn't work. These steering techniques manipulate the velocity directly in a lot less code than the acceleration technique. In order to get the same effect as truncating the velocity using accelerations, I'd have to use dragging and boosting forces that cancel each other out at a certain max speed, which isn't really what I wanted to do. We did it for Dot Wars, but even then it was not the best way to handle the movements.

I eventually realized that this method of moving the agent worked better, looked better, and could still have outside forces acting upon it so long as the accelerations were applied after the movement velocity was calculated. I think I'd like to look at that eventually. Maybe after I finish this series.

It turns out that seeking is a pretty easy behavior to implement. On top of that, it looks good. It helps the player believe that the agents are actually steering around the world, which, like in the case of Dot Wars, is really good at making the player feel like they are battling worthy AI opponents.

Granted, this is only a small part of the final agent behavior, but when designing AI's and, really, anything in games, it's important to keep track of details like movement because only by doing so can you design the best game possible.

Friday, November 1, 2013

Agent Movement: Introduction

You came back. Awesome. That means that you get to learn about:

Agent Movement/Steering

Ain't it cool? It's a nice, clean way of introducing the topic. I'll probably use it every time I introduce something new. Keep me accountable.

When I say, "Agent Movement/Steering," I am referring to the behavior that gives AI's the ability to move around. This movement can then refer to several different types or dimensions of movement, defined by the axes involved. For example, a platformer needs to add movement in the left, right, up, and down directions, as well as account for gravity towards the bottom of the screen. This is different from, say, how a first-person shooter handles movement. In an FPS, the player can move left, right, forward, backward, up, and down. Platformer agents move differently because the affected axes are different. When coding AI for a specific set of axes of movement, you need to understand these axes when deciding how to move the agent around the world.

In the Gamedevtuts+ tutorial on Understanding Steering Behaviors, the manipulated axes are the left, right, forward, and backward axes. In other words, the steering techniques apply to the top-down perspective. This is similar to many classic game perspectives like in Asteroids, as well as in new games like, uh, Dot Wars. Because there are two axes to manipulate, you can easily define movement in this space with a two dimensional vector that represents an agent's velocity.

So why steer an agent instead of, say, make explicit changes in direction? What does it mean to steer?

An agent that steers looks more fluid than an agent that moves around the screen discretely. Its movements are decided with a two (or three, it's whatever) dimensional velocity. This more closely mimics real physics than discrete "Go up. Now STOP!" movement. This is why it is a good idea to understand steering behavior for use with AI's; it's often nice to make your agents move as naturally as possible.

The series Understanding Steering Behaviors explains basic steering behavior pretty well. You should go check it out. I'll wait.

If you decided that you were going to go ahead and read the series later, I'll give you a quick rundown here. Basic steering patterns can be combined together to create more complex patterns.

Basics:

  • Seek: Go towards a point in a realistic way
  • Flee: Go away from a point in a realistic way
These two movements are pretty self explanatory. Seek moves towards a point, whereas flee moves away from a point. However, instead of simply moving discretely towards/away from a point, an agent that seeks or flees manipulates its velocity smoothly. This creates a curved movement path, instead of a jagged movement path, which looks much better to a player.

These two basic movements can be augmented to give some new behavior. In fact, nearly every complex movement behavior is built by adding to these two movements:

Augmented:
  • Wander: Move aimlessly in a realistic way
  • Evade: Avoid a target in a realistic way
  • Pursue: Chase a target in a realistic way
Just read about these. They're awesome.


These basic and augmented movements can then be combined in a "Movement Manager." The movement manager allows you to apply several different steering behaviors at once to create complex, compound movements.

Compound Movements:

  • Collision Avoidance: Slide around obstacles while moving in other ways.
  • Path Following: Define a path for agents
  • Leader Following: Create subordinate, minion-like agents
Over the next few weeks, I'll go into more depth about each Gamedevtuts+ tutorial. Go ahead and acquaint yourself with the first one about basic seeking so that my analysis makes more sense. I really like the series because it does a good job of compartmentalizing agent movement, so maybe you can like it, too!

It's really super cool, I promise.

Friday, October 25, 2013

"What is going on?" An introduction to Agents of Video Games

Hello.

Now that I have your attention, you are hooked. You have no choice but to read my blog. Not that you really had anything better to do... You were probably just going to browse Reddit for a few hours anyway.

You've stumbled upon my little blog about artificial intelligence (AI) which is written by me, the "Dan Man." Totally my real name.

Not really my real name. That was a joke. It was pretty funny. My real name is Daniel Pumford, and I'm a third year computer science student at the University of Arkansas in Fayetteville, Arkansas. I also happen to be a game programmer at a really cool video game company that my friends and I started called Emberware. We're working on getting our first game up on the Xbox Live Indie Games store, so... keep checking up on that. Shameless plug.

While working on the game (Dot Wars), I was looking around the internet for some good resources for learning about AI in video games. There was the occasional article about agent movement or dungeon pathfinding or something else cool, but I didn't notice a blog specific to basic AI programming. Such a blog would have been fairly helpful for me three years ago, back when I was figuring out the basics of the craft.

So I started this blog to act as an aggregator for artificial intelligence tutorials and articles. Instead of having to look all over the internet to find interesting AI information, you (the reader) can just visit my blog instead! This can help new or experienced game programmers learn more about agent design in video games at a faster pace. In addition to providing links and analysis of other writers' articles, I will also work to provide some good, quality articles of my own making. This can help me understand AI development a little better, while maybe helping you learn a little, too.

When writing about outside articles, I will either try to implement the content of the articles or analyze the methods for merit. Of course, I could do both or neither or something completely different, but you can be sure to expect something interesting to read. It may even be thoughtful, if I'm not too, well... bad at writing.

In addition to those articles, I'm really looking forward to writing a couple articles of my own! While working on Dot Wars at Emberware, I worked on both the underlying game engine as well as much of the agent behavior. I really, really liked working on them, and I developed a couple of my own methods for making decisions and pathfinding that I'd like to share with, well, you. It'll also be a good way for me to get some feedback on those ideas in a way that I usually wouldn't.

That's essentially what you can expect from this blog: some well-thought-out articles and analyses of artificial intelligence coding practices. As far as scheduling goes, starting next week you can check back every Friday for a brand new blog post. I'm going to first talk about this agent movement article that I mentioned earlier, so take a look at it beforehand so you can, you know... actually learn something.