Tuesday, April 28, 2009

Input Wrangling Part 2

So in the previous post we defined a Control which has a name and one or more inputs it has to poll to know its status. The messy part here is that while Microsoft has made it easy to poll a single input to know its status, such as a game pad's left shoulder button using GamePad.GetState(players[1]).Buttons.LeftShoulder, setting things up so that which input gets polled is dynamically configured is, if not complex, rather wordy. The reason is that each control has to be able to potentially poll each and every possibly configured input, and there are over 25 of those, not including each keyboard key, so you end up with a long switch statement or if/else-if chain that looks like this:

    foreach (InputSettings.Trigger trigger in triggerList)
    {
        bool status = false;
        switch (trigger.inputType)
        {
                ...
            case "LeftShoulder":
                status = pgState.Buttons.LeftShoulder == ButtonState.Pressed;
                break;
            case "RightShoulder":
                status = pgState.Buttons.RightShoulder == ButtonState.Pressed;
                break;
                ...
        }
        // deal with input type status conflicts here
    }

As you can see, each Control has to loop over this massive switch statement for each of its triggers; there must be a more efficient way. What we'd like to do is store the inputs themselves in an array or list so we can poll them directly without having to sift through a lot of inputs we don't want first. Unfortunately you can't store a reference to an input, and Microsoft strongly discourages storing a reference to a state instance, stating it's important to get a new instance every time you want to poll an input. What we can do is store the code that does the polling itself, independently.

One way to do this is to create small classes that hold one input poll each, such as:

    public abstract class InputPoll
    {
        public abstract float poll(GamePadState pgState);
    }
    public class LeftShoulder : InputPoll
    {
        public override float poll(GamePadState pgState)
        {
            return pgState.Buttons.LeftShoulder == ButtonState.Pressed ? 1.0f : 0.0f;
        }
    }
    public class RightShoulder : InputPoll
    {
        public override float poll(GamePadState pgState)
        {
            return pgState.Buttons.RightShoulder == ButtonState.Pressed ? 1.0f : 0.0f;
        }
    }

Then you have code that creates a list of these input trigger objects for each Control at instantiation time:

    List<InputPoll> inputTriggers = new List<InputPoll>();
    foreach (InputSettings.Trigger trigger in triggerList)
    {
        switch (trigger.Type)
        {
            case "LeftShoulder":
                inputTriggers.Add(new LeftShoulder());
                break;
            case "RightShoulder":
                inputTriggers.Add(new RightShoulder());
                break;
        }
    }

Then on each update the Control only has to go through its list of trigger inputs to determine its status:

    foreach(InputPoll inputTrigger in inputTriggers)
    {
        status = inputTrigger.poll(pgState);
        // deal with input type status conflicts here
    }
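
That "deal with conflicts" comment is doing some real work, so here is one way it might shake out. This is just my current thinking, not anything dictated by Spacewar or XNA: the control simply keeps whichever of its triggers reports the largest magnitude this frame, so a half-pushed thumbstick beats an unpressed button.

    float status = 0.0f;
    foreach (InputPoll inputTrigger in inputTriggers)
    {
        float value = inputTrigger.poll(pgState);
        // keep the trigger that's pushed the furthest (handles negative axis values too)
        if (Math.Abs(value) > Math.Abs(status))
            status = value;
    }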

What this has really accomplished is moving the big ugly switch statement from the update code to the initialization code, which is a good thing. We can, however, do a little better.

Next:  The joys and dangers of C# delegates

Wednesday, April 15, 2009

Input Wrangling Part 1

Input is one of, if not the most, critical parts of a game; bad controls can ruin an otherwise great game. Fortunately XNA makes getting at game controller state very easy, with statically available objects such as GamePad, Keyboard and Mouse providing state information. What XNA does not provide is a way to map this raw hardware state information into some sort of control scheme. I know that being able to remap the input configuration is not something console gamers are used to, or would use if it were available. But I know I'm going to be playing with the control setup throughout development, and I would love to avoid having to go to each place a control is referenced in the code to make changes each and every time.

Apparently the developers of Spacewar felt the same way, as the project includes some control mapping. But it always felt somewhat wrong to me, more inconvenient than it should be. It didn't really strike me what was wrong with Spacewar's control mapping implementation until I started thinking about how I would change it. The fact is, it seems to be written backwards. To understand what I mean by this, let's look at a little design philosophy.

Every software program is divided into sections to various degrees. In games you typically have the graphics engine, sound, game logic, AI, UI, data access, etc. etc., and each of these sections relates to the others in certain ways. One way to look at these interactions is to assign roles to the parts of the program which define how they relate. One common example is the Service/Client pair of roles. For example, the data access code for a game (the content pipeline) provides a Service to the other parts of the game, which are its Clients. The graphics engine Client can make a request of the content pipeline Service for a given graphic image and the Service will provide it.

Thinking about it this way can help you design your systems to be as functional as possible. One important fact about the Client/Service relationship to keep in mind is that the Client always defines the interactions: the Client has needs, and the Service is there to provide for those needs as conveniently as possible. If the Service starts imposing rules on the Client, the behavior of the Client (in this case the game) will suffer.

While Microsoft's input state objects are easy to use, they are not necessarily convenient when you start adding multiple players and different control types, and because they are properties up to three deep (e.g. gpState.ThumbSticks.Left.Y) they do not lend themselves well to abstraction. Any input service should hide this inconvenience. Spacewar's implementation does nothing for this; in fact it imposes its own almost equally restrictive format (XInputHelper.GamePads[player].ThumbStickLeftY), which allows player indexing and one-to-one mapping of keyboard keys to GamePad buttons but little else. In short, the XInputHelper and GamePadHelper do a fair amount of work to provide relatively little benefit to their client, the game.

My goal here is to walk us through designing an input Service called the InputWrangler that provides more benefits to the game with less work for us. Follow me now as we take a mental walk through an informal design session...

Problem: My game has 4 thrusters that need to be turned off and on via some user input, input over a network, or possibly by an AI bot. Multiple inputs should be able to trigger a given thruster event (Dpad buttons, mouse control, keyboard keys, etc. etc.). There will be up to 4 different players. The game code should be able to do something as simple as Inputs.Player1.ThrusterUp and not care how that value got set.

Let's say these thrusters are controls; the game can have as many controls as it needs, like Thrusters, Guns, Jump, Menu, etc. etc. Let's call the input devices triggers, so the GamePad left shoulder button is a trigger just as the Enter key on the keyboard is a trigger, and we can make an AI hook into the input service another kind of trigger.

Solution: The client of the service (the game) has Controls, which need to have multiple Triggers.

What the Game needs to know:
  • What the value of a given control is

What the Game does not need to know:
  • What trigger(s) are affecting a given control

What the InputWrangler needs to know:
  • What controls there are
  • What triggers are available
  • Which triggers will affect which controls

What the InputWrangler does not need to know:
  • How the controls are used

It's okay for the game to intrinsically know what the controls are; they are part of its basic structure. But it is not okay for the InputWrangler to have built-in controls, because different parts of the game have different control needs and they shouldn't have to deal with each other's inputs. For example, the Menu screen needs a Select control but doesn't need the Thruster controls from the GamePlay screen.
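
To pin down what that means in code, here is roughly the surface I want the game to see. Everything below is a placeholder sketch, not the real interface; that gets built in the next post.

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Sketch only: the InputWrangler owns the Controls, keeps their values
    // current each frame, and the game just reads them by name.
    public class Control
    {
        public string Name;   // e.g. "ThrustUp"
        public float Value;   // 0 or 1 for buttons, -1..1 for a thumbstick axis
    }

    public class InputWrangler
    {
        private Dictionary<PlayerIndex, Dictionary<string, Control>> players =
            new Dictionary<PlayerIndex, Dictionary<string, Control>>();

        // e.g. if (wrangler.GetControl(PlayerIndex.One, "ThrustUp").Value > 0f) ship.ThrustUp();
        public Control GetControl(PlayerIndex player, string name)
        {
            return players[player][name];
        }
    }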

Let's look at an example of what my current gameplay needs, in its simplest state with only two controls:

Game Play Controls & Triggers

ThrusterUp
  • Trigger GamePad DPadUp
  • Trigger GamePad ThumbStickLeftY
  • Trigger Keyboard UpArrow
ThrusterDown
  • Trigger GamePad DPadDown
  • Trigger GamePad ThumbStickLeftY
  • Trigger Keyboard DownArrow

We need to get this information into the InputWrangler, and we need to do it in such a way that different screens for the game can define different control setups. This looks like a good job for configuration XML files. Here's what I would like the control file for the above configuration to look like:

<InputSettings>
  <Inputs>
    <Players>
      <Player>
        <Controls>
          <Control name="ThrustUp">
            <Triggers>
              <Trigger controller="GamePad" value="DPadUp"/>
              <Trigger controller="GamePad" value="ThumbSticksLeftY"/>
              <Trigger controller="Key" value="Up"/>
            </Triggers>
          </Control>
          <Control name="ThrustDown">
            <Triggers>
              <Trigger controller="GamePad" value="DPadDown"/>
              <Trigger controller="GamePad" value="ThumbSticksLeftY"/>
              <Trigger controller="Key" value="Down"/>
            </Triggers>
          </Control>
        </Controls>
      </Player>
    </Players>
  </Inputs>
</InputSettings>

While the simple menu screen configuration would look like this (you can only hit Play right now):

<InputSettings>
  <Inputs>
    <Players>
      <Player>
        <Controls>
          <Control name="Play">
            <Triggers>
              <Trigger controller="GamePad" value="A"/>
              <Trigger controller="GamePad" value="X"/>
              <Trigger controller="Key" value="Enter"/>
              <Trigger controller="Key" value="Space"/>
              <Trigger controller="Mouse" value="LeftButton"/>
            </Triggers>
          </Control>
        </Controls>
      </Player>
    </Players>
  </Inputs>
</InputSettings>
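
For planning purposes, here is a sketch of the settings classes that XML could deserialize into. I'm assuming the plain .NET XmlSerializer here rather than the content pipeline, and all of these names, along with the file name in the loading snippet, are just my guesses at what InputSettings will end up looking like.

    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;

    public class InputSettings
    {
        public InputList Inputs;   // the <Inputs> wrapper element

        public class InputList
        {
            [XmlArray("Players"), XmlArrayItem("Player")]
            public List<Player> Players;
        }

        public class Player
        {
            [XmlArray("Controls"), XmlArrayItem("Control")]
            public List<Control> Controls;
        }

        public class Control
        {
            [XmlAttribute("name")]
            public string Name;

            [XmlArray("Triggers"), XmlArrayItem("Trigger")]
            public List<Trigger> Triggers;
        }

        public class Trigger
        {
            [XmlAttribute("controller")]
            public string Controller;   // "GamePad", "Key", "Mouse"

            [XmlAttribute("value")]
            public string Value;        // e.g. "DPadUp" -- which input on that controller
        }
    }

Loading a screen's control setup would then be a few lines (the file name is hypothetical):

    XmlSerializer serializer = new XmlSerializer(typeof(InputSettings));
    InputSettings settings;
    using (FileStream stream = File.OpenRead("Content/PlayScreenControls.xml"))
    {
        settings = (InputSettings)serializer.Deserialize(stream);
    }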

Next: InputWrangler implementation, the joys and dangers of C# delegates.

Wednesday, April 8, 2009

Ah, Physics!

The goal this time is to simply hook in the JigLibX physics engine, add some gravity and have the ship fall. Then put in a height map, probably the exact one from the JigLibX demo program and have the ship land on that.

Hooking in the JigLibX engine seems pretty simple: you initialize a new base physics object and set some configuration settings like gravity:

            physicSystem = new PhysicsSystem();
            physicSystem.CollisionSystem = new CollisionSystemSAP();
            physicSystem.EnableFreezing = true;
            physicSystem.SolverType = PhysicsSystem.Solver.Normal;
            physicSystem.CollisionSystem.UseSweepTests = true;
            physicSystem.NumCollisionIterations = 5;
            physicSystem.NumContactIterations = 15;
            physicSystem.NumPenetrationRelaxtionTimesteps = 20;
            physicSystem.Gravity = new Vector3(0f, -5f, 0f);

Every object you want to have a physical presence in the world then needs an instance of a CollisionSkin, which defines how the object interacts with others in the physical world, and if it moves (as opposed to being immobile like a heightmap) the CollisionSkin needs an instance of a Body object. Then you can add the object to the physics engine using a static method. To create a completely default object in the world you would do something like:

Body body = new Body(); // just a dummy. The PhysicObject uses its position to get the draw pos
CollisionSkin  collision = new CollisionSkin(body);
PhysicsSystem.CurrentPhysicsSystem.CollisionSystem.AddCollisionSkin(collision);

The use of the static method worries me; I may want more than one physics system running at a time for some effects, and this could complicate that. For now it works fine. I added the physics initialization code to the constructor of my PlayScreen (no need to do it at the Game level since not all screens will need physics) and created a PhysicalSceneItem object which creates the Body and CollisionSkin and automatically adds itself to the physics system. The PlayerShipSceneItem is now a child of that base class and uses the PlayerShipMeshShape to adjust the collision item to match the shape and size of my ship (a simple rectangle for now) and sets some sensible Mass and Material properties such as surface friction, roughness and elasticity.
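
Boiled down, the PhysicalSceneItem constructor does something like the following. This is a sketch from memory of the JigLibX demo code, so treat the exact overloads and the material numbers as approximate; the real class also derives from SceneItem and wires up the Shape it will move around.

    using Microsoft.Xna.Framework;
    using JigLibX.Collision;   // CollisionSkin, MaterialProperties
    using JigLibX.Geometry;    // Box primitive
    using JigLibX.Physics;     // Body, PhysicsSystem

    public class PhysicalSceneItem
    {
        protected Body body;
        protected CollisionSkin collision;

        public PhysicalSceneItem(Vector3 position, Vector3 sideLengths)
        {
            // Same Body + CollisionSkin pairing as the default-object snippet above.
            body = new Body();
            collision = new CollisionSkin(body);
            body.CollisionSkin = collision;

            // Give the skin an actual shape: a box, with elasticity plus static
            // and dynamic roughness values in MaterialProperties.
            collision.AddPrimitive(
                new Box(-0.5f * sideLengths, Matrix.Identity, sideLengths),
                new MaterialProperties(0.3f, 0.6f, 0.6f));

            // Place it and register it with the current (static) physics system.
            body.MoveTo(position, Matrix.Identity);
            body.EnableBody();
        }
    }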

An important thing to remember is that on every Update call to the PlayScreen the physics engine needs to be updated, and then each PhysicalSceneItem, in its own Update call, needs to adjust its Shape's translation and rotation to match those of its physical Body instance. Otherwise nothing will ever visually happen. This can be tricky because the Shape's location information is stored in a matrix, so if we had to adjust it we'd have to take it apart into translation, rotation and scale components using Matrix.Decompose, make the adjustments and then put it back together. Luckily we don't have to do that now (though I'll have to do it later), because we don't care where the Shape was, we just want it to be where the Body says it is now. The Body uses matrices as well but keeps them separated into translation, scale and orientation, so we build the Shape's new matrix by multiplying them together into one:

                shape.World = Matrix.CreateTranslation(-center) *
                              Matrix.CreateScale(scale) *
                              body.Orientation *
                              Matrix.CreateTranslation(body.Position + center);
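
The ordering in the PlayScreen's Update is then simply: step the simulation, then let every physical item copy its Body back into its Shape as above. Roughly like this, assuming the Integrate call from the JigLibX sample; the item list and the Update signature here are simplified:

    // In PlayScreen.Update(): advance the physics simulation first...
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
    PhysicsSystem.CurrentPhysicsSystem.Integrate(elapsed);

    // ...then each PhysicalSceneItem rebuilds shape.World from its body as shown above.
    foreach (PhysicalSceneItem item in physicalItems)
        item.Update(gameTime);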

The 'center' Vector3 lets an object have a center point around which it rotates that is not the physical center of its shape. I'm not using it right now and I'm not sure how it will behave with the physics engine... I might just pull it out.

So it's all put together, compiled and run, and the ship obediently falls away from the camera! Yay! Now for the height map.

I pretty much bring the height map code directly from the JigLibX demo program. This setup uses a custom content pipeline to read in a 257x257 grayscale bitmap, which it translates into a 3D Model and an array of heights that are stored in the Content directory. At run time the game loads the Model and the heights and creates a native JigLibX Heightmap instance. One potential problem I see immediately is that the generated model is over 4 megabytes in size; given how big I want levels to be, how many levels I want, and that I want to come in under 50 megabytes total for the entire game, this is going to have to change. I'll have to store just the bitmaps (the sample is 42KB stored as a .png) and create the Models at level load time. For now I'll leave that on the TODO list; I just want to see it in my game.

Hooking it up was once again rather simple: I made a HeightmapMeshShape and a HeightmapSceneItem derived from the PhysicalSceneItem, initialized them in the PlayScreen constructor, fired up the game... and that's when things went sideways.

Literally sideways. I was expecting to be looking down on the heightmap with the ship falling away from the camera and landing in the center of the map. But instead I got a nice, if unexpected, side view of the heightmap. Apparently 'which way is up' isn't the same for my fledgling game and the native JigLibX heightmap object. JigLibX itself is neutral as far as subjective directions go, as a good physics engine must be (there is no 'up' in space), but the heightmap object has a very definite idea: 'up' is along the positive Y axis, while I've been setting my game up to use positive Z as 'up'. Also, the JigLibX Heightmap ignores its rotation matrix. Hrm, to change my game or the JigLibX Heightmap?

I spend a couple of hours trying to change the Heightmap. The pipeline wasn't too hard; I got the mesh rotated and showing properly, and I think I got the collision part of the Heightmap playing along. I'm now looking down on it in the game and the ship falls into it and hits it. But the ship's resulting movement based on the hit is all wonky. It goes careening off in odd directions and eventually falls through the world.

I give up. My game is in its infancy and changing it will be much simpler than mucking about in the depths of the Heightmap physics object. It'll take me some time to get used to Y being up and X, Z being map coordinates, but I'll cope.

Once I've undone my changes and moved and re-oriented the camera, my ship falls and bounces off the ground. Yay!

One lingering thing: I went into 3DS Max, realigned my ship model to the new Y-is-up thinking, re-exported it, replaced the one in my Content directory and re-built the project. But the ship was still oriented wrong. I had made some other changes to it, and those showed up, so I know the new model was imported. I've tried several other things, and still the ship is oriented wrong. I suspect the Max .fbx exporter is 'helping' me by re-orienting the model the way it thinks I want it, knowing that Y and Z axis flipping can be an issue. I'll have to figure out how to stop it from doing that.

Next: Player Input Wrangling

Wednesday, April 1, 2009

Finding A Way To Make Things Fall Down

I'm all for leveraging any engines or libraries that are available to simplify a task. The only good reasons to re-invent the wheel are if you have very specific needs that aren't met well by what's available, if you need a degree of optimization that isn't provided by a general-case system, or if you have the desire and the available time to learn to do it yourself. For this game I want realistic-feeling physics: I want the player's steam-powered hover ship to move and react in realistic ways (that sounds contradictory, doesn't it?), I want to be able to knock things over and have things explode in satisfying ways, and all that means a decent and fairly robust physics engine. That's not something I have the time to really build myself, and I'm sure there are a number of free or reasonably cheap ones available in the community for C# by now.

My requirements are:
  • Available source (I'm going to want changes and special cases)
  • 3D focused, 2D will not work for this
  • Rectangle and Sphere primitives at minimum
  • Mesh collision detection
  • Joints: spring joints and joint limiters a bonus
  • At least minimal surface friction

Googling around, I've found there really isn't that much in the way of physics engines for C# yet. A number of 2D libraries seem fairly well along, but the 3D realm is pretty limited. Fortunately the JigLibX library, a C# port of the C++ JigLib, appears to fit my bill almost perfectly. There doesn't appear to be native support for spring constraints, however the Car object in the demo is doing pretty much exactly what I need, and a look through the sample application's code is encouraging. I'll import this into my project and see how it goes.

Now to get something on the screen. I've built and textured a simple sample ship (really a fancy cube) in an older version of 3DS Max, exported it as .fbx, copied it and its texture over to my Content folder and imported the model into my project. Now I need to make a class that will represent this shape.

Hmmm. There appear to be two kinds of objects in the Spacewar game that represent something in the game and hold position information. There are SceneItems, which can be added to, and are managed by, Screen instances, and there are Shapes, which are given to SceneItems. I'm not entirely sure what this duplication is for: why you would create a SceneItem, then create a Shape, then give the Shape to the SceneItem and the SceneItem to the Screen instance. Upon inspection it seems the SceneItem-derived classes are responsible for the behavior of game objects, how they move and how they collide with one another, while the Shape-derived classes describe what gets drawn on screen. This division would allow you to have game objects that look different but behave the same, such as multiple types of asteroids. The problem I see with this is that most objects in a game that look different also behave at least slightly differently. The collision bounds for asteroids should differ based on the shape of the asteroid, for example, and different-sized asteroids would react differently to a collision. I can see this leading to a situation where you have an AlienSceneItem class with huge switch statements (or if-else chains) that change the behaviors based on what Shape it's loaded with, creating a Gordian knot of logic that would be far better served by breaking it out into separate classes. I'll leave it this way for now, but if I end up with just about a one-to-one ratio of SceneItems to Shapes I'll collapse them into one base class.

So I've made my PlayerShipModelShape and my PlayerShipSceneItem; I initialize them at the start of my PlayerScreen and put the Shape in the SceneItem and the SceneItem in the Screen, so everything is getting Update and Draw calls as it should. Now I just want to see it on the screen. Alas, the shader is unhappy. The shader for Spacewar is more advanced than I need it to be right now: it's got 14 parameters, and while most of them are pretty obvious (world, worldViewProjection, viewPosition, etc. etc.) I don't really want to spend the time to load up all its textures and set its params just right when I'm going to have to write my own shader later anyway. I'm certainly not ready for that now.

I rip out the Spacewar shader and all of its params, copy some BasicEffect-using code from somewhere and, lo and behold, nothing shows. Double-check the camera location and the ship location... ah, the default Spacewar camera is at z = 500 units, but my ship is 1.5 units across; my ship is there, it's just smaller than a pixel right now. Everything in the Spacewar universe must be much bigger than my ship. I move the camera to 5 units and there's my ship, nice and textured and everything.
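
For the record, the BasicEffect drawing code is just the standard XNA pattern, something like this (shipModel, world and camera are stand-ins for whatever the Shape and Screen actually hold):

    // Draw a loaded Model with the BasicEffect the content pipeline assigns to it.
    foreach (ModelMesh mesh in shipModel.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();    // simple built-in three-light setup
            effect.TextureEnabled = true;      // use the texture exported with the .fbx
            effect.World = world;              // the Shape's world matrix
            effect.View = camera.View;
            effect.Projection = camera.Projection;
        }
        mesh.Draw();
    }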

Next: Fun With Physics, and Which Way is Up???