Wednesday, December 5, 2012

Introducing Mojo

During my last couple of months at Rocket Ninja (AKA ThridMotion) I did a lot of work in Node.js. My time with it brought me to one inevitable conclusion: Node.js is a powerful tool, but JavaScript is NOT a good server-side scripting language. The trouble is that JavaScript is single threaded while servers necessarily deal with many potentially blocking external resources, and the result is a HUGE reliance on callbacks. In many Node.js apps I've seen callbacks nest five deep, callbacks pass callbacks they've been passed on to other callbacks, and callbacks get wrangled by special libraries built just for that purpose. This makes code that is hard to debug, difficult to maintain, impossible to refactor, and challenging to train people on. JavaScript's single-threaded nature forces Node.js code into complex design patterns that turn any medium-to-large project into a tangle of callbacks. It would be nice if Node implemented a language that handles potential blocking better, with multiple threads or processes.
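To make the pattern concrete, here's a hedged sketch of the nesting I mean, in TypeScript. The fetchUser/fetchOrders/renderPage functions are hypothetical stand-ins for the blocking external resources a server hits, not anything from a real app:

```typescript
// Node-style error-first callbacks.
type Callback<T> = (err: Error | null, result?: T) => void;

// Hypothetical async resources, each simulated with a timer.
function fetchUser(id: number, cb: Callback<string>) {
  setTimeout(() => cb(null, `user${id}`), 0);
}
function fetchOrders(user: string, cb: Callback<string[]>) {
  setTimeout(() => cb(null, [`${user}-order1`]), 0);
}
function renderPage(orders: string[], cb: Callback<string>) {
  setTimeout(() => cb(null, `page with ${orders.length} order(s)`), 0);
}

// Three sequential resources already push the handler three callbacks deep,
// with the same error-handling boilerplate repeated at every level.
function handleRequest(id: number, done: Callback<string>) {
  fetchUser(id, (err, user) => {
    if (err || !user) return done(err);
    fetchOrders(user, (err, orders) => {
      if (err || !orders) return done(err);
      renderPage(orders, (err, page) => {
        if (err || !page) return done(err);
        done(null, page);
      });
    });
  });
}
```

Each additional resource adds another level of indentation and another copy of the error check, which is exactly the tangle described above.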

Thus charmed by the idea of Node.js but frustrated by JavaScript, I set out to find something better. There isn't really anything. Erlang promises to do much the same thing using an actor-style multi-threaded environment, but the language's syntax is so far removed from typical C- and Pascal-derived languages that getting good with it would require time that's hard to justify. Other common scripting languages, like Python and PHP, are also single threaded and would suffer the same problem as JavaScript.

So, suddenly having more free time than usual, I dusted off an old scripting language I started working on a decade ago and determined to create my own ideal server side scripting language using a multiple process model of my own design. The end goal is to replace the V8 engine in Node.js, though that is a long, long way away.

The language is called Mojo, and while the prototype nears functional completeness I've decided to post some information and solicit feedback on the design. This isn't nearly all there is to it; this is just a brief overview of the process model and syntax. I'll fill in details as I go.



Mojo is a versatile, object-oriented, multi-process scripting language intended to be instantly familiar to anyone who's used JavaScript, Java, C++ or C# before. Its loosely actor-based process model was designed to make managing any number of processes easy and clean, without the need for complex locking or synchronization mechanisms.

All code in Mojo is in classes. When the VM starts, an internal process class is instantiated which runs the startup code or any code passed in to be executed; this instance is called the “root” process instance. If your program simply consists of

printf("Hello World");

Then that code is executed as part of the constructor of the root instance.

To create a new process in Mojo you create a class to manage it. A process class is defined just as you would define any other class; it's a best practice to name the class “Process<name>”.

class ProcessA {
    var count = 0;

    method initialize(){
        // do something
    }

    method doAddCount(amount){
        count += amount;
    }

    method getCount(){
        return count;
    }
}

Constructor methods in Mojo are named “initialize”; you can use them in anonymous classes as well.

You start a new process the same way you create a new instance, but using “spawn” instead of “new”:

var testProc = spawn ProcessA();

This creates a new class instance and a new process to run it. It's a best practice to name a class instance that is a process with “<name>Proc”; it is referred to as a process instance, or just a process. The initialize method in the instance will execute when the process is created, but the process will not terminate when it runs out of code to run; it will hang around waiting for more things to do. A class instance that creates a process is considered that process's owner, and only an owner or the process itself can kill a process. Its owner can kill testProc with

testProc.exit();

To get testProc to do something, you call a method on it:

testProc.doAddCount(5);

This will cause the testProc process to execute the code in doAddCount(). The calling process does not execute code owned by some other process. This brings up the somewhat complex issue of code ownership. As I mentioned before, all code is part of a class. When an instance of a class is created, that instance (and its code) is owned by the instance of the class that called "new" to create it. That owning instance is itself owned by the instance that created it, and so on up an ownership chain. Somewhere up that chain is an instance that is also a process, even if it's all the way up at the root instance, and that process runs all the code in that ownership chain.
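Since Mojo itself can't be run yet, here's a small TypeScript sketch of the ownership-chain resolution described above; the names (owner, isProcess, owningProcess) are illustrative, not Mojo syntax:

```typescript
// Every instance records its owner; the process that runs an instance's
// code is found by walking up the chain to the nearest instance that is
// itself a process (the root instance is always a process).
class Instance {
  constructor(
    public name: string,
    public owner: Instance | null,
    public isProcess: boolean
  ) {}

  owningProcess(): Instance {
    let node: Instance = this;
    while (!node.isProcess && node.owner) node = node.owner;
    return node;
  }
}

const root = new Instance("root", null, true);
const helper = new Instance("helper", root, false);        // "new" -> runs on root
const worker = new Instance("worker", root, true);         // "spawn" -> own process
const workerChild = new Instance("child", worker, false);  // runs on worker
```

So helper's code runs on the root process, while workerChild's code runs on the worker process that created it.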

In the case where a new class instance is also a new process (via “spawn”), the instance is owned by its creating code's class instance, but it runs its own code, and it owns and runs the code of instances it creates.

Using the above syntax of testProc.doAddCount(5), the calling process will block waiting for the testProc process to finish running doAddCount(). If testProc is a busy process this could take some time, which is obviously not optimal. Why have multiple processes if they have to wait for one another? There is another way to call a method in Mojo:

testProc:doAddCount(5);

This is a non-blocking method call: the calling process continues executing while doAddCount() is pushed onto the testProc process's execution stack. If doAddCount() were to try to return a value, it would be lost.
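The two call styles can be sketched in TypeScript with a per-process mailbox that runs queued methods one at a time (Mailbox, call and post are my own names, not Mojo's): a blocking call awaits the queued method's result, while a non-blocking call just enqueues it and discards the return value.

```typescript
// A minimal single-queue "process": methods run one at a time, in order,
// which is what makes a Mojo-style instance safe without locks.
class Mailbox {
  private queue: Promise<unknown> = Promise.resolve();

  // Blocking style (testProc.doAddCount(5)): caller awaits the result.
  call<T>(fn: () => T): Promise<T> {
    const result = this.queue.then(fn);
    this.queue = result.catch(() => undefined); // keep the queue alive on errors
    return result;
  }

  // Non-blocking style (testProc:doAddCount(5)): enqueue and move on;
  // any return value is lost, just as in Mojo.
  post(fn: () => unknown): void {
    this.queue = this.queue.then(fn).catch(() => undefined);
  }
}

class ProcessA {
  private mailbox = new Mailbox();
  private count = 0;

  doAddCountBlocking(amount: number): Promise<number> {
    return this.mailbox.call(() => (this.count += amount));
  }
  doAddCountAsync(amount: number): void {
    this.mailbox.post(() => (this.count += amount));
  }
  getCount(): Promise<number> {
    return this.mailbox.call(() => this.count);
  }
}
```

Because every method on the instance funnels through the same queue, two callers can never mutate count at the same moment, which is the no-locking guarantee the example below relies on.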

Let's look at a more involved example:

class RequestHandler {

    var pool = [];

    class DoStuffProcess {
        method doIt(stuff){
            stuff = stuff + " done.";
            this.owner:doneStuff(stuff, this);
        }
    } // class DoStuffProcess

    method handleRequest(request){
        var handlerProc = null;
        if (pool.length > 0){
            handlerProc = pool.pop();
        } else {
            handlerProc = spawn DoStuffProcess();
        }

        handlerProc:doIt(request);
    }

    method doneStuff(stuff, workerProc){
        pool.push(workerProc);
        printf(stuff);
    }

} // class RequestHandler

var handler = new RequestHandler();
handler.handleRequest("Test");
handler.handleRequest("Test1");
handler.handleRequest("Test2");
handler.handleRequest("Test3");

In this example a RequestHandler instance maintains a pool of DoStuffProcess processes, creating new ones as needed. Notice that we do not need any special locking or synchronizing around the pop() and push() calls. This is because the process that owns the RequestHandler instance (in this case the root process) is the only process that will ever execute handleRequest() and doneStuff(), so we need not worry about things like two processes trying to pop() the same worker from the pool at the exact same time, or two processes trying to push() new entries into the pool at the exact same time. Also notice how the RequestHandler does not need to keep track of the new DoStuffProcess instances it creates, because the processes themselves call doneStuff() to get added to the pool when they are ready for more work. On top of that, if something goes wrong and a worker process dies, handler doesn't need to do anything about it.

We can limit pool growth by modifying doneStuff() like this

method doneStuff(stuff, workerProc){
    if (pool.length < 101){
        pool.push(workerProc);
    } else {
        workerProc.exit();
    }
    printf(stuff);
}

We must explicitly exit the processes or they will hang around forever waiting for something to call one of their methods (or maybe the garbage collector will terminate processes that are no longer referenced anywhere; I'm not sure yet).

So that's a brief overview of the process model of Mojo, please ask any questions you have or point out any absurdities I'm instigating.

Thanks!

Wednesday, June 10, 2009

It's been a little while since my last update; I got distracted by a different XNA project that I haven't yet documented. I'm going to finish up the InputWrangler discussion, then move on to things from this new project, which is pretty heavily involved with HLSL.

So, according to the previous post, we have defined a Control which has a name and one or more inputs it has to poll to know its status. The messy part is that while Microsoft has made it easy to poll a single input, such as a game pad's left shoulder button using GamePad.GetState(players[1]).Buttons.LeftShoulder, this doesn't translate well to polling a control's state dynamically. The property access is very specific, but we need a way to tell a control to poll this input specifically without having to go through a huge switch statement of all possible inputs every time. We need a dynamic way to assign the polling code to something.

One basic way we can do this is to create an interface that polls an input, then create an implementing class for each input we want to make available.

public interface InputTrigger
{
    float getInput(GamePadState gpState);
}
public class LeftShoulderButton : InputTrigger
{
    public float getInput(GamePadState gpState)
    {
        return gpState.Buttons.LeftShoulder == ButtonState.Pressed ? 1.0f : 0.0f;
    }
}

Then the control would only need a List<InputTrigger>, initialized at startup with the right InputTrigger-implementing class instances, and it could test all of its triggers by walking the list:

List<InputTrigger> inputTriggers = new List<InputTrigger>();

public void InitInput()
{
    inputTriggers.Add(new LeftShoulderButton());
}

public float Poll(GamePadState gpState)
{
    float value = 0.0f;
    foreach (InputTrigger it in inputTriggers)
    {
        float input = it.getInput(gpState);
        if (value == 0.0f)
            value = input;
    }
    return value;
}

This works just fine: the control only has to poll the input triggers it's interested in, without having to walk an entire list of all possible inputs to get the ones it wants. But it's still inefficient; we have to create a class for each input, and a class instance for each input we wish to poll. There must be a way to assign the getInput() method directly to something without the class surrounding it. And there is!
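The destination we're heading toward, storing the polling code itself rather than a wrapper class, is easy to show in a language with first-class functions. A TypeScript sketch (GamePadState here is a hypothetical stand-in for XNA's struct, not the real type):

```typescript
// A trigger is just a function from pad state to a float, so no
// one-class-per-input boilerplate is needed.
interface GamePadState {
  leftShoulderPressed: boolean;
  rightShoulderPressed: boolean;
}

type InputTrigger = (gpState: GamePadState) => number;

const leftShoulder: InputTrigger = (s) => (s.leftShoulderPressed ? 1 : 0);
const rightShoulder: InputTrigger = (s) => (s.rightShoulderPressed ? 1 : 0);

const inputTriggers: InputTrigger[] = [leftShoulder, rightShoulder];

// First non-zero trigger wins, mirroring the Poll() loop above.
function poll(gpState: GamePadState): number {
  for (const t of inputTriggers) {
    const v = t(gpState);
    if (v !== 0) return v;
  }
  return 0;
}
```

C# gets the same effect with delegates, which is where this post goes next.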

Most modern languages support a programming pattern called anonymous functions (not methods, because methods are part of a class, which these are not). In short, anonymous functions allow a program to assign code to a variable dynamically at run time. In pseudo code an anonymous function would look something like this:

var aMethod = function(x){ return x + x; }
print aMethod( 5 );
aMethod = function(x){ return x * x; }
print aMethod( 5 );

The results of running this code would unsurprisingly be:

10
25

But for strongly typed languages such as Java and C# this simple syntax won't work. Among other problems, how is the return type defined, and what defines the parameters? C# answers these questions with delegates (and, as of C# 2.0, anonymous methods you can assign to them). To use delegates and assign them to variables you must first define a delegate type:

delegate float TriggerType(GamePadState gpState);

This solves the problem of how to define the return type and parameters for our Anonymous Function. Once you have a type you can create variables that will hold Anonymous Functions which conform to that type:

private TriggerType triggers;

And assign code to them:

triggers = delegate(GamePadState gpState)
{
    if (!gpState.IsConnected) return 0.0f;
    if (gpState.Buttons.LeftShoulder == ButtonState.Pressed)
    {
        return 1.0f;
    }
    return 0.0f;
};

Now the control can test its trigger by simply calling triggers(gamePadState), and it doesn't have to know anything at all about what input is being polled. But what if we have more than one trigger? Do we still need to create a List of trigger delegates? No. That's one of the cooler things (and more dangerous things) about delegates: they stack. You can add another delegate to be evaluated after the first one like this:

triggers += delegate(GamePadState gpState)
{
    if (!gpState.IsConnected) return 0.0f;
    if (gpState.Buttons.RightShoulder == ButtonState.Pressed)
    {
        return 1.0f;
    }
    return 0.0f;
};

Now both LeftShoulder and RightShoulder will be polled on one call to triggers(gamePadState). I call this dangerous because it's an example of hidden functionality: by simply looking at the triggers variable there is no way of knowing how many delegates are assigned to it, or what all of them might do, and that can cause unexpected behaviors that would be a pain to debug. Use with caution. There is also a problem here: we will only ever see the last value returned, the test for RightShoulder; the LeftShoulder result is forever hidden (another example of hidden functionality and the problems it can cause). To solve this we're going to have to pass our return value along to any following triggers so they can decide whether their results are more important than the results already rendered. For the purposes of this example, we'll make it so that if the previous result is greater than 0 we leave it alone, but if it is 0 we overwrite it with our own result.

The problem is that there is no way I know of for a delegate to see the return value of a delegate that executed before it. So we're going to have to use a reference parameter to pass the value along. The code changes like this:

delegate void TriggerType(GamePadState gpState, ref float value);

triggers = delegate(GamePadState gpState, ref float value)
{
    if (value > 0 || !gpState.IsConnected) return;
    if (gpState.Buttons.LeftShoulder == ButtonState.Pressed)
    {
        value = 1;
    }
};
triggers += delegate(GamePadState gpState, ref float value)
{
    if (value > 0 || !gpState.IsConnected) return;
    if (gpState.Buttons.RightShoulder == ButtonState.Pressed)
    {
        value = 1;
    }
};


Notice the use of += on the second assignment: the new delegate is added to triggers without removing the first one. When delegates are chained like this, C# invokes them in the order they were added. Now we poll our triggers by passing in a variable that will be set to the resulting value: triggers(gamePadState, ref ourValue). We'll get the value of the first poll that sets something other than 0. We only have 1 as a possible value in these samples, but analog stick triggers can return float values between 0.0 and 1.0, and mouse triggers can return any value from 0 up to the height or width of your current screen.
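TypeScript has no multicast `+=`, but the accumulator pattern can be sketched with an array of handlers that each inspect the value set so far (all names here are mine; the plain `value` parameter stands in for C#'s `ref float value`):

```typescript
// Hypothetical stand-in for XNA's GamePadState struct.
interface GamePadState {
  connected: boolean;
  leftShoulderPressed: boolean;
  rightShoulderPressed: boolean;
}

// Each trigger receives the value so far and returns the (possibly
// unchanged) value, so earlier results are never silently lost.
type Trigger = (gpState: GamePadState, value: number) => number;

const triggers: Trigger[] = [];

triggers.push((s, v) =>
  v > 0 || !s.connected ? v : s.leftShoulderPressed ? 1 : v
);
triggers.push((s, v) =>
  v > 0 || !s.connected ? v : s.rightShoulderPressed ? 1 : v
);

// Run the whole chain; the first trigger to set a non-zero value wins.
function pollTriggers(gpState: GamePadState): number {
  return triggers.reduce((value, t) => t(gpState, value), 0);
}
```

The reduce makes the "pass the result along the chain" idea explicit, where the C# version hides it inside the ref parameter.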

There's one last special case we need to deal with. Even Microsoft wasn't big on the idea of creating a polling property for each and every possible key on all possible international keyboards, so keys are polled through a single method that takes a Keys enum value: keyState.IsKeyDown(Keys.Down). This is great: we only need one delegate to handle all possible keyboard keys, and we can parse the Keys value from the Trigger value in our configuration file. We create a method to build these delegates for us:

// A keyboard flavor of our trigger delegate type.
delegate void KeyTriggerType(KeyboardState keyState, ref float value);

static KeyTriggerType BuildKeyTriggerDelegate(Keys keyIn)
{
    return delegate(KeyboardState keyState, ref float value)
    {
        if (value > 0) return;
        if (keyState.IsKeyDown(keyIn))
        {
            value = 1;
        }
    };
}

This exposes something particularly tricky about delegates. Notice how the keyIn parameter is never passed to the delegate directly; the delegate uses it via the context of the BuildKeyTriggerDelegate call that created it, and that context survives for the life of the delegate! You might think that because the BuildKeyTriggerDelegate method is static, multiple key trigger delegates would share the same context and collide with one another, but such is not the case: each call to BuildKeyTriggerDelegate creates its own unique context for the delegate it builds.
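The same context-capture behavior is easy to demonstrate outside C#. A TypeScript sketch (buildKeyTrigger and the Set-of-pressed-keys representation are my own simplification, not the XNA API):

```typescript
// Each call to buildKeyTrigger creates a fresh closure over keyIn,
// so two triggers built from the same factory never collide.
function buildKeyTrigger(keyIn: string): (pressed: Set<string>) => number {
  return (pressed) => (pressed.has(keyIn) ? 1 : 0);
}

const downTrigger = buildKeyTrigger("Down");
const upTrigger = buildKeyTrigger("Up");
```

Even though both triggers came from the same function, each remembers its own keyIn; that is the "unique context per call" described above.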

There, now our initialization code can read the configuration file and build controllers with delegate trigger chains that execute quickly and efficiently. We essentially moved the huge switch statement from the runtime code to the initialization code, which is much better.

All that being said, while doing a little research for this post I found that behind the scenes the C# compiler is very likely creating wrapper classes around all our delegates anyway, so there is probably no runtime advantage between our version that created class instances for each trigger and the one with delegates. Oh well.

Tuesday, April 28, 2009

Input Wrangling Part 2

So in the previous post we defined a Control which has a name and one or more inputs it has to poll to know its status. The messy part is that while Microsoft has made it easy to poll a single input, such as a game pad's left shoulder button using GamePad.GetState(players[1]).Buttons.LeftShoulder, setting things up so that which input gets polled is dynamically configured is, if not complex, rather wordy. The reason is that each control has to be able to poll each and every possible configured input, and there are over 25 of those, not including individual keyboard keys, so you end up with a long switch statement or if/else-if chain that looks like this:

foreach (InputSettings.Trigger trigger in triggerList)
{
    bool status;
    switch(trigger.inputType)
    {
            ...
        case "LeftShoulder":
            status = gpState.Buttons.LeftShoulder == ButtonState.Pressed;
            break;
        case "RightShoulder":
            status = gpState.Buttons.RightShoulder == ButtonState.Pressed;
            break;
            ...
    }
    // deal with input type status conflicts here
}

As you can see, each Control has to loop over this massive switch statement for each of its triggers; there must be a more efficient way. What we'd like to do is store the inputs themselves in an array or list so we can poll them directly without having to sift through a lot of inputs we don't want first. Unfortunately you can't store a reference to an input, and Microsoft strongly discourages storing a reference to a state instance, stating it's important to get a new instance every time you want to poll an input. What we can do is store the code that does the polling itself, independently.

One way to do this is to create small classes that hold one input poll each, such as:

    public abstract class InputPoll
    {
        public abstract float poll(GamePadState gpState);
    }
    public class LeftShoulder : InputPoll
    {
        public override float poll(GamePadState gpState)
        {
            return gpState.Buttons.LeftShoulder == ButtonState.Pressed ? 1.0f : 0.0f;
        }
    }
    public class RightShoulder : InputPoll
    {
        public override float poll(GamePadState gpState)
        {
            return gpState.Buttons.RightShoulder == ButtonState.Pressed ? 1.0f : 0.0f;
        }
    }

Then you have code that creates a list of these input triggers for each controller at instantiation time:

    List<InputPoll> inputTriggers = new List<InputPoll>();
    foreach (InputSettings.Trigger trigger in triggerList)
    {
        switch(trigger.Type)
        {
            case "LeftShoulder":
                inputTriggers.Add(new LeftShoulder());
                break;
            case "RightShoulder":
                inputTriggers.Add(new RightShoulder());
                break;
        }
    }

Then on each update the Control only has to go through its list of trigger inputs to determine its status:

    foreach(InputPoll inputTrigger in inputTriggers)
    {
        status = inputTrigger.poll(gpState);
        // deal with input type status conflicts here
    }

What this has really accomplished is moving the big ugly switch statement from the update code to the initialization code, which is a good thing. We can, however, do a little better.

Next:  The joys and dangers of C# delegates

Wednesday, April 15, 2009

Input Wrangling Part 1

Input is one of, if not the, most critical parts of a game; bad controls can ruin an otherwise great game. Fortunately XNA makes getting at game controller states very easy, with statically available objects such as GamePad, Keyboard and Mouse providing state information. What XNA does not provide is a way to map this raw hardware state information into some sort of control scheme. I know that being able to remap input configuration is not something console gamers are used to, or would use if it were available. But I know I'm going to be playing with the control setup throughout development, and I would love to avoid having to go to each place a control is referenced in the code to make changes each and every time.

Apparently the developers of Spacewar felt the same way, as the project includes some control mapping. But it always felt somewhat wrong to me, more inconvenient than it should be. It didn't really strike me what was wrong with Spacewar's control mapping implementation until I started thinking about how I would change it. The fact is, it seems to be written backwards. To understand what I mean by this, let's look at a little design philosophy.

Every software program is divided into sections to various degrees. In games you typically have the graphics engine, sound, game logic, AI, UI, data access, etc., and each of these sections relates to the others in certain ways. One way to look at these interactions is to assign roles to the parts of the program which define how they relate. One common example is the Service/Client pair of roles. For example, the data access code for a game (the content pipeline) provides a Service to the other parts of the game, which are its Clients. The graphics engine Client can make a request of the content pipeline Service for a given graphic image and the Service will provide it.

Thinking about it this way can help you design your systems to be most functional. One important fact about the Client/Service relationship to keep in mind is that the Client always defines the interactions: the Client has needs, and the Service is to provide for those needs as conveniently as possible. If the Service starts imposing rules on the Client, the behavior of the Client (in this case the game) will suffer.

While Microsoft's input state objects are easy to use, they are not necessarily convenient once you start adding multiple players and different control types, and because they are properties up to three levels deep (e.g. gpState.ThumbSticks.Left.Y) they do not lend themselves well to abstraction. Any input service should hide this inconvenience. Spacewar's implementation does nothing for this; in fact it imposes its own almost-as-restrictive format (XInputHelper.GamePads[player].ThumbStickLeftY). It allows player indexing and one-to-one keyboard-key-to-GamePad-button mapping, but little else. In short, XInputHelper and GamePadHelper do a fair amount of work to provide relatively little benefit to their client, the game.

My goal here is to walk us through designing an input Service called the InputWrangler that provides more benefits to the game with less work for us. Follow me now as we take a mental walk through an informal design session...

Problem: My game has 4 thrusters that need to be turned off and on via some user input, input over a network, or possibly by an AI bot. Multiple inputs should be able to trigger a given thruster event (Dpad buttons, mouse control, keyboard keys, etc. etc.). There will be up to 4 different players. The game code should be able to do something as simple as Inputs.Player1.ThrusterUp and not care how that value got set.

Let's say these thrusters are controls; the game can have as many controls as it needs, like Thrusters, Guns, Jump, Menu, etc. Let's call the input devices triggers, so the GamePad left shoulder button is a trigger just as the Enter key on the keyboard is a trigger, and we can make an AI hook into the input service as another kind of trigger.

Solution: The client of the service (the game) has Controls, which need to have multiple Triggers.

What the Game needs to know:
  • What the value of a given control is

What the Game does not need to know:
  • What trigger(s) are affecting a given control

What the InputWrangler needs to know:
  • What controls there are
  • What triggers are available
  • Which triggers will affect which controls

What the InputWrangler does not need to know
  • How the controllers are used

It's okay for the game to intrinsically know what the controls are; they are part of its basic structure. But it is not okay for the InputWrangler to have built-in controls, because different parts of the game have different control needs and they shouldn't have to deal with each other's inputs. For example, the Menu screen needs a Select control but doesn't need the Thruster controls from the GamePlay screen.

Let's look at an example of what my current game play needs, in its simplest state, with only two controls:

Game Play Controllers & Triggers

ThrusterUp
  • Trigger GamePad DPadUp
  • Trigger GamePad ThumbStickLeftY
  • Trigger Keyboard UpArrow
ThrusterDown
  • Trigger GamePad DPadDown
  • Trigger GamePad ThumbStickLeftY
  • Trigger Keyboard DownArrow

We need to get this information into the InputWrangler, and we need to do it in such a way that different screens for the game can define different control setups. This looks like a good job for configuration XML files. Here's what I would like the control file for the above configuration to look like:

<InputSettings>
  <Inputs>
    <Players>
      <Player>
        <Controls>
          <Control name="ThrustUp">
            <Triggers>
              <Trigger controller="GamePad" value="DPadUp"/>
              <Trigger controller="GamePad" value="ThumbSticksLeftY"/>
              <Trigger controller="Key" value="Up"/>
            </Triggers>
          </Control>
          <Control name="ThrustDown">
            <Triggers>
              <Trigger controller="GamePad" value="DPadDown"/>
              <Trigger controller="GamePad" value="ThumbSticksLeftY"/>
              <Trigger controller="Key" value="Down"/>
            </Triggers>
          </Control>
        </Controls>
      </Player>
    </Players>
  </Inputs>
</InputSettings>

While the simple menu screen configuration would look like this (you can only hit Play right now):

<InputSettings>
  <Inputs>
    <Players>
      <Player>
        <Controls>
          <Control name="Play">
            <Triggers>
              <Trigger controller="GamePad" value="A"/>
              <Trigger controller="GamePad" value="X"/>
              <Trigger controller="Key" value="Enter"/>
              <Trigger controller="Key" value="Space"/>
              <Trigger controller="Mouse" value="LeftButton"/>
            </Triggers>
          </Control>
        </Controls>
      </Player>
    </Players>
  </Inputs>
</InputSettings>

Next: InputWrangler implementation, the joys and dangers of C# delegates.

Wednesday, April 8, 2009

Ah, Physics!

The goal this time is to simply hook in the JigLibX physics engine, add some gravity and have the ship fall. Then put in a height map, probably the exact one from the JigLibX demo program and have the ship land on that.

Hooking in the JigLibX engine seems pretty simple, you initialize a new base physics object and set some configuration settings like gravity:

            physicSystem = new PhysicsSystem();
            physicSystem.CollisionSystem = new CollisionSystemSAP();
            physicSystem.EnableFreezing = true;
            physicSystem.SolverType = PhysicsSystem.Solver.Normal;
            physicSystem.CollisionSystem.UseSweepTests = true;
            physicSystem.NumCollisionIterations = 5;
            physicSystem.NumContactIterations = 15;
            physicSystem.NumPenetrationRelaxtionTimesteps = 20;
            physicSystem.Gravity = new Vector3(0f, -5f, 0f);

Every object you want to have a physical presence in the world then needs an instance of a CollisionSkin, which defines how the object interacts with others in the physical world, and if it moves (as opposed to being immobile like a heightmap) the CollisionSkin needs an instance of a Body object. Then you can add the object to the physics engine using a static method. To create a completely default object in the world you would do something like:

Body body = new Body(); // just a dummy. The PhysicObject uses its position to get the draw pos
CollisionSkin  collision = new CollisionSkin(body);
PhysicsSystem.CurrentPhysicsSystem.CollisionSystem.AddCollisionSkin(collision);

The use of the static method worries me; I may want more than one physics system running at a time for some effects, and this could complicate that. For now it works fine. I added the physics initialization code to the constructor of my PlayScreen (no need to do it at the Game level, since not all screens will need physics) and created a PhysicalSceneItem object which creates the Body and CollisionSkin and automatically adds itself to the physics system. The PlayerShipSceneItem is now a child of that base class; it uses the PlayerShipMeshShape to adjust the collision item to match the shape and size of my ship (a simple rectangle for now) and sets some sensible Mass and Material properties such as surface friction, roughness and elasticity.

An important thing to remember is that on every Update call to the PlayScreen the physics engine needs to be updated, and then the PhysicalSceneItem, in its Update call, needs to adjust its Shape's translation and rotation to match that of its physical Body instance. Otherwise nothing will ever visually happen. This can be tricky because the Shape's location information is stored in a matrix, so if we had to adjust it we'd have to take it apart into translation, rotation and scale using Matrix.Decompose, make the adjustments, and then put it back together. Luckily we don't have to do that now (though I'll have to do it later), because we don't care where the Shape was; we just want it to be where the Body says it is now. The Body uses matrices as well, but keeps them separated into translation, scale and orientation, so we build the Shape's new matrix by multiplying them together into one:

                shape.World = Matrix.CreateTranslation(-center) *
                              Matrix.CreateScale(scale) *
                              body.Orientation *
                              Matrix.CreateTranslation(body.Position + center);

The 'center' Vector3 lets an object have a center point around which it rotates that is not the physical center of its shape. I'm not using it right now, and I'm not sure how it will behave with the physics engine... I might just pull it out.

So all put together, compiled and run and the ship obediently falls away from the camera! Yay! Now for the height map.

I pretty much brought the height map code directly from the JigLibX demo program. This setup uses a custom content pipeline to read in a 257x257 grayscale bitmap, which it translates into a 3D Model and an array of heights that are stored in the Content directory. At run time the game loads the Model and the heights and creates a native JigLibX Heightmap instance. One potential problem I see immediately is that the generated model is over 4 megabytes in size; given how big I want levels to be, how many levels I want, and that I want to come in under 50 megabytes total for the entire game, this is going to have to change. I'll have to store just the bitmaps (the sample is 42KB stored as a .png) and create the Models at level load time. For now I'll leave that on the TODO list; I just want to see it in my game.
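The load-time conversion I'm planning amounts to mapping gray values to heights. A hedged sketch of that step in TypeScript (row-major byte array and the maxHeight scale factor are my own assumptions, not anything from the JigLibX pipeline):

```typescript
// Convert an 8-bit grayscale image (row-major byte array) into a 2D
// height array, the way a load-time heightmap builder might.
function buildHeights(
  pixels: Uint8Array,
  width: number,
  height: number,
  maxHeight = 30 // assumed world-units scale
): number[][] {
  const rows: number[][] = [];
  for (let y = 0; y < height; y++) {
    const row: number[] = [];
    for (let x = 0; x < width; x++) {
      // 0 (black) -> ground level, 255 (white) -> maxHeight.
      row.push((pixels[y * width + x] / 255) * maxHeight);
    }
    rows.push(row);
  }
  return rows;
}
```

Doing this at load time means only the small .png ships with the game, trading a little startup work for megabytes of content size.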

Hooking it up was once again rather simple. I made a HeightmapMeshShape and a HeightmapSceneItem derived from PhysicalSceneItem, initialized them in the PlayScreen constructor, fired up the game... and that's when things went sideways.

Literally sideways. I was expecting to be looking down on the heightmap with the ship falling away from the camera and landing in the center of the map. Instead I got a nice, if unexpected, side view of the heightmap. Apparently 'which way is up' isn't the same for my fledgling game and the native JigLibX heightmap object. JigLibX itself is neutral as far as subjective directions go, as a good physics engine must be (there is no 'up' in space), but the heightmap object has a very definite idea: 'up' is along the positive Y axis, while I've been setting my game up to use positive Z as 'up'. Also, the JigLibX Heightmap ignores its rotation matrix. Hrm, change my game or the JigLibX Heightmap?

I spent a couple of hours trying to change the Heightmap. The pipeline wasn't too hard; I got the mesh rotated and showing properly, and I think I got the collision part of the Heightmap playing along. I'm now looking down on it in the game and the ship falls into it and hits it. But the ship's resulting movement from the hit is all wonky. It goes careening off in odd directions and eventually falls through the world.

I give up. My game is in its infancy and changing it will be much simpler than mucking about in the depths of the Heightmap physics object. It'll take me some time to get used to Y being up and X, Z being map coordinates, but I'll cope.

Once I'd undone my changes and moved and re-oriented the camera, my ship fell and bounced off the ground. Yay!

One lingering thing: I went into 3DS Max, realigned my ship model to the new Y-is-up thinking, re-exported it, replaced the one in my Content directory and re-built the project. But the ship was still oriented wrong. I had made some other changes to the model, and those showed up, so I know the new version was imported. I've tried several other things, and still the ship is oriented wrong. I suspect the Max .fbx exporter is 'helping' me by re-orienting the model the way it thinks I want it, knowing that Y and Z axis flipping can be an issue. I'll have to figure out how to stop it from doing that.

Next: Player Input Wrangling

Wednesday, April 1, 2009

Finding A Way To Make Things Fall Down

I'm all for leveraging any engines or libraries that are available to simplify a task. The only good reasons to re-invent the wheel are if you have very specific needs that aren't met well by what's available, if you need a degree of optimization that a general-case system can't provide, or if you have the desire and the time to learn to do it yourself. For this game I want realistic-feeling physics: I want the player's steam-powered hover ship to move and react in realistic ways (that sounds contradictory, doesn't it?), and I want to be able to knock things over and have things explode in satisfying ways. All that means a decent and fairly robust physics engine. That's not something I have the time to build myself, and I'm sure there are a number of free or reasonably cheap ones available in the community for C# by now.

My requirements are:
Available source (I'm going to want changes and special cases)
3D focused, 2D will not work for this
Rectangle and Sphere primitives at minimum
Mesh collision detection
Joints: spring joints and joint limiters a bonus
At least minimal surface friction

Googling around, I've found there really isn't that much in the way of physics engines for C# yet. A number of 2D libraries seem fairly well along, but the 3D realm is pretty limited. Fortunately the JigLibX library, a C# translation of the C++ JigLib, appears to fit my bill almost perfectly. There doesn't appear to be native support for spring constraints, but the Car object in the demo is doing pretty much exactly what I need, and a look through the sample application's code is encouraging. I'll import this into my project and see how it goes.

Now to get something on the screen. I've built and textured a simple sample ship (really a fancy cube) in an older version of 3DS Max, exported it as .fbx, copied it and its texture over to my Content folder and imported the model into my project. Now I need to make a class that will represent this shape.

Hmmm. There appear to be two kinds of objects in the Spacewar game that represent something in the game and hold position information: there are SceneItems, which can be added to, and are managed by, Screen instances, and there are Shapes, which are given to SceneItems. I'm not entirely sure what this duplication is for: why you would create a SceneItem, then create a Shape, then give the Shape to the SceneItem and the SceneItem to the Screen instance. Upon inspection it seems the SceneItem-derived classes are responsible for the behavior of game objects, how they move and how they collide with one another, while the Shape-derived classes describe what gets drawn on screen. This division would allow you to have game objects that look different but behave the same, such as multiple types of asteroids. The problem I see with this is that most objects in a game that look different also behave at least slightly differently. The collision bounds for asteroids should differ based on the shape of the asteroid, for example, and different-sized asteroids would react differently to a collision. I can see this leading to a situation where you have an AlienSceneItem class with huge switch statements (or if-else chains) that change the behaviors based on what Shape it's loaded with, creating a Gordian Knot of logic that would be far better served by breaking it out into separate classes. I'll leave it this way for now, but if I end up with close to a one-to-one ratio of SceneItems to Shapes I'll collapse them into one base class.
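As I read it, the division looks roughly like this. A sketch with simplified names and members of my own invention, not the template's actual code:

```csharp
// Shape: what gets drawn. SceneItem: how it behaves. A SceneItem 'wears' a
// Shape, so the same behavior can be paired with different visuals.
abstract class Shape
{
    public abstract void Draw();
}

abstract class SceneItem
{
    protected Shape shape;
    public float X, Y, Z;                        // position, used by both roles

    public SceneItem(Shape shape) { this.shape = shape; }

    public abstract void Update(float elapsed);  // movement, collision response
    public virtual void Draw() { shape.Draw(); }
}
```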

So I've made my PlayerShipModelShape and my PlayerShipSceneItem. I initialize them at the start of my PlayerScreen, put the Shape in the SceneItem and the SceneItem in the Screen, so everything is getting Update and Draw calls as it should. Now I just want to see it on the screen. Alas, the shader is unhappy. The shader for Spacewar is more advanced than I need it to be right now; it's got 14 parameters, and while most of them are pretty obvious (world, worldViewProjection, viewPosition, etc.), I don't really want to spend the time loading all its textures and setting its params just right when I'm going to have to write my own shader later anyway. I'm certainly not ready for that now.

I rip out the Spacewar shader and all of its params, copy in some BasicEffect-using code from somewhere, and lo and behold nothing shows. Double-check the camera location and the ship location... ah, the default Spacewar camera is at z = 500 units, but my ship is 1.5 units across. My ship is there, it's just smaller than a pixel right now; everything in the Spacewar universe must be much bigger than my ship. I move the camera to 5 units and there's my ship, nice and textured and everything.

Next: Fun With Physics, and Which Way is Up???

Monday, March 23, 2009

On Keeping Things Manageable

Before we get started I need to do a little...

[rant]

Let's talk about project organization a little: not code organization, but where the files holding the code go. I know the standard C# starter project puts the initial code file in the root directory, and that's fine for the main code file and any other really important code, but when there start to be more than 5 .cs files (or .c or .cpp or .js or whatever) in the root, it's time to get organized.

Somebody started the Spacewar game template with good intentions: there are 4 code directories in the project with 15 base classes in them. Then someone else (I'm assuming) came along to finish the project and completely ignored this organization, creating 22 more source files in the root, many of them holding classes that are children of the base classes in the folders. This is the kind of thing that makes Build Masters and maintenance programmers openly weep.

Come on, it's not that hard to create a new class file in a folder, or to drag and drop files into one afterwards. It took all of 5 minutes to move the child classes into their parents' directories and create new folders for those without. Visual Studio doesn't care where the files are as long as it can find them, and you'll probably just leave them all open and use Ctrl-Tab or the Active Files drop-down to select them anyway.

A little organization, and build masters and anyone who has to jump into your project late in the game will thank you.

[/rant]

The first code I decided to tackle was the state machine that controls the screen graph, used to manage which object(s) are controlling the graphics device at any given time. On start-up you're in a Splash Screen state with the SplashScreen object instance being run; after a set amount of time the SplashScreen changes the state to MainMenu, the MainMenuScreen takes over, and so on.

The state machine used by the Spacewar template is rudimentary, without transitional states to aid in starting up and shutting down a given state. So I created a new State base class with internal STARTING_UP, RUNNING and SHUTTING_DOWN sub-states. This should help in situations where entering or leaving a screen isn't a single-frame task, such as when you want to fade out or show a loading screen.
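A sketch of what I have in mind for the base class. The sub-state names are the ones above; the phase hooks and the return-true-when-done convention are just one workable arrangement, not the template's code:

```csharp
enum SubState { STARTING_UP, RUNNING, SHUTTING_DOWN }

abstract class State
{
    public SubState Sub { get; protected set; }
    public bool Finished { get; protected set; }

    protected State() { Sub = SubState.STARTING_UP; }

    // Start-up and shut-down return true when done, so a fade or a loading
    // screen can stretch the transition over as many frames as it needs.
    protected virtual bool UpdateStartingUp(float elapsed) { return true; }
    protected abstract void UpdateRunning(float elapsed);
    protected virtual bool UpdateShuttingDown(float elapsed) { return true; }

    public void BeginShutdown() { Sub = SubState.SHUTTING_DOWN; }

    public void Update(float elapsed)
    {
        switch (Sub)
        {
            case SubState.STARTING_UP:
                if (UpdateStartingUp(elapsed)) Sub = SubState.RUNNING;
                break;
            case SubState.RUNNING:
                UpdateRunning(elapsed);
                break;
            case SubState.SHUTTING_DOWN:
                if (UpdateShuttingDown(elapsed)) Finished = true;
                break;
        }
    }
}
```

A plain screen overrides only UpdateRunning and gets single-frame transitions for free; a fading screen overrides the start-up/shut-down hooks and returns true once the fade completes.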

The screen graph itself is interesting: the main game class contains no graphics drawing code at all, though it still sets up and initializes the GraphicsDevice. Instead it passes all Update and Draw calls to an instance of a Screen-derived class stored in a currentScreen variable. Thus a Screen class is like a little XNA application all on its own; it can handle its own input and logic and present its own experience.

Screens can also hold an instance of another Screen in an instance variable called overlayScreen, which is drawn after its holding Screen and can itself have an overlayScreen. This lets you create something similar to layers in a drawing program. For example, the Spacewar template project has a nice animated background of a nebula in its Evolved game-play mode; if this had been implemented as a Screen (it wasn't) then it could have served as a background Screen 'layer' for the Evolved game-play, the main menu screen, the ship selection menu, the weapon selection menu and so on, simply by making it the first Screen in the graph.
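The overlay chain is easy to picture as a sketch (overlayScreen is the variable name from the template as described above; the recursive Draw is my own illustration of how the chain plays out):

```csharp
abstract class Screen
{
    public Screen overlayScreen;        // the layer above this one, or null

    protected abstract void DrawSelf();

    // Draw bottom-up: this Screen first, then everything layered on top,
    // which is what makes the chain behave like layers in a drawing program.
    public void Draw()
    {
        DrawSelf();
        if (overlayScreen != null)
            overlayScreen.Draw();
    }
}
```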

Initializing the main menu with the nebula background would look a little something like this:

currentScreen = new NebulaScreen();

currentScreen.overlayScreen = new MainMenuScreen();

So the active Screen graph is simply:

NebulaScreen->MainMenuScreen

I really like this because it can greatly simplify some complex UI situations for a game, for example I'm expecting my game to use 4 layers in play mode:

GamePlayScreen->HUDScreen->PauseMenuScreen->CursorScreen

The PauseMenuScreen and the CursorScreen will be inactive during normal play but easy to 'switch on' when the game pauses, with the game play still visible in the background, and I plan to be able to reuse them both for other menus in the program.

I changed the game class's “Screen currentScreen;” variable into “List<Screen> currentScreens;” to simplify things in my own mind and so Screens need not be responsible for the layer above them.
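With the list, the game class just walks the layers in order. A sketch assuming Screen exposes Update and Draw; the ScreenStack wrapper is a name of my own, not the game class itself:

```csharp
using System.Collections.Generic;

abstract class Screen
{
    public abstract void Update(float elapsed);
    public abstract void Draw();
}

class ScreenStack
{
    // Front of the list is the bottom layer, so draw order equals layer order
    // and no Screen has to know what sits above it.
    public readonly List<Screen> currentScreens = new List<Screen>();

    public void Update(float elapsed)
    {
        foreach (Screen s in currentScreens) s.Update(elapsed);
    }

    public void Draw()
    {
        foreach (Screen s in currentScreens) s.Draw();
    }
}
```

Switching the pause menu and cursor layers on becomes a matter of adding them to the list, and off again a matter of removing them, with the game-play Screen underneath untouched.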

Note that if you have different Screens rendering 3D objects, then which object is drawn 'in front' of another is up to the GraphicsDevice's depth buffer, not the draw order. So things drawn on a later 'layer', as I've been calling them, can still show up behind things drawn earlier if they sit at a deeper depth in the buffer.


Next up: In which I pick a physics engine and try to get something 3D on the screen, running headlong into Spacewar's 'simple' shader while stumbling around in the dark (or, to be more accurate, in the cornflower blue).