Wednesday, December 16, 2009

WPF & Windows Classic Theme Gotcha

The checks/bullets in a checkbox or radio box under the Windows classic theme are colored with the default foreground. If the foreground is white (or transparent), the check is invisible. This doesn't affect other themes.

So, what I'm saying is..

Wrong:

<RadioButton Foreground="White" Content="Option One"/>

Right:

<RadioButton IsChecked="True">
   <Label Content="Option One" Foreground="White"/>
</RadioButton>

Wednesday, November 25, 2009

Migrated to the Windows API Pack, started work on both Z80 and 68k

Continuing my long tradition of changing directions, I spent some time moving all my code off of SlimDX and over to the Windows API Pack. So far it's apples to apples, but moving forward I'd expect the MSFT wrapper to change its APIs less, respond to bugs faster, and provide better resources when it comes to debugging.

The default help file, however, is only marginally more useful than what SlimDX provides, but at least it links back to the DirectX pages on MSDN.

There's a lot more than D3D in the API pack. I really want a tablet so I can play with the accelerometers and light sensors, though the next toy to play with will be some of the new Explorer extensions, especially the thumbnailing.

So I got bored of the framework stuff, and started fleshing out the emulation core itself. I added BCD instructions to my 6502, along with XML files describing a 6507 and a 6510 (the 6510 is still short all its I/O features), and started both a Z80 emulation and a 68k.

The 6507, of course, is for Atari 2600 support, which seems a pretty soft target. The 6510 will be for Commodore 64, though the odd thing is, despite having grown up on the C64 and knowing the thing inside and out, I can't really psych myself up to write a full emulator for it. But I do plan on adding SID support, and there are SID files that actually run in BASIC, and some that use the VIC chip for timing, so even a fully functional SID player is a 90% complete C64 emulator. If I went that far, why not finish it?

But I think the platform I'm most eager to target is Genesis, just because there seems to be this sense that the 68000 is some kind of monster of a beast to emulate, when I see it as one of the more straightforward CPUs ever released. I shunned SNES for Genesis, since the SNES is basically all custom hardware, and any code I wrote couldn't really be reused. I want to target Neo Geo if and when I get Genesis done, and emulating its graphics hardware on the GPU would actually be pretty fun.

The Z80 can take me lots of places too: SMS, Game Boy, PlayChoice-10 support for my NES emu, and a handful of Big N's arcade boards I'm interested in, like Donkey Kong, Popeye, etc.

I'd really like to go after PCE/TG16, as it's really a pretty neat machine, but I'm having a hell of a time finding any really detailed technical documentation on it.

Of course, I'll code it all myself, because that's the whole fun.

I think I may be the only person to write any sort of emulator from scratch in a managed language. Everything else I've come across is a half-assed port with lots of hacky pointer-to-array stuff and Buffer.BlockCopy junk.

That's a great way to make a slow emulator and perpetuate the myth that managed code is slow. I'm trying to prove the opposite (even if only to myself), and I think I've done that, for the most part.


And, oh yes, I have the WPF content embedded into D3D10 very nicely now, performance is even more than acceptable, and I still have some really good options for further optimization.

Sunday, November 22, 2009

Messing around with UI Automation

This might be a better way to send events to the 'invisible' UI controls.

Another way might be to take the approach used by 3DTools for WPF's Media3D API (which is horribly slow and ugly): it transforms the visual itself in 2D space and overlays it transparently over the 3D version of it. Thus, all mouse tracking happens natively. This might not be so slow, since I'm still in charge of rendering.



Forget that stuff... The trick to getting clicking to work right turned out to be crawling through the visual's logical tree and setting every ButtonBase's ClickMode property to Press. Then the MouseUp/MouseClick/MouseDown chain of fake events works fine.
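For reference, a sketch of that tree crawl (the method and class names here are mine):

```csharp
using System.Windows;
using System.Windows.Controls.Primitives;

static class ClickModeFixer
{
    // Recursively walk the logical tree and force every ButtonBase
    // (Button, ToggleButton, RepeatButton...) into ClickMode.Press,
    // so a synthesized MouseDown alone is enough to fire Click.
    public static void ForcePressClickMode(DependencyObject root)
    {
        var button = root as ButtonBase;
        if (button != null)
            button.ClickMode = ClickMode.Press;

        foreach (object child in LogicalTreeHelper.GetChildren(root))
        {
            var d = child as DependencyObject;
            if (d != null)
                ForcePressClickMode(d);
        }
    }
}
```

Note that LogicalTreeHelper can hand back plain strings (text content), hence the cast check.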

10NES

I've spent the last day gutting and redesigning and rebuilding the UI for my emulator, and I think I finally struck on a UI scheme that will work out the way I want.

Since this is (at least) the 10th rewrite, and the solution is now in Visual Studio 2010, with the rendering done in Direct3D 10, I've decided to name it 10NES, which happens to be the name of the fabulous piece of technology that made the NES's power button blink.

My new UI 'technology' has evolved from all my efforts to integrate the emulator into WPF.

Since this project's raison d'être is allowing users to easily define and apply advanced post-processing effects to existing NES games, it really necessitates a very rich UI. From the beginning, I'd used WPF to create my UI pieces, using the model-view-viewmodel approach, as I really see no reason to ever do anything else. The quest really became how to mix WPF with 'snazzy game graphics' in a way which doesn't absolutely crush performance.

The original version of my emulator simply put the output into WPF's WriteableBitmap and threw it on the screen. The performance wasn't awful - the emulator has to render the output in software, of course - but I was at the mercy of WPF's composition engine. When the mouse was moving over any UI pieces, the emulator would choke and stutter hard. Adding any sort of animation to WPF places a heavy demand on the CPU; I've found that WPF can hit as hard as Crysis, depending on the type of machine it's running on. Still - I had all the UI I wanted, and I didn't have to cobble it together out of triangles by hand.

So I started nosing around for another way to draw the output, and at the same time was trying to get a version running under Linux to be able to run it on my netbook. So OpenGL made good sense. I whipped up a WPF control to host OpenGL (via the Tao Framework) and placed it in there. The main problem here is that WPF and OpenGL both need low-level access to the video card, and they can't both own the same pixels. The upshot is that I can't overlay any WPF content over the OpenGL.

Direct3D would have the same problem, but the WPF team was kind enough to introduce the D3DImage class, which can take a Direct3D surface and composite it inside WPF like any other bitmap. Sounds fantastic, but the rub here is, Direct3D 9 doesn't share its junk with other applications on the video card. The surface has to be copied out into regular system RAM, passed over, and re-uploaded as a new texture. This would get up to 3 frames per second if I was lucky, and keep the CPU pegged at 100% the whole time.
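The basic D3DImage handoff looks something like this (a sketch; how you get the IDirect3DSurface9 pointer out of your renderer is elided):

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Interop;

static class SurfaceHost
{
    // Point a D3DImage at a Direct3D 9 surface so WPF can composite
    // it like any other bitmap source.
    public static void ShowSurface(Image imageControl, IntPtr surfacePointer)
    {
        var d3dImage = new D3DImage();

        d3dImage.Lock();
        d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surfacePointer);
        // Each frame you re-Lock, AddDirtyRect over the changed area,
        // and Unlock so the compositor picks up the new contents.
        d3dImage.AddDirtyRect(new Int32Rect(0, 0, d3dImage.PixelWidth, d3dImage.PixelHeight));
        d3dImage.Unlock();

        imageControl.Source = d3dImage;
    }
}
```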

Microsoft extended this behavior, and created Direct3D9Ex, which does share its junk with other applications on the video card - however this is only possible with the new driver model introduced with Vista (WDDM), and only with Direct3D 10 capable hardware. I was still on Windows XP, and gave up pretty quick trying to make it work. Also, D3D9 really is a hot wet mess of bullcrap.

Around this time, however, I decided to play with the XNA framework, and used the same sort of tricks I used to host OpenGL inside a WPF window to host an XNA 'game' inside the window. This was clunky; XNA is heavily tied to its Game class, and that Game class is a Form. So every time it would start up, the form would display quickly, my application would steal away its draw context, and kill it. There was also frustration trying to stop the Game class from drawing its frames at its own fixed times (i.e., call game.Tick() myself), which led to lots of choppy output with the game object calling Draw out of sync with my emulator presenting something to draw. Also, the XNA framework would only expose a Direct3D9 surface, not a Direct3D9Ex, so even if I upgraded to Vista I'd never be able to overlay content with any sort of performance.

There was a side jaunt into using SDL around this time - it turned out to be the fastest option on my netbook and its smelly hippy Linux OS, but I didn't spend much time on it.

So my main focus fell back onto OpenGL: it offered the best cross-platform support, and an API I've long been familiar with. It's verbose, and all that verbosity can add up when each verb has to be marshalled across COM boundaries, but it was OK. About this time I started playing with fragment shaders to index and color the pixels from my emulator, saving a whole bunch of operations on the CPU side.

Then summer happened and I stopped working on it.

After summer happened, I decided to upgrade my home machine to Windows 7, because it kicks serious ass, and is the first Microsoft OS to actually perform *better* than the last one on the same hardware. I got sick of my ancient Radeon X800, and invested 40 bucks to upgrade to a newer Direct3D 10.1 capable card.

I went back to work on my emulator again, but there was a problem. OpenGL was broken now! I couldn't create a framebuffer on my home machine, I couldn't create a fragment shader on my work machine. Whether it was broken at the Tao bindings, the OS, or the driver layer I didn't bother to figure out. I could now embed Direct3D content into the application, so I went to try that.

Hosting Direct3D9Ex inside WPF performed about as well as I expected it would... that is, as well as WPF's WriteableBitmap did. I could overlay WPF content, but the compositor would take over and make the emulator stutter whenever I was trying to use stuff. Having a multi-core CPU really puts an end to it, but honestly, that's a lot of hardware to throw at such a simple app.

So I decided to mess around with Direct3D 10, and started falling in love with the API almost immediately. It's so spartan and free of nonsense: you send your shaders to the GPU, then you stream in your resources and geometry, and the video card does all the work. While WPF's D3DImage cannot take a D3D10 surface directly (why not?), you can render a D3D10 surface in D3D9, then host that in WPF. I never actually tried that, though, because at that point I was sick of competing with WPF for drawing time.

Around this point, I started really messing with pixel shaders, and moved all of the drawing logic out of my emulator and onto the GPU. I originally did this as an experiment to see if I could use the GPU to accelerate my emulator - and it worked, more than halving the CPU load. But there's a cool side effect.

Now the GPU knows everything about the Nintendo, and can layer in whatever effects it wants in a game-context-aware manner without costing the CPU even one measly clock cycle.

This immediately became what I wanted to do (real water and real clouds in 1943, enhanced lighting effects to really bring home the psychological horror of war in Contra, and so on). So I realized that all the other experimentation and nonsense had to end; I actually have a goal now, and something I want to release instead of just dork around on when I'm bored.

So I thought about what I actually wanted from WPF. I wanted the ease of creating rich UI components, the data binding, some of the automation features. I didn't want the compositor, however.

So I struck upon the idea of loading my XAML out of thin air without any host container, and somehow sending it to Direct3D 10 - going completely the other way. This way I'm in control of when and how the UI is drawn, even in full-screen exclusive mode. Going fullscreen in any of my previous schemes was a major PITA, as doing it properly required tearing down whatever my renderer was, destroying the WPF window, then recreating the renderer with exclusive access to the video hardware.

So to implement this, I created a subclass of Direct3D10's Texture2D class (well, SlimDX's class that exposes it), which holds onto a WPF visual. When I want to update the visual, I Measure() and Arrange() it to the size of the texture, render the visual to a RenderTargetBitmap, and use the CopyPixels function to send the pixels in question over to the D3D texture, which can then be composited.
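The update path sketches out roughly like this (a standalone sketch with my own names, not the actual subclass; the byte buffer is what then gets uploaded to the D3D10 texture):

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class VisualCapture
{
    // Lay the visual out at the texture's size, render it to a
    // RenderTargetBitmap, and copy the raw BGRA bytes out.
    public static byte[] Capture(UIElement visual, int width, int height)
    {
        visual.Measure(new Size(width, height));
        visual.Arrange(new Rect(0, 0, width, height));

        var rtb = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
        rtb.Render(visual);

        var pixels = new byte[width * height * 4]; // 4 bytes per Pbgra32 pixel
        rtb.CopyPixels(pixels, width * 4, 0);      // stride = width * 4
        return pixels;
    }
}
```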

It's a simple scheme, it works well, and performance is really good, but the rub here is interactivity. My WPF controls don't actually exist on the screen; only a picture of them does. So I need to translate mouse and keyboard events back to them. This part gets tricky, and so far involves:

1) finding out which texel on screen was clicked, by finding the intersection of the ray cast from the camera's location through the mouse click with the geometry, then finally the texel on the intersected face in that geometry. OK, I'm cheating so far since everything is 2D and mapped into screen space, but eventually I'll need this. This isn't a big deal; I can copy and paste this out of any old "how to make k-rad games at home" book.

2) Taking the texture coordinates from step 1, mapping them to 'WPF visual' space, then doing a HitTest against the visual to find the child element affected by the event.

3) Using the RaiseEvent() method on that element, to raise the appropriate event.

2 and 3 seem simple, but they're a pain in the ass, and involve a lot of crawling up the visual tree to find something I can actually interact with properly. For instance, if the hit test hits a button, it won't necessarily return a button. It will return the ButtonChrome decorator which is over the button. So sending ButtonBase.Click to this won't work.

Crawling up the tree to find the parent of type ButtonBase and calling Click on that will work; however, it won't fire any Command bound to that ButtonBase, so I check to see if there is a command bound to it, and if there is and its CanExecute returns true, I Execute it. Ditto with clicking on a checkbox: it doesn't do what I want it to do, so I manually set IsChecked - which properly updates its bindings.
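A sketch of what the ButtonBase case boils down to, assuming the point has already been mapped into the visual's space (names are mine):

```csharp
using System.Windows;
using System.Windows.Controls.Primitives;
using System.Windows.Input;
using System.Windows.Media;

static class FakeClicker
{
    // Hit-test the offscreen visual, crawl up from whatever chrome or
    // decorator was hit until a ButtonBase turns up, then fire its
    // bound Command by hand.
    public static void ClickAt(Visual root, Point pointInVisualSpace)
    {
        HitTestResult hit = VisualTreeHelper.HitTest(root, pointInVisualSpace);
        if (hit == null) return;

        DependencyObject node = hit.VisualHit;
        while (node != null && !(node is ButtonBase))
            node = VisualTreeHelper.GetParent(node);

        var button = node as ButtonBase;
        if (button == null) return;

        ICommand command = button.Command;
        if (command != null && command.CanExecute(button.CommandParameter))
            command.Execute(button.CommandParameter);
    }
}
```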

So basically at this point I'm writing a whole lot of code looking for specific UI elements, and calling specific methods on them to get the functionality I want. I have a feeling I could make all this trouble go away if there were a way to make my own virtual MouseDevice, because I think the fact that I have to attach the default physical mouse to the events is my main problem - WPF's input framework is smart enough to know that the mouse really isn't over that control. Unfortunately, I can't just subclass MouseDevice.

But, for all the hassle, it is working better. I'm in complete control of what to draw and when, and I can interact with the UI with the emulator running and not have it miss a beat. In the end there are really only a handful of base types I need to be able to interact with, so it won't be that bad.

It will however, be heavily tied to the windows platform, so I had two other ideas:

1) I'm not sure if it's possible, but if I could do the same thing using Silverlight 3 in the background instead of WPF, then the Moonlight project would give it longer legs.

2) I could create my own state manager and classes for UI primitives (buttons, checkboxes, etc.), and basically use WPF during the build phase to create a set of sprite sheets for these UI primitives by rendering each control in each state. Then WPF itself just becomes part of the content creation pipeline, and doesn't exist in the final product (only the pretty visuals do). This would be very cool, as I could reuse the same UI on any platform I chose, and create a native Linux application that doesn't look like a bag of smashed-up frog anuses. I think this (or something like it) will be the way I want to go, eventually.

So now that I've outlined what I've done and what I want to do, I can refer to this post every time I stray off on one of my tangents (like this entire post).

10NES needs to be released at midnight of 2010, or else everything is lost and I will need a new clever name! So I got work to do.

Thursday, November 19, 2009

This is great

http://windowsteamblog.com/blogs/developers/archive/2009/05/18/windows-7-managed-code-apis.aspx

Managed access to Direct3D 10, 10.1, 11, DXGI, DXUT, Direct2D, and DirectWrite. From Microsoft.

With documentation and useful samples too! And helpful Microsoft folks to answer your questions, instead of an IRC channel full of douchebags.

Now, hopefully this works well, because I can't wait to toss SlimDX out.

Thanks, MSFT, and not to look a gift horse in the mouth... XAudio2, please. OpenAL (through the Tao Framework) is a little CPU-heavy for my taste, and I don't want dependencies on SlimDX anymore.

I guess I could just move the little bit of sound code I need over to unmanaged code (as well as the waveform generation, bandlimiting, and resampling), but I've already splintered this project in too many directions at once.

Wednesday, November 18, 2009

Burned by this:

byte b = 14;
object o = b;
int? i = (o as int?);


i is null, so i.Value throws, and GetValueOrDefault returns the default.

This is weird because a cast from a byte to an int (or back) is always valid. But unboxing is stricter: you can only unbox to the exact type that was boxed, and the implicit byte-to-int conversion doesn't apply to a boxed value, so 'o as int?' yields null when o holds a boxed byte.
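For the record, the fix is to unbox to the exact boxed type first:

```csharp
using System;

byte b = 14;
object o = b;

int? wrong = o as int?;             // null: the box holds a byte, not an int
int right = (int)(byte)o;           // unbox as byte, then widen to int
int alsoRight = Convert.ToInt32(o); // or let Convert sort it out
```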

So watch out what types your data readers and whatnot are actually returning.

(Note: this is happening because of a crazy stored procedure returning TINYINT when it should have been BIT, or even INT)

Tuesday, November 17, 2009

D3D10 Sprites, or how I learned to stop worrying and love undocumented and misleading CLR wrappers

So in my noodlings around the world of D3D10 with the illustrious SlimDX as my guide, I came upon the need to pop up some debugging type info on the screen. Though sprites are somewhat of an outmoded concept, they are a convenient way to throw up a rectangle full of gunk, so I figured I'd go that route.

Now, the sprites I remember are a rectangle defined against screen coordinates, with junk drawn in it. In DX9 you would declare them with the sprite's offset and size, in pixels.

But this is D3D10, and I'm using it through SlimDX, which has no documentation or method of support (other than going on IRC so smug wannabe know-it-alls who can't answer your question can mouth off and pretend that the question is beneath them -- note to library developers: do not have an IRC channel, OK? Everyone on IRC is an asshole and you will just lose support).

So anyways, I find a nice Sprite class, so I go to new one up. It wants a Device, and that's cool - I've got that - and it wants an int called 'bufSize'. What is bufSize, you ask? That's what I asked!

So the Sprite class is only the interface to D3D for drawing sprites; the sprites themselves are defined in a class called SpriteInstance (which is how SlimDX wraps the sprite structure in D3D10). The Sprite class can draw an array of SpriteInstances in either immediate or buffered mode. That bufSize is how many sprites it should buffer in buffered mode.

So once I realized I needed to make a SpriteInstance, I went ahead and new'ed up one of those. It takes three parameters: one of type ShaderResource - which binds it to the texture it's to draw - and two more of type Vector2D. The parameters are called 'coordinates' and 'dimensions'.

So I foolishly assumed coordinates and dimensions were the sprite's location and size, in pixels, just like in the old APIs. Boy oh boy, I couldn't have been more wrong!

After examining the D3D10 API, and realizing that SpriteInstance mapped back to the sprite structure, I discovered that the texture you are attaching the sprite to is a sprite sheet. Coordinates is the offset in the sprite sheet to where your sprite starts, and dimensions is the size of the image data in the sprite sheet that you want. Both are specified in texture coordinates (0 to 1).

So that's fine and good, now I have a sprite square in the middle of the screen, drawing what I want. But that's not where I want it, and that's not the size I want it to be. So now what?

Well, you apply a Transform to the SpriteInstance, of course.

By default the Sprite class draws against screen coordinates (which go from -1 to 1). The default center of a sprite is its actual center (width/2, height/2), and not the top left corner like one might be used to. So, translating the sprite by -1 on the x axis effectively puts the center of the sprite on the edge of the screen, so half of it is clipped.

You can set the view and projection matrices on the Sprite class to change the default behaviour, but since my whole app is dealing in screen coordinates anyways, I never bothered.
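Putting it all together, something like this (written from memory -- I haven't verified these exact SlimDX method names, so treat it as pseudocode for the sequence described above):

```csharp
// Assumes 'device' is the D3D10 Device and 'textureView' is the
// ShaderResourceView over the sprite sheet.
var sprite = new Sprite(device, 0); // bufSize only matters for buffered mode

// One sprite covering the whole sheet (texture coordinates run 0..1).
var instance = new SpriteInstance(textureView, new Vector2(0, 0), new Vector2(1, 1));

// Screen space runs -1..1 and the sprite is centered by default, so:
// halve it, then nudge it into the top-left quadrant.
instance.Transform = Matrix.Scaling(0.5f, 0.5f, 1.0f)
                   * Matrix.Translation(-0.5f, 0.5f, 0.0f);

sprite.Begin(SpriteFlags.None);
sprite.DrawImmediate(new[] { instance });
sprite.End();
```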

So, lessons learned about Sprites in D3D10:
- designed to work off of a sprite sheet, created with coordinates and dimensions based off texture coordinates in said sprite sheet
- by default they live in screen space
- good for quick little popups and such, like the MessageBox.Show or Console.WriteLine of the D3D debugging space, but if you really need "sprites" in your game or app, you are better off drawing and texturing quads yourself. This way you will:
  a) be doing the work on the GPU,
  b) have the flexibility of applying whatever shader effects you want,
  c) keep the ability to rearrange your geometry down the road when you decide those rectangles don't have enough zazz

Tuesday, November 10, 2009

Wrongkey Wrong

This is wrong:



My new GPU accelerated NES emulator

So I finally set about dumping tile rendering and sprite rendering tasks for my emulator onto the GPU, with some interesting hurdles.

I've been laying the groundwork for doing this for a while, as I've rearranged how CHR ROM/RAM work with an eye towards offloading them to the GPU.

For tiles, the basic routine is:

The emulator sends CHRROM and RAM to the video card as a texture.

The emulator creates a 'pixel information map' which it sends to the video card, also as a texture.

The pixel information map is essentially a texture in which each texel represents an onscreen pixel, but rather than sending the decoded color data, I fill the values with the information I will need to properly decode the pixel. It's a sort of deferred shading for the NES. A pixel information "pixel" has 16-bit components (D3D surface format R16G16B16A16_UNorm) and looks like this:

a: ppuByte0/ppuByte1 - 8 bits each. I pack the current values of the bytes representing these two registers.
r: CurrentXScroll, CurrentYScroll - 8 bits each
g: CurrentPaletteID - 8 bits for palette ID, which identifies which shadowed palette this pixel is associated with (more on this later), 2 bits to identify the current nametable (0-3), 6 bits for future expansion
b: CurrentBankSwitchID - I allow this to have the full 16 bits.
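The packing itself is trivial; a sketch (the high/low ordering is my own convention here, as are the names):

```csharp
static class PixelInfoPacker
{
    // Each channel of R16G16B16A16_UNorm holds 16 bits, so two 8-bit
    // values fit per channel: the a channel gets ppuByte0/ppuByte1,
    // the r channel gets the x and y scroll, and so on.
    public static ushort Pack(byte high, byte low)
    {
        return (ushort)((high << 8) | low);
    }
}
```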

To decode CHR ROM, I use an array of 16 ints which represent the start position of each 1K segment. As bankswitches happen during frame rendering, I push these arrays onto a stack, and it is that stack which is sent to the GPU each frame. This way the CHR ROM only needs to be sent once, and can be 'mapped in' to nametable or pattern table RAM virtually, on the graphics device.

There could be hundreds or even thousands of bankswitches during a frame; it's all up to whatever shenanigans the cart's mapper can pull. If the cart was switching every pixel, it could be 61440 entries.
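Resolving a CHR address through that 16-entry table is just a shift, a mask, and a lookup (a sketch; the names are mine):

```csharp
static class ChrMapper
{
    // bankStarts holds the start offset of each 1K bank in the full
    // CHR ROM image; the same arithmetic works on either CPU or GPU.
    public static int ResolveChrAddress(int[] bankStarts, int address)
    {
        int bank = (address >> 10) & 0xF; // which 1K segment (0-15)
        int offset = address & 0x3FF;     // position inside the segment
        return bankStarts[bank] + offset;
    }
}
```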

Palette RAM is shadowed via a similar scheme: every time it is written to during a frame, all 32 entries are pushed onto a stack, and the 'current palette' index is what goes to the video card, along with the palette stack as a texture. This doesn't happen as often as bankswitches, and when it does, there's not a lot of writing going on, since the programmer has to disable the NES PPU during rendering to be able to fiddle with palette RAM. However, it is done in some more advanced games and in a lot of demos.
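A sketch of that shadowing scheme (class and field names are mine):

```csharp
using System.Collections.Generic;

class PaletteShadow
{
    private readonly byte[] palette = new byte[32];

    // Every snapshot on the stack goes to the GPU as a texture, and
    // each pixel-info texel carries the index it should use.
    public readonly List<byte[]> Stack = new List<byte[]>();
    public int CurrentPaletteId;

    public void Write(int address, byte value)
    {
        palette[address & 0x1F] = value;       // 32-entry palette RAM
        Stack.Add((byte[])palette.Clone());    // snapshot all 32 entries
        CurrentPaletteId = Stack.Count - 1;
    }
}
```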

Nothing happens for tiles in the vertex shader.

The pixel shader's algorithm is like so. Note: the pixelInfo, paletteShadows, and bankSwitchShadow textures must all be sampled in point mode! Interpolating user-specific data like that is not going to work.

1) fetch and decode the pixel information texel. Remember your data is normalized into the 0 to 1 space, so you need to make it an integer again.

2) check ppuByte1 bit 3; if it is false, set the pixel index to 0

3) else, get the current nametable, apply x and y scroll values, calculate tile index

4) fetch the two bytes representing the tile's line for this Y coordinate, and extract from each byte the bit representing this pixel's lower two bits

5) fetch the associated attribute byte, shift it left by 2, and OR it with the pixel.

6) I now have a 4-bit pixel representing an entry in the NES's internal palette. Look that up in the palette texture, based on its current palette ID.

7) the result is now an HSV value in the NES's own special format, which is converted to RGB using a function I wrote called DecodePixel, which decodes the NES's HSV information, gets its color information, converts it into YIQ format, and then finally to RGB.

This last step does what the NES does, though it may seem redundant when a simple palette lookup would get you to the same place. There are good reasons to let it walk through the color spaces, however. For one, this allows me to tweak color, tint, saturation, brightness, and contrast, just like I would on a real old-timey TV set. It also allows for any other image processing effects to take place in these spaces. Blooming and lots of popular modern-day effects start with a luminance map, and I'm essentially getting one for free.
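Step 1's decode, sketched on the CPU side for clarity (the real version lives in the shader, and the packing order here is my own convention):

```csharp
using System;

static class PixelInfoDecoder
{
    // The sampler hands each channel back as a float in 0..1, so
    // scale by 65535 and round to recover the packed 16-bit integer,
    // then split out the two 8-bit halves.
    public static void Unpack(float channel, out byte high, out byte low)
    {
        int packed = (int)Math.Round(channel * 65535.0);
        high = (byte)(packed >> 8);
        low = (byte)(packed & 0xFF);
    }
}
```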


Note on performance: tiles aren't too bad, but there is a lot of texture sampling going on - 1 for the pixel information, 2 for the pattern entries, 1 for the attribute byte, and 1 for the current palette. The only flow control needed is to test whether tiles are to be drawn at all, and the 5 texture samples aren't a big deal at all.


Sprites are a whole other story, and I'll tell it a whole other time. In short, if I were to take a per-pixel approach, that gives me no choice but to evaluate all 64 sprites to find one visible on that pixel. That's up to 128 texture fetches for the pattern entries alone, let alone another 64 for the attribute bytes. It can be done, but for frig sakes this isn't Crysis, and I want to leave some GPU left over for the purposes of pure, raw zazz. Not only is this a perf problem, there's also no way for it to be technically correct, as there's no way to "only evaluate 8 sprites per scanline". The best I could do is evaluate 8 per pixel, but the pixel next to it could have a whole other 8 sprites.

I'm still working on the sprites routine, and I'll follow up with that once it works.

Sprite and tile pixel fetching were the two biggest heavy hitters in my emulation, so I'm eager to get this working, so I can slice and dice out all of the legacy code and see how much more badass it is to use 320 vector processors instead of one dumpy old Pentium 4.

Wednesday, October 28, 2009

Weird WPF WindowStyle behavior

I have a WPF window full of some WPF 3D elements, and I decided that when I maximize my WPF window, I'd like it to do more of a 'fullscreen' display and remove the various window dressings, so I did this:

protected override void OnStateChanged(EventArgs e)
{
    base.OnStateChanged(e); // don't swallow the base behavior

    if (this.WindowState == WindowState.Maximized)
    {
        this.WindowStyle = WindowStyle.None;
        this.ResizeMode = ResizeMode.NoResize;
    }
}

This takes the borders and title bar off of my window, and it goes ahead and fills up the screen, respecting the Windows taskbar's "auto-hide" and "on top" properties.

Only, the performance when animating my 3D objects was suddenly horrible; my CPU usage went through the roof, and the smooth animations I did have were suddenly < 1 FPS.

Changing the WindowStyle.None to anything else (SingleBorder, for example), didn't exhibit this problem.

I'd seen this terrible behaviour before, when playing with transparent windows: when you set WindowStyle to None and AllowsTransparency=True, all of your window's content is blended with the rest of your desktop, forcing a slow software rendering pass which has to test each and every pixel as it renders your desktop back to front - a heavily fill-rate-limited operation. It's a cute effect, but slow and worthless.

So I wondered, could something be setting AllowsTransparency=True internally, when I flip the WindowStyle = None switch? I placed AllowsTransparency="False" in the Window element of the .xaml file, and there was no change in behaviour.

I tried setting AllowsTransparency=false in code, right after changing the WindowStyle, and this throws an exception - as this property can only be set before the window is shown. So I put this line in the constructor, before InitializeComponent().

This fixed the issue - on my home machine - however, the problem appears to remain on my machine at work.

This is weird all around. Explicitly setting it in code before InitializeComponent() works, even though it is already explicitly set in the XAML - which does nothing. It works on a 32-bit Windows 7 machine with an ATI 4650 AGP (WDDM 1.0 Vista drivers), but seems to have no effect on a 64-bit Windows 7 machine with an NVidia card (WDDM 1.0).

I haven't peeked under the hood, but I'd chalk this up to Windows 7 growing pains, probably a bug in one or the other driver set. This isn't the right way to do fullscreen with my emulator anyway (I should give the monitor over completely to DX), but it's certainly weird.

Thursday, October 22, 2009

Shades of Gray

More of a note to self than anything else. After lots of head scratching with a shader .fx file I wrote, which compiled fine with fxc but would puke when loaded with the Effect.FromStream() method in SlimDX, I eventually hit upon resaving it as ANSI (from Unicode), and it worked.

Looks like this isn't unique to SlimDX, but an issue with the underlying CreateEffectFromXXX call to D3D9.

So:

UTF-8, Unicode - NO
ANSI - YES


Update: Visual Studio saves Unicode by default. A real fine how-do-you-do from Redmond.
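A quick way to batch-fix the files (a sketch; note Encoding.Default is the system ANSI code page on .NET Framework):

```csharp
using System.IO;
using System.Text;

static class FxResaver
{
    // Read the file (ReadAllText honors any BOM) and write it back
    // out in the ANSI code page, which is what the D3D9 effect
    // compiler will actually accept.
    public static void ResaveAsAnsi(string path)
    {
        string text = File.ReadAllText(path);
        File.WriteAllText(path, text, Encoding.Default);
    }
}
```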

Tuesday, October 20, 2009

WPF ResourceDictionary Insanity!!

So the short backstory is that I was messing around with the Media3D library, and had a bunch of .xaml files containing Model3DGroups as top-level elements. So I decided to merge them all into a resource dictionary.

I originally had some code to pull the models out, that looked something like this:

var p = TryFindResource("Sword") as Model3DGroup;
if (p != null) icon.Model = p;

Now, the XAML was littered with x:Name="" attributes, which are not allowed in a ResourceDictionary (but are allowed in a local Resources group). So the behavior I expected from a Try... method would be to return null, rather than to throw an exception.

However, it did throw an exception because of the x:Name="" attributes in the XAML. I stepped past the code in the debugger and let it continue running because I wanted to see something else, and lo and behold - my model was there on the screen. The one that threw the exception.

So, creation of the Model from the resource throws an exception, but doesn't fail. TryFindResource therefore does not return null.

In the end, this crazy code works, and the type string will pop out on the console, instead of the null you'd expect if the resource sort-of fails, even though the dictionary entries are malformed:

Model3D p = null;
try
{
    p = TryFindResource("Sword") as Model3DGroup;
}
finally
{
}
Console.WriteLine(p);


This is not recommended, of course, and for the record I cleaned up the xaml in my dictionary.

This isn't specific to the Media3D elements; it works the same for any old UIElement. I'm not sure what other sorts of exceptions get thrown but don't cause the resolution to fail, and I'm not sure if this behavior is by design or a bug - but I wouldn't rely on it. If the resource were referenced in XAML somewhere, the exception would cause the app to fail - even though the object that threw it is doing just fine.

One thing I do know, is that having to wrap a try around a Try___ is something that really grinds my gears.

Monday, May 18, 2009

FishbulbNES

This is what I've been working on lately:

http://fishbulbnes.googlecode.com

It's an NES emulator written in managed C#.

Having recently purchased a netbook, I decided it needed a Mono (Gtk#) build, so I'm working on that - more to the point, a common UI framework to bridge the gap between the existing WPF-based code and the newer Gtk#-based stuff.

More information to follow as it progresses towards a binary release; however, updates here are likely to be sparse, though I intend to try to flesh out the project's wiki with as much info as I can think of.

Blogging really isn't my thing. I'm a coder.