Category Archives: Developing

Posts related to the development of software.

XNA Audio Troubles…

So as I was adding sound effects to the game, I noticed that over time it would consume more and more memory and start to slow down. Another thing that would happen is that the audio would drag out and get sloooooooooooooooooow.

Now, I consider myself a pretty decent developer, and I took steps to ensure that objects are reused where possible instead of being new-ed up over and over again. (.NET may be an environment that takes care of a lot of the dirty work of managing memory, but memory is still a limited resource, and you need to code accordingly.)
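The game’s code is C#, but the reuse pattern itself is language-agnostic. Here is a minimal sketch of the idea in C++, with a hypothetical Particle type standing in for bullets, explosions, and the like:

#include <deque>

// A minimal reuse pool: recycle dead objects instead of new-ing fresh
// ones every frame. (Hypothetical Particle type; the same pattern
// applies to any short-lived per-frame object, managed heap or not.)
struct Particle { float x, y, dx, dy; bool alive; };

class ParticlePool
{
public:
    Particle& Acquire()
    {
        // Reuse a dead particle if one is available...
        for ( size_t i = 0; i < m_items.size(); ++i )
        {
            if ( !m_items[ i ].alive )
            {
                m_items[ i ].alive = true;
                return m_items[ i ];
            }
        }

        // ...and grow only when we must. (std::deque keeps existing
        // references valid on push_back, unlike std::vector.)
        m_items.push_back( Particle() );
        m_items.back().alive = true;
        return m_items.back();
    }

    void Release( Particle& p ) { p.alive = false; }

private:
    std::deque<Particle> m_items;
};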

I should have taken the slowed sounds as a hint, but I went down the path of tracking objects and Garbage Collections to make sure that the object caches were healthy and the few objects that are destroyed and recreated are cleaned up in an orderly fashion as well.

After going through all that, I realized that the XACT-related classes were the culprits. If I turn off the sound, memory usage remains stable, even after running the game (in its attract mode) for more than 12 hours. It normally started to get slow after 45 minutes or so.

Looking up memory-related issues with audio in XNA, I read that other people have had a similar problem, but only when they were not calling Update(…) on the Audio Engine during each frame. However, I AM calling Update(…) and am still having the problem. I even added additional calls to Update(…) from other locations in the code, but to no avail.

I even tried manually managing each Cue instance by storing them in a collection when they are played and then Dispose()-ing them when they are done playing. No difference.

I have temporarily resolved the issue by manually cleaning up and reallocating the Audio Engine at certain times during the game — between the end of a level and the start of a new one, and whenever the Title Screen transitions to the next screen when the game is in attract mode. Kludgy? Yep! But it works, and until an updated version of the XNA Framework is available that possibly fixes this bug, it will have to do.

Oh, and I used The GIMP to create a starburst graphic and am briefly displaying it as a “superluminal flash” (the warp equivalent of a sonic boom) when the player’s ship goes to warp. It looks sweet, even if I do say so myself! 🙂

XNA Wonderfulness

So having some extra time on my hands, I have started playing around with XNA Game Studio again (version 3.0 this time). I picked up a shooter tutorial created by PHStudios which gives you a functional, but very simple game.

I set about creating the shooter that I have always wanted to write. It was going to look and behave much like an old arcade shooter, right down to the details. For example, when it first starts, it will show a garbage screen quickly followed by an alphanumeric test (with some sprites in there), grids and colors, and a ROM and RAM test (with successful results, of course), and then jump into attract mode waiting for someone to coin it up.

I have started on this dream by taking the shooter tutorial and making extensive changes to it, including adding the following features:

  • Background Stars with multiple behaviors and effects (stopped, moving, warp, etc.)
  • Improved collision detection (only objects “near” each other are checked, and multiple collision rectangles are used for better checks)
  • Multiple enemies, including mini-bosses and bosses
  • Bonus items that “fall” that give you health, weapons, points, etc.
  • Additional weapon types
  • “Buddy” ships that dock to your ship for additional firepower
  • Auto-pilot/behavior for the enemy ships (i.e. what they do when they come on screen)
  • A “story” manager that manages each game level (i.e. what ships appear when and what their behavior is, also handles “waves” of attacks)
  • Explosions
  • Sound
  • A console that you can enter commands on to change/adjust the game’s behavior
  • And more!

Some of the features could be useful to others (like the background stars, the console, and the input manager), and I will release them free-for-use once I am done.

I am starting to think that I may be able to actually sell it once it is complete. I think that the retro arcade behavior of the game may appeal to some, and I hope that I can make it fun and challenging enough to get people interested.

Oh, and before I forget, there is a really nice free tool called sfxr that can create sound effects. I am using it for the retro-like sounds in the game. I have also created a bunch of different types of sounds that I will likely release for free as well, just as a time-saver for others.

Another Downside of Browser-Based Apps

I once again find myself having to use a web-based application. This is often just a fancy name for a bloated set of code that provides little more functionality than a set of CGI scripts could.

The beauty of CGI apps was that they were often very succinct – they were used to process relatively small amounts of data that were entered in a small form.

Today’s applications give you multiple form fields and expect you to enter larger amounts of data. Some even play fancy DHTML tricks to allow you to dynamically add more fields so you can enter even larger sets of data. Nice, huh?

But what happens if that server goes down while you are entering all that data? Or if the people operating the site did not consider just how long you might spend entering data into one of their pages? You usually have no warning while you are typing, and you do not find out that you are about to lose everything you entered until you press [Submit] and get back an error — too late!

Now, I do not script web pages, so I could be wrong about this… But we live in a world where we can play all kinds of fancy AJAX tricks, so why the HELL do web scripters (not developers — that term is reserved for people that do more than just write fancy client-side script) not add a little AJAX code that periodically hits the server to (1) make sure it is still alive, (2) check for impending session timeouts, and (3) do other things that make web apps appear more robust and professional?

Having a warning that the server has gone down before I submit some data would be great – I could copy my data to Notepad and then get it back when the server comes back. Now, this would be harder for pages that contain too many fields, but that is another indication that your app needs a better platform.

IIRC, even 3270-based form-style applications could handle server disconnections better than today’s equivalent browser-form-based applications — at least they had an icon on the status line to indicate session state! That is the ironic part about this… not only did we take a giant step backward in UI evolution, but we completely missed the robustness that those older applications had.

Hurray for progress!

Remember kids – while lots of applications can work in a (D)HTML/AJAX browser-based interface, not all of them can work WELL in that interface. Read up on what happened when someone tried to port Lotus 1-2-3 to a 3270-style interface… Wanna guess how well that went?

Assumption is the mother of all Fuckups…

So I find myself in the middle of a posting frenzy regarding a story on The Daily WTF: http://thedailywtf.com/Comments/A-Problem-at-the-Personal-Level–More.aspx.

The point of my posts was that by withholding the assumptions behind his “one right answer” kind of question, the interviewer put the interviewee in a bad position. (The link above explains the scenario.) IME, in the absence of specific details, one is likely to draw upon their experience to formulate a solution.

So I read that, according to the interviewer, the one right answer was to use a move operation to relocate the complete data file to where the watcher was looking for it. Of course, my first question was whether the move was atomic or not. Far too many posters claimed that it always(!) was; other, more intelligent ones indicated that it should be.

So my first post there was a question about different filesystems. For example, the average Linux system can support many different filesystem types — ext2, ext3, FFS, UFS, ReiserFS, FAT32, NTFS, etc. — and can have filesystem locations on different partitions, drives, and even network locations… So what if the source and destination locations are not on the same device/partition? Are the moves still atomic? My experience with both Linux and Win32 filesystem driver code tells me no, so that is what I posted, pointing out that the assumption that everything lives on one filesystem/partition needs to be made explicit.

This post started to draw out lots of interesting people… One started talking about how the POSIX specification states that renames (and moves?) must be atomic, but did not know enough to realize that some systems play fast-and-loose with the specification (Hello, Windows!). Another started talking about how the rename(…) syscall (the syscall!) is atomic. Well, duh — most C-style functions are synchronous… it returns to the caller only after the rename (or move) is complete, but that does not mean that the behind-the-scenes action is atomic to an outside (filesystem) observer.
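To make the cross-device point concrete, here is a POSIX-style sketch (hypothetical paths; assume the source and destination live on different partitions):

#include <cstdio>
#include <cerrno>
#include <cstring>

int main()
{
    // rename() is atomic only within one filesystem. Across devices it
    // fails outright with EXDEV, and the "move" must then be done as a
    // copy followed by a delete - a sequence an outside observer (like
    // the file watcher in the story) can see in a half-finished state.
    if ( std::rename( "/tmp/data.file", "/mnt/data/data.file" ) != 0 )
    {
        if ( errno == EXDEV )
            std::printf( "Cross-device move: %s\n", std::strerror( errno ) );
    }
    return 0;
}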

It amazes me how so many people just do not “get it.” Maybe I am just not a good communicator…

Or maybe these people really should stay away from a keyboard as much as possible… 🙂

Software – Robust vs. Works

I got myself involved in a thread on The Code Project Forums that led to the idea of Robust software, and one of the posters said:

I call an application robust if it’s in use in the real world for a lengthy period of time and no data has been lost and no one has experienced downtime as a result of a bug in that app.
That’s the bottom line in the real world.

(Obviously, this developer never accidentally opened a binary file in Notepad — which it will happily let you do, and which is a very quick way to end up in the “data has been lost” scenario.) To which I responded that I was glad he did not live in my real world, because the criteria he provided were a very low target, in my opinion.

However, upon further reflection, while I still may not want developers like that in my world, I would just love to live in his! Imagine – not having to worry about things like data consistency across distributed systems, ordering, synchronization, and distribution of asynchronous events! Not having to think about the amounts of money that could be affected just by one little bug in the code somewhere because hey! – it did not lose any data, and the trader kept on using the system, right?! That would be heaven!

It would be fan-effing-tastic to be able to use Notepad as the model of what “robust” means! When was the last time you used Notepad? How long has it been around? Ever lost data or experienced downtime due to a real bug in it?

Imagine how much simpler developers’ lives would be if that were the case…! Hell, at least I know that I would sleep better at night!

CString misuse #2

Here is another one:

/////////////////////
LPCTSTR cpString = _T( "This was some string message..." );
//
// Later On In Code...
//
CString str( cpString );
char cChar = str.GetAt( 0 );
/////////////////////

If you write code like this, stop now and back slowly away from the keyboard – You’re Doing It Wrong!

The developer here is using a string object for a very simple operation. This is the kind of thing people talk about when they say something like “using a shotgun to kill a fly”.

Extracting characters from a string (an array!) is a very basic operation – it is something we learn in our first C/C++ class or read about in our first C/C++ book. This is not something that you need a heavyweight class to help you out with.

Extracting the first character from cpString is as easy as doing one of the following:

/////////////////////
char cChar = *cpString;
//
// -Or-
//
char cChar = cpString[ 0 ];
/////////////////////

Remember – constructing and initializing an object always takes longer (i.e. has more overhead) than not constructing and initializing one. Think about whether or not you really need an object before you create one. If you can get along without it, see if doing so improves things.

For reasons mentioned in a previous post, in this case, the code is better without the CString.

CString misuse #1

This is the first of many examples of ways to misuse and/or abuse MFC’s CString class. While this example (and the following ones) is specific to MFC, the lessons likely apply to all string classes (mutable or not). Here is the offending code:

/////////////////////
CString str( "First Part Of Message\n" );
str = str + "Second Part Of Message\n";
str = str + "Third Part Of Message";

MessageBox( str );
/////////////////////

If you write code like this, stop now and back slowly away from the keyboard – You’re Doing It Wrong!

First, the developer is adding (concatenating) strings together, but these are static/constant strings! They always add up to the same string, and as such can be made into a single constant string:

/////////////////////
"First Part Of Message\nSecond Part Of Message\nThird Part Of Message"
/////////////////////

So at a minimum, the start of the code should read:

/////////////////////
CString str( "First Part Of Message\nSecond Part Of Message\nThird Part Of Message" );
/////////////////////

Why not add up the strings separately like the original code did? Two reasons – overhead and exception opportunity. Each use of CString::operator+(…) can result in dynamic memory operations (allocation and deallocation). So you are looking at up to six heap operations – three allocations and three deallocations, counting destruction (release builds of CString may perform fewer). Each operation has the potential to raise an exception, and in the absence of per-thread heaps, can effectively bottleneck a multi-threaded application down to the performance of a single-threaded one, because the heap operations have to be serialized.

So by manually putting the strings together we have reduced heap operations from 6 to 2 – one allocation and one deallocation. That is a pretty good improvement, but we can do better!

The MessageBox(…) function does not take CStrings; it takes pointers to constant strings (LPCTSTR). So why is a CString needed here at all?

/////////////////////
MessageBox( "First Part Of Message\nSecond Part Of Message\nThird Part Of Message" );
/////////////////////

This final version of the code is simpler, will execute faster, and is more robust. Sounds like a winner to me!

Note: Some of you may be thinking of the compiler’s ability to automatically concatenate adjacent string literals (it is the compiler, during translation, that does this – not the preprocessor). Yes, it can, but it cannot coalesce the strings above because they are separate expressions – they are being passed (separately) as parameters to a function. If the + operators were not present between the parts of the string, the literals would be coalesced into a single string, but the unnecessary CString would still be there.
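In fact, adjacent-literal concatenation lets you keep the readable one-part-per-line layout while still getting the single-literal version – the compiler glues the pieces together at build time:

/////////////////////
MessageBox( "First Part Of Message\n"
            "Second Part Of Message\n"
            "Third Part Of Message" );
/////////////////////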

You’re Doing it Wrong!

Having been inspired by the number of photos on the internet showing various forms of spectacular failures (http://www.doingitwrong.com), ranging from failed bunny-hops to the most graceful faceplants, I thought that a coding equivalent might be worth trying out.

To that end, this section will cover little snippets of “wrong” code found in the wild. Unlike the photos, where the failure is usually fairly obvious, the failures present in the code snippets are not always as obvious, so a small discussion explaining the failure will always be present.

OK – enough of the BS…! Let’s get started!

Amazing thing with Test Driven Design (TDD)

At my current place of employment, we had someone come in to talk about some Agile development processes.  One of them was Test Driven Development.  As an example, the presenter explained how scoring works in bowling and then asked the audience to create the code used to implement the scoring.

To make a long story short, both he and the audience started with a brief design phase(!) for something as simple as keeping score.  I thought this was very interesting, especially when hearing some of the other developers start sketching a multi-level hierarchical design for KEEPING SCORE!
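For perspective, here is roughly all the code that complete ten-pin scoring needs.  This is a hypothetical sketch (it assumes a finished, valid game handed in as a flat list of rolls, bonus rolls included) – not the presenter’s solution – but note the complete absence of a class hierarchy:

#include <vector>

// Score a complete ten-frame game of bowling.
int ScoreGame( const std::vector<int>& rolls )
{
    int score = 0;
    size_t i = 0;

    for ( int frame = 0; frame < 10; ++frame )
    {
        if ( rolls[ i ] == 10 )                        // strike
        {
            score += 10 + rolls[ i + 1 ] + rolls[ i + 2 ];
            i += 1;
        }
        else if ( rolls[ i ] + rolls[ i + 1 ] == 10 )  // spare
        {
            score += 10 + rolls[ i + 2 ];
            i += 2;
        }
        else                                           // open frame
        {
            score += rolls[ i ] + rolls[ i + 1 ];
            i += 2;
        }
    }

    return score;
}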

I was reminded of an older idea/post that I had regarding the problems when you combine too much formal education with too little practical experience and throw the resulting person into a production-level software project.  Just like when you give a small child a hammer, everything looks like a nail.  When you give a lesser-experienced developer the task of designing something, you are more likely to get an over-design of something that does not reflect reality and tries to attain perfection instead of practicality.

(That post can be found here:
http://www.jrtwine.com/blog/?m=200407.)

What scares me is that some of the developers that were helping along this heavyweight, over-engineered design may now be responsible for new development efforts. I have always thought that you need to be a coder before you are a developer.  That you need to understand how and why things work in order to make better use of them.  As such, I shudder at the thought of having a group of lesser-experienced developers hitting everything they see with the same design hammer.  Especially having seen first hand what they are capable of with something as simple as a scoring system!

And people wonder why I believe that managed environments contribute to the dumbing-down of the modern software developer…

What is “good software”?

As I am reading a thread on WorseThanFailure (Our Dirty Little Secret), I see a post by “VGR” that states that companies do not recognize “good software”, but rather “finished” or “not finished.”  This is an interesting point.

But a question I would like answered is this – what exactly is good software?  How does one decide that this bit of software is good and that one is bad?  More to the point, since software starts with source code, how do you decide that this code is good and that code is bad?

I have grappled with this question myself, as I am sure others have as well.  I believe I know what good code is.  My education, experience and wisdom are my guides.  But what I believe to be good code is different from what another developer believes to be good code.  Another less seasoned developer may think something else, just as a more seasoned developer might.  So who is correct?

I believe that part of the problem with software today is that there are no common (or otherwise shared) standards for what constitutes good software (or good code which is where good software starts), excluding obvious things like “does not crash” or “does not corrupt data.”

So what exactly makes good code?  Is it…

  • Code that just works, or code that works well?  And what is the difference, if any?
  • Code that is Declarative, Imperative/Procedural, or just well commented?  Is it a combination of two, or all three?  And to that point, what exactly does “well commented” mean, anyway?
  • Code that uses encapsulation as much as possible, because (of course) encapsulation is “a good thing”, or is it code that selectively decides when it is advantageous to do so?
  • ??? What else?  I am sure that many other developers have other criteria…

For each of the items above, you can find developers that will argue for one thing or the other.  Worse, you can also find academics with relatively little to no real-world development experience doing the same, cultivating other future developers with the same thoughts!  This is good or bad depending on your point of view.

So how do we solve the problem?  I am not sure that it can be solved.  Software development is both art and science, and that art part is the killer.  Art is very subjective, and one person’s Picasso is another person’s misaligned jigsaw puzzle.  We may have to learn to all just get along here.

Why Performance is Important

When discussing topics like optimization and performance, there are far too many developers that either believe that performance is not important(!) or that the steps taken to optimize something somehow magically result in making that system less robust.

For the first point, I cannot imagine any developer that has ever uttered the words “Damn, this thing is slow” regarding their computer, or a particular software application running on it, ever thinking that performance is not important. The very fact that you are complaining about something’s performance means that performance is important. Or at least, important enough to complain about.

For the second, there are lots of ways to optimize something, and none of them have to result in reduced robustness. One of my favorite examples – preferring stack memory over heap memory – can actually improve the robustness of software: it reduces the number of places where exceptions can be raised, and thus lessens the chance for exception mis-management to cause problems.
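A trivial illustration (hypothetical buffer use):

void HeapVersion()
{
    char* pBuffer = new char[ 256 ];   // can throw std::bad_alloc, and
                                       // serializes on the shared heap lock
    // ... use pBuffer ...
    delete [] pBuffer;                 // and a second heap operation here
}

void StackVersion()
{
    char szBuffer[ 256 ];              // no failure path to mis-handle,
                                       // no lock, released automatically
    // ... use szBuffer ...
}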

One of the things to remember before opening your mouth to say that performance is not important is that your compiler still optimizes things to the best of its ability. Newer generations of compilers often offer more and better optimization capabilities as well. Why is this? Is it because performance is NOT important and the compiler writers just wanted to waste time?

When a new processor architecture is made available, manuals are produced that detail the best ways to utilize that new architecture. Cache utilization, multiple execution engines, out-of-order execution, register allocation, store/load stall scenarios, etc. are usually covered in great detail so that all of the capability of that new architecture can be used to its fullest potential.

Again, was that material written just to waste time, or does someone out there know something that you do not – that, again, performance is important?

One of the things that today’s developers often forget is that while their software is running on better hardware, it is also running alongside other software. For you Windows users: have a quick look at your Task Bar and its Status Notification Area (often mistakenly called the “tray”). How many applications are you running? Have a look at the process list in Task Manager and see how many processes are really running.

Now compare that value to how many applications you were running simultaneously on previous versions of Windows – 2000, NT, 9x, or even Windows 3.1. As our hardware gets better, we expect to be able to do more with it. But when that many applications are competing for shared resources (CPU, memory, etc.), the specter of performance once again rears its ugly head.

Just like writing device drivers takes a different discipline than writing desktop applications, writing software that has to execute in a shared environment is different from writing software that runs in a dedicated environment. The average desktop developer must not forget that their software will not be running in an ideal environment, and that just because it works great on the clean demo system, or on the developer’s multi-CPU box with 4GB of memory, does not mean that its performance is good enough when it hits the target user’s system.

Premature Optimization may not be premature…

There is an interesting thing I am noticing with younger developers – anytime someone mentions optimization, the first words out of their mouths are something about how the optimization is premature, and how it is only going to cause more harm than good.

These developers lack a certain wisdom that comes with years of varied experience – once you have experienced that kind of inefficiency, you know how to spot it in the future.

Optimization is about simplicity. Think about it – whenever something is considered optimal, it has usually been simplified somewhat from its original incarnation. An optimized interface is usually a simplified one. Optimized code usually takes fewer steps to do something, and is thus usually less complex; hence – simplified.

From the first Computer Science (or programming) class, the KISS philosophy is hammered in. Keep It Simple, Stupid. The art of optimization is the ultimate application of the KISS philosophy.

Never underestimate or disregard the benefit of simplification, which is just another word for optimization. Simple is easier to use, easier to understand, and easier to modify in the future. What could possibly be wrong with that?

Best WTF Moments – Correcting the Test’s Answers

In talking with a friend, I was reminded of one of my favorite WTF moments – correcting the answers on an interviewer’s tech questions. Once, while interviewing for a position, the interviewer was going over my answers to the tech questions and happened to mention that one (or two?) of them were incorrect.

When I asked about them, it turned out that they were questions regarding the size of C++ objects that have no data members. For example:

class CEmpty { };
class CEmpty2 { public:  void MyFunc( void ) { return; } }; 
class CEmpty3 { public: virtual void MyFunc( void ) { return; } };

So what is the size of CEmpty, CEmpty2 and CEmpty3? My answer was that it was basically implementation defined. The interviewer’s answers said that CEmpty and CEmpty2 had a size of zero, and that CEmpty3 had a non-zero size.

I had answered that CEmpty and CEmpty2 will have an implementation-specific/defined size (IME, a size of 1) and that CEmpty3 will have a size due to the vtbl pointer that will be added to the class, adding that the vtbl pointer’s size is not added on top of the placeholder size given to otherwise empty objects. (In other words, if the size of the vtbl pointer is 4 bytes, the object size will be 4, not 5.)
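Something like the following settles it quickly. (A sketch using the class definitions above; the exact sizes are implementation-defined, but 1, 1 and 4 are typical for 32-bit MSVC builds – 1, 1 and 8 on 64-bit.)

#include <cstdio>

int main()
{
    // Empty classes get a nonzero size so that distinct objects have
    // distinct addresses; the vtbl pointer replaces that placeholder
    // byte rather than being added on top of it.
    printf( "%u %u %u\n",
            (unsigned)sizeof( CEmpty ),
            (unsigned)sizeof( CEmpty2 ),
            (unsigned)sizeof( CEmpty3 ) );
    return 0;
}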

The interviewer, being a/the senior developer that I would have been working with or for, did not agree and we ended up with some code snippets being compiled and executed in the VC++ 6.0 IDE. Wanna take a guess who was correct?

It turns out that the company’s CTO decided not to accept me for the position. I was never told why (I otherwise aced the interview, of course), but I was told that the CTO created the interview questions (and answers). Go figure…!

More Stupid Code…

Here is another example of code that demonstrates a complete misunderstanding of how things work, or at least of MFC and/or the RTL…

char value[256];
::GetPrivateProfileString("section","ValueName", "OFF", value, 256, INI_PATH);
CString temp(value);
temp.MakeUpper();
if (temp != "OFF")
{ ... }

Now, I can understand the need to do a case-insensitive compare of an INI file value.  But we have functions designed to do exactly that!  Never heard of stricmp(…) and its variants?  OK – even if you do not know the available RTL functions and all you know is CString, have you never heard of CString::CompareNoCase(…)?
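For example, here is the same check with no CString and no heap allocation at all (a sketch using the RTL’s _stricmp; the TCHAR-aware variants work the same way):

char value[256];
::GetPrivateProfileString("section", "ValueName", "OFF", value, sizeof(value), INI_PATH);

// One RTL call does the case-insensitive compare - nothing to allocate,
// nothing to throw:
if (_stricmp(value, "OFF") != 0)
{ ... }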

Code like this just demonstrates ignorance, plain and simple.  Oh, and how goes that exception handling for situations where CString fails to allocate memory?  Oh, yeah…  THERE IS NONE!

Yet another real-world example of useless allocation.

NULL != NUL

I continue to find it rather amusing that even (so-called) experienced developers will use fundamentally different concepts interchangeably, even after doing this for so long.

For example, I have seen documentation by developers that mention nul values in a database table, or worse yet, NULL-terminated strings, and NUL pointers.

Now, some that simply miss the point will be saying something like: “In C++, NULL is zero, and the NUL ASCII code has a value of zero, so they are the same thing!”

Wrong.
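A quick two-line illustration of the distinction:

const char* pData = NULL;   // NULL: a null POINTER constant - it points at nothing
char szText[] = "abc";      // szText[3] is NUL ('\0') - a CHARACTER whose value is zero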
Continue reading NULL != NUL

C-Hash, C-Pound, C-WHAT?

A recent discussion raised the point that calling C# “See Sharp” is technically incorrect, because the symbol used is not the musical sharp symbol but the octothorpe, also known as Pound, Hash, Splat, Number Sign, etc.

An example of the difference is shown here (provided your browser and current font set support it).  The character above the 3 key on most US keyboards is this: # – but the musical sharp symbol is this: ♯.  See the difference?  Simply put, one has angled vertical lines (“pound”), while the other has angled horizontal lines (“sharp”).

Continue reading C-Hash, C-Pound, C-WHAT?

Educated Opinion vs. Knee-Jerk Reaction

Ever make a comment regarding the performance of applications written for the .NET platform, or about Java’s performance and/or, God forbid, the current system requirements of most large-scale J2EE implementations, only to have some dumbass jump on you about how your comment is just the typical knee-jerk, “C/C++ elitist” reaction to the word “Java” (or “C#”, or whatever), never stopping to think that you might actually have an informed opinion about it based on actual experience?

And funny how they never consider their reaction to be a knee-jerk reaction to your comment?

Dumbasses…

$0.02 to buy a clue: Remember “HotSpot”? Do you think it was created because Java was already fast enough, or because there was an actual performance problem it was supposed to address? Think about it…

To Design or Not To Design?

One of the things I have noticed is that the ideals of design are often misused or misapplied.  In my experience, this happens most often with developers that have more academic experience than real-world experience.

Nothing is worse than a recent CS graduate with a master’s degree and less than 2 years of real-world experience creating a 23-page detailed design document for something as simple as a word-counting application.

Schools may teach software design, but they may fail to teach when it is best to forgo the up-front design and start coding, letting the design evolve naturally from the code, and not the other way around.

When designing first, inexperienced developers are more likely to over-engineer a solution.  They may see entire objects with rich hierarchies where a simple basic-type variable would work fine.  Only with experience does one learn to spot the situations where spending too much time on design is detrimental to a software project.

When I was younger, I learned: Think First, Code Later.  However, wisdom is what caused me to realize that thinking before coding is not necessarily the same as designing before coding.