My Old Friend fdebug

I suppose all programmers have code that they have carried with them for a long time. I have a bit of code that I have been developing for decades, from nearly the first programs I wrote.

I don’t remember where I got the idea to make this kind of library, but I remember when things started to look like the current version. I ordered a program called OpenDoors from a programmer named Brian Pirie in Canada. I hope that’s his name…this was a long time ago. In short, the purpose of OpenDoors was to abstract the modem/BBS interface so you could write software to work with old-school bulletin boards. Definitely pre-everyone-has-it Internet.

Anyway, to work with OpenDoors I couldn’t really have much debugging output on screen, since I had the screen filled up with all of my awesome ASCII art. So I took some code I had been using while learning Pascal and C that wrote printf-formatted text to files for debugging, and adapted it to OpenDoors. Now I had code that would write print statements to a file via a macro called dpr(), and it would even transmit that over the modem! An even nicer touch was that this macro was only a print statement if I had a #define DEBUG in the file it appeared in; otherwise, it was a C-style line comment. This meant that I could completely remove the printing from the runtime version of the code by taking out #define DEBUG in whatever files I wanted.

Over the years, this facility evolved, mostly in my professional applications at my old job. I added capitalized versions that would print regardless of the surrounding #defines, versions that would print the line number, file, and time, and a version that had logging levels (plus bunches of other neat features). The log levels were nice to have too; you could take out the printing at only the cost of an if comparison against the current log level.

But all of these previous versions started as Pascal, then evolved to C, and eventually evolved to use C++.  I spent a lot of today making a Java version.  Unfortunately Java doesn’t really have conditional compiling as far as I can tell, but I want to find a way to easily take the printing out (I have some ideas).  I always thought of that as the nicest feature of the fdebug library.  For Java I think I will have to be satisfied with log levels.
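One of the ideas I have is the standard Java substitute for conditional compilation: a static final boolean flag. Because static final primitives are compile-time constants, javac drops any code guarded by if (DEBUG) when DEBUG is false, which is about as close as Java gets to #define DEBUG, and log levels cover the rest. Here is a minimal sketch of what I mean (the names are hypothetical, not the actual Logbook code):

```java
public class Logbook {
    // Compile-time constant: with DEBUG = false, javac omits any code
    // guarded by if (Logbook.DEBUG) -- Java's stand-in for #define DEBUG
    public static final boolean DEBUG = true;

    // Log levels, smaller = more important
    public static final int ERROR = 0, WARN = 1, INFO = 2, TRACE = 3;

    private static int logLevel = INFO;

    public static void setLevel(int level) { logLevel = level; }

    // dpr()-style print: filtered messages cost only this one int comparison.
    // Returns whether the message was actually printed (handy for testing).
    public static boolean dpr(int level, String format, Object... args) {
        if (level > logLevel) {
            return false;
        }
        System.out.printf(format + "%n", args);
        return true;
    }

    public static void main(String[] args) {
        setLevel(WARN);
        if (DEBUG) dpr(ERROR, "disk on fire: %d degrees", 451); // printed
        if (DEBUG) dpr(TRACE, "entering main");                 // filtered out
    }
}
```

It isn’t as clean as ripping the code out with the preprocessor, but the guarded calls really do disappear from the bytecode when the flag is false.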

I went ahead and named the project (and its package) Logbook, but I didn’t really want to. I always wanted to keep the library’s name fdebug (that’s “file debugger”, btw), but for various reasons we changed it to logbook later. What a boring name…no abbreviations or hidden meanings or anything.

Java has a lot of conventions about capitalization/camel case, and I thought Fdebug looked silly.  So Logbook it is. Oh, fdebug, we hardly knew ye.

I should mention that I basically recreated Logbook from scratch in Java in about 2 hours. I was a little ashamed, or scared maybe? Eclipse can write a lot of code for you, and it works a lot better than the last time I did anything significant with it. It is really awesome to have the compiler warn you that you forgot to handle a particular exception, but to make the Twinkie completely fried in luscious batter, the editor will let you surround the offending area with a try/catch in one action. That’s neat, and it works better than something like Visual Assist X. That’s a great program, but C++ keeps it from doing the awesome things Eclipse can do when you are using it for Java.

Completely off topic:  I have never loaded up a game that is so playable, and yet so rife with bugs as Dead Island. Generally when a game has this many problems there are intrinsic flaws in the design that make it all around a stinky affair. But Dead Island is fun, and it is double, triple, or quadruple fun in multiplayer. If you can get the program working…and then if it is working, there are lots of little weirdnesses and problems that can be frustrating.  I am glad people have kept with it, because the game is a really cool thing and the more people keep playing it the more fixes the developers will upload. I just hope that the rage out there keeps simmering until they can fix the game. But there are too many good things coming out in the next few months for me to keep my hope alive.

Whitespace, Shmitespace

I finally have my Sirius radio installed! I already have an XM radio installed here, and it is aimed perfectly; I get tons of satellite signal. However, aiming the Sirius was a little problematic. The instructions say the satellite is over the northern Midwest, so if you aim at North Dakota from wherever you are, your antenna has plenty of aspect to the satellite transmitter. For whatever reason that method is not working. I wonder if things work a bit differently since Sirius and XM merged? I would think that I would know about any new satellites, but I can’t explain why my current aiming gets any signal at all. My guess is that the houses around me are causing problems, but the XM gets a good view. Radio engineering is complicated.

Most of today was spent learning how Java handles XML.  It’s like every other XML parser out there (it is basically just Xerces, I think), and at this point it is old hat.  But I still need to write my little playground tests to see just how things work. Inevitably, there is a wrinkle!

For some reason the Java (I am using Java SE 1.7) XML Text class does not correctly report when a node is only whitespace. I don’t understand the purists who insist that all of the tabs, newlines, returns, and spaces should be preserved in every case. They are pretty much useless if you are just trying to grab data. I understand needing them if you want to replicate a document exactly, but every parser should be able to discard those nodes during parsing, which would reduce the amount of checking for these useless pieces when you are actually doing something with the XML.

I may be doing something wrong, but from looking at the docs I think everything is correct. In fact, for the DocumentBuilderFactory I set setIgnoringElementContentWhitespace to true and everything still had the various whitespace thingies. (From a closer read of the Javadoc, that setting relies on the content model, so it only does anything when the parser is validating against a DTD or schema, which mine isn’t.) From looking around the web it seems that this is a regression in Java SE 1.6, but I figured it would be fixed in 1.7? Guess not.

So I added a method to my parser class (as many people said had to be done):

/**
 * Return whether a text node is whitespace only or not
 * @param tn The text node to check
 * @return true if the node is whitespace only, false if it has content other than whitespace
 */
public boolean IsNodeWhitespaceOnly(Text tn)
{
    // when isElementContentWhitespace starts working again, just return that;
    // for now, iterate through the content by hand

    String value = tn.getTextContent();

    for (int i = 0; i < value.length(); ++i)
    {
        switch (value.charAt(i))
        {
        case '\n':
        case '\r':
        case '\t':
        case ' ':
            break;

        default:
            // non-whitespace found
            return false;
        }
    }

    // only whitespace found
    return true;
}

This works fine for testing Text nodes, but it is a bummer to incur this overhead after the fact when it could easily have been handled during the initial parse. I hate to stay focused on this much longer, but I have to rule out my own Java ignorance. I need to make myself believe that Oracle was lazy and didn’t fix the bug in 1.7. Maybe I can make a tape repeating that while I sleep.
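In the meantime, skipping the junk at iteration time looks something like this playground test (a standalone sketch; the class name and the trim()-based shortcut are mine, nothing official):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.w3c.dom.Text;

public class WhitespaceDemo {
    // Same idea as IsNodeWhitespaceOnly above, using trim() as a shortcut
    static boolean isWhitespaceOnly(Text tn) {
        return tn.getTextContent().trim().isEmpty();
    }

    // Count the whitespace-only text children directly under the root element
    static int countWhitespaceOnlyChildren(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList kids = doc.getDocumentElement().getChildNodes();
        int count = 0;
        for (int i = 0; i < kids.getLength(); ++i) {
            Node n = kids.item(i);
            if (n instanceof Text && isWhitespaceOnly((Text) n)) {
                ++count;  // one of the useless formatting nodes
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        // Pretty-printed XML: the root picks up two whitespace text children
        System.out.println(countWhitespaceOnlyChildren(
                "<maze>\n    <floor/>\n</maze>"));  // prints 2
    }
}
```

Two useless nodes for one element, just from pretty-printing; that’s the overhead I’d rather the parser ate for me.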

Anyway, with the basics of XML figured out, next is writing the parser that stores the maze XML data in Maze class data structures.  Things are heating up!

Hierarchy of Dunces

XML is a powerful tool.  Having the ability to store data in a text file using a method that resembles the way you would do the same thing with data structures and containers in code is a big deal.

Or, at least I see it that way.  Before I heard of XML, every implementation of data storage or retrieval involved me writing up some code (hopefully cutting and pasting from some other similar program) and writing binary to the data files.  This inevitably includes writing some sort of viewer program, which takes even more time.  I am not one of those people who can look at a hex dump and see the blondes in the Matrix.

With XML technology, your text editor is your viewer.  And it’s quite simple to think in terms of XML.  I can visualize the database I need and the XML layout and support code is nearly already written in my mind, since the technique lends itself to hierarchies of data. Hierarchical databases are nearly all I use anymore, and that makes me wonder.

I tend to fall in that group of people who finds a new toy and proceeds to try to solve every problem with it.  Just like the old Maslow’s Hammer: “if all you have is a hammer, everything looks like a nail.”  I usually have lots of tools in addition to that hammer, but I use the hammer anyway.  Because it’s new, and it rules.

I think I am falling into that trap with XML.  I can’t really remember what I did for database design before I got the hang of XML.  Now everything I come up with is a big top node that contains a series of nodes that themselves have a bunch of nodes, ad nauseam. So the software that parses that collection of nodes looks just like the XML:  a map of the first tier, and then that tier is whatever container fits it best until I get to the bottom of the DOM tree and find that nugget of data I am looking for.

Are relational databases so far behind me? I think I need to step back sometime and consider whether XML is a good solution for everything (or perhaps anything). I am probably permanently infected; as soon as I typed relational database I tried to think how I would write XML that could describe one. I know a relational database can be described in XML, but relational data is flat and tends to be better stored in flat files. Describing a relational database with XML is like putting tighty whities on a horse. Horses don’t need underwear, don’t like underwear, and they really stretch out the leg holes. How about a nice saddle?

The problem I am tackling is like this:

A Maze consists of Floors that each contain one or more Pieces, each of which consists of one or more Chunks, where a Chunk has various numbers of Rooms, Corridors, and Vaults.  So I lay this out mentally in XML like:

<maze>
    <floor>
        <piece>
            <chunk>
                <corridor/>
                <room/>
                <vault/>
            </chunk>
        </piece>
    </floor>
</maze>

It seems logical, and I know what attributes I need for each type of element, what data structures I would parse those XML elements and attributes into, and what containers would give me the fastest search times for each element type given how I will be using the data.
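To make sure the mental picture holds up, here is a quick sketch of that parse (the class and field names are placeholders for whatever the real Maze structures end up being, and I’ve used plain lists instead of the tuned containers):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class MazeParser {
    // Containers mirror the XML hierarchy, tier by tier
    static class Chunk { List<String> features = new ArrayList<>(); }  // corridor/room/vault
    static class Piece { List<Chunk> chunks = new ArrayList<>(); }
    static class Floor { List<Piece> pieces = new ArrayList<>(); }
    static class Maze  { List<Floor> floors = new ArrayList<>(); }

    static Maze parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        Maze maze = new Maze();
        for (Element f : children(doc.getDocumentElement(), "floor")) {
            Floor floor = new Floor();
            maze.floors.add(floor);
            for (Element p : children(f, "piece")) {
                Piece piece = new Piece();
                floor.pieces.add(piece);
                for (Element c : children(p, "chunk")) {
                    Chunk chunk = new Chunk();
                    piece.chunks.add(chunk);
                    for (Element feat : children(c, null))
                        chunk.features.add(feat.getTagName());
                }
            }
        }
        return maze;
    }

    // All child elements, optionally filtered by tag name
    // (conveniently skips the whitespace text nodes too)
    static List<Element> children(Element parent, String tag) {
        List<Element> out = new ArrayList<>();
        NodeList kids = parent.getChildNodes();
        for (int i = 0; i < kids.getLength(); ++i) {
            Node n = kids.item(i);
            if (n instanceof Element && (tag == null || tag.equals(n.getNodeName())))
                out.add((Element) n);
        }
        return out;
    }
}
```

The parser really does end up shaped exactly like the document: one loop per tier.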

But I worry when I see a hierarchy in my mind these days.  Am I just whacking the problem with my hammer?  Maybe there is a crowbar around here somewhere…

Global Ignore All The Files!

One thing about source control is that you can get a lot of cruft in your repository (sounds naughty).  Since I am (at least as far as using it for something important in a professional capacity) new to Subversion, I had to deal with that problem today.

A couple of deleted repositories later and I think I have it all tuned.  I found a nice page on StackOverflow that had a good list of file globs to add to Subversion’s global-ignore configuration item.  I had to add a couple of extensions that I have been using for years in my various endeavors (*.out as an example) and now I think my configuration covers C, C++, C# and Java’s worthless effluvia.
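For reference, the relevant bit of ~/.subversion/config ends up looking something like this (the exact globs are my own mix, not the canonical StackOverflow list):

```ini
[miscellany]
### Build junk from C/C++/C#/Java, plus my own *.out habit
global-ignores = *.o *.lo *.la *.obj *.exe *.pdb *.ilk
                 *.class *.jar bin obj Debug Release
                 *.out *~ Thumbs.db
```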

Well, the Java case is a bit sketchy. It seems that ignoring the bin directory pretty much leaves Subversion to pick up only meaningful Java stuff during an import or a commit. Subversion still indulged in some weirdness though (probably because I did something weird). I will have to keep an eye out for junk files.

I have experience with Visual SourceSafe and ClearCase in a work environment, and luckily I didn’t have to do much with their actual administration parts.  After a while you kind of get how versioning systems are supposed to work, and I think that starting with something as dead simple as SourceSafe and then moving to ClearCase was a good thing.  ClearCase is high octane and would have been too much for us as a team to deal with on day one in the blessed land of source control.  But having been in the ring with ClearCase (and lost many metaphorical battles) going to Subversion now has been relatively painless.

My favorite thing about Subversion is that it has encouraged me to use branches as they are intended, something that was a little obscured in ClearCase (at least the way we used it). I hope I have the comparisons correct: a VOB is like a repository, and a view is like a working copy. So working in a branch in Subversion is like having a view that you will eventually discard or merge into the main path. We actively did not keep a lot of views around because of server storage space and the pain it took to make them (I pretty much used snapshot views myself rather than dynamic views; a snapshot view is nearly the same as a Subversion working copy, except for the annoying read-only files). With Subversion all of this is extremely streamlined and seemingly more integrated into the design. Getting a branch is a checkout away.

I guess ClearCase is meant for different (perhaps insane) things developers attempt, which is why dynamic views are the real shiny nubbins for that product.

So even though it is just me developing things for now, I am going to get into the habit of working in branches and merging to trunk like a good boy.  I always just worked in trunk during previous forays into Subversion, but now it is best practices, best practices, definitely best practices.  A little luck and I may need those practices.

I am rustier in Java than I thought, so I spent a little time going over some of the weirder parts of the language.  One of my favorite occasions this year was a lazy Saturday and Sunday playing Crawl with a video stream of Notch writing his Ludum Dare entry Prelude of the Chambered on the other monitor.  Yes, that is what passes for fun for programmers.

It was (nerdily) cool watching an expert rush through the development of a pretty impressive game for the 48 hours (or so) it took to write.  The most interesting thing was some of the weird stuff I saw him do, like this little bit of magic:

import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class Sound {
    private Clip clip;

    public static Sound loadSound(String fileName) {
        Sound sound = new Sound();
        try {
            AudioInputStream ais = AudioSystem.getAudioInputStream(Sound.class.getResource(fileName));
            Clip clip = AudioSystem.getClip();
            clip.open(ais);
            sound.clip = clip;
        } catch (Exception e) {
            System.out.println(e);
        }
        return sound;
    }

    public void play() {
        try {
            if (clip != null) {
                new Thread() {
                    public void run() {
                        synchronized (clip) {
                            clip.stop();
                            clip.setFramePosition(0);
                            clip.start();
                        }
                    }
                }.start();
            }
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

This is part of his Sound object that plays the various effects for in-game events. I hadn’t done anything with sound in Java before, and while this is simple, doing the same in C++ sure takes a lot more lines. At least in Win32; creating a bunch of events and mutexes to synchronize a thread (and writing the thread code, not to mention the actual sound library stuff) makes for a lot more code. The bit that Notch wrote here is pretty compact. I watched him write this live, in detail, and the first few tries didn’t work, so I don’t know if he had the general idea in his mind (e.g., the sound routines from Minecraft!) and hacked away at it, or if he peeked at some old code to get a good method. Is this real code from somewhere else? If this is all the code production-level Java needs for at-least-usable sound effects, I think that’s neat.

By the way, the most painful thing about watching Notch work was that there are no comments. I know he was on a nuttily tight schedule, but he went fast enough that it was a little too easy for him to leave commenting behind. I shouldn’t count this as a good example of his work, even though I can’t help it; I hope Minecraft is well commented and maintainable. I have spent the last five years beating commenting habits into myself, so it is compulsive at this point (compulsive is right…I am still terrible at it, and I make some of the most worthless comments you can imagine, but I can’t help it. It’s either this way or no way!). I wanted to at least write something about what he was doing in the above routines. Best practices, definitely best practices.

Another aside:  I am very happy with Eclipse’s current incarnations nowadays.  It’s been a while for me and Eclipse, and the maintainers have made it into an impressive development environment.  As is obvious, it really shines when writing Java.  Most of my previous Java work has been a DevStudio→javac→java-type workflow, so it was clunky with no real debugging (just messages to the console, really).  Now that I am taking on a much larger scale Java project, I am really grateful that something like Eclipse exists.  But I am sure I will find more to complain about with Eclipse in the coming months (I sure miss virtual space!).

Shelves ain’t Programming

I spent half of today finishing the prep for my development machine, which was mostly downloading and installing Eclipse and the Android SDK.  My previously-owned MacBook will follow later, since I am going to develop everything in straight Java first and then port the Java basecode to Android and iOS.

I considered going with one of the cross-platform toolkits, but most of them cost money, and it seems they don’t deliver on all of their promises. I have also looked at some of the bytecode cross-compilers like XMLVM, and it just feels like too much trouble. Add in that I believe I will learn more doing the ports myself, and I gave up on the whole single-code-base thing. That has certainly been my experience trying to develop cross-platform; you end up with multiple teams tweaking the code to work on the different configurations. Well, that was with C++…Java can be a different story.

The use of Java for my development projects is a major driver for me too.  It really is the closest thing to cross platform, at least from a browser standpoint (the aforementioned different story).  But I don’t want people bringing up a browser on iPhone to use my software.  I am holding out hope that iOS will support some form of Java directly instead of having to go with native development.  There are rumors…

The other half of the day was spent getting my Ubuntu server and my Shoutcast server out of my office.  The Shoutcast server is an old friend running Windows XP in a Shuttle breadbox form factor (much love).  I use it to distribute tunes around the house (mostly streaming with my phone using XiiaLive, an awesome Shoutcast app on Android).

Having both of these boxes in my office added to my normal development machine makes for a lot of heat.  So moving things to the IT room (well, it’s our junk room!) has made for a big temperature difference.  Although I may have to move them back if it gets too cold this winter!

The biggest part of the effort was putting together and installing the Ikea shelf and mounting the new 16-port TrendNet switch that is the new backbone of my network (nearly all Gigabit now, except I can’t bear to replace my trusty router). The shelf ended up going pretty fast…well as fast as an OCD engineer can go. The end result uses three supports and is super-stable, so I am quite happy with how it went.  It is pretty much perfect for putting the server PCs on.

The switch is a slightly different story.  I bought it assuming it had some sort of way to wall mount it, but this wasn’t the case.  Apparently this version is meant to be rack mounted, and so it had the little 1-inch square of screw holes for attaching a 1U bracket.  Well, this doesn’t work when you need to mount to an exposed stud.  I thought about buying something like a rackmount kit made by TrendNet, but I didn’t really want to wait.  And $15 (AntOnline isn’t Prime!) is a little more than I want to spend for a dumb bracket.

So I went out to Home Depot and got four cheap angle brackets.  I have tons of screws left over from PC builds, and I found a set of four that fit the little 1U holes.  A little bit of creative procedure (level, mark, drill, screw) and I got a pretty good result.  The switch is stuck to the wall like a barnacle and it looks alright.  But it’s in an unfinished room…the rough look is trendy for a junk space!

It ain't pretty, but it is born-again hard and ready to push bits.

I think that’s the last of the infrastructure stuff, except for getting my Sirius radio installed (I need that coax adapter!).  Next step is writing the tool to make the maze layouts for Project Alpha. Real code ho!

Ubuntu, Why So Angry?

I assume that Linux in general is an obtuse, ornery, and cruel operating system.  I have been coddled by sweet, sweet GUIs in my old age, and wrestling with a command-line based OS is apparently almost beyond me nowadays.

And then there was icing on my Linux difficulties:  my main problem was hardware.  I read somewhere that when making a network-attached storage box using FreeNAS, it was best to put the OS on a compact flash so that you get the full usage of the hard drives.  I eventually discarded the FreeNAS idea because it is ironically kind of closed; you can’t really do anything except use it for a NAS, and I need a Subversion and Wiki server more than the storage.  Add in that FreeNAS is based on FreeBSD and I am IT-impaired at this point, and FreeNAS became a quite unattractive choice.

So I got the bright idea to make my company server based on Ubuntu server, but for some reason I was stuck on the compact flash boot drive idea.  That was not a smart choice.

Ubuntu does something weird with a small boot drive (mine was 4GB, although it is crazy to me to think of 4GB as small). I installed OpenSSH, LAMP, Print Server, and Samba from the Ubuntu server CD and had relatively few problems. Well, except that I kept reinstalling after breaking things beyond my meager Linux-fu to fix. I saw just about every weird thing you can imagine, from a bash script complaining about an unexpected left parenthesis to corrupted files to my Ethernet port dying for no apparent reason. People complain about Windows; how about “Segmentation fault” as the error whenever you ifup eth0?

The really crippling problem, though, was that the CF drive kept filling up. I thought all of the partitions Ubuntu created by default were large enough, but eventually an apt-get would fail for lack of free space. apt-get clean appeared to free some space, but not enough. I think I was correctly moving MySQL to another drive, and I know the Subversion repository was in the right place; besides, I hadn’t committed any data to either of those yet. Filling up 1GB is easy for an operating system, and you would think Ubuntu’s installer would warn you that installing anything more than the system defaults is not a good idea with a 4GB main drive.

Another pain is that I had to install with the compact flash as the only drive connected. Otherwise the CF came up as sdc, but the Ubuntu installer would write grub to sda. Well, it hurt my OCD nodes to have booting happen on a drive that does not hold the operating system. Silly, but my machines do what I say, not the other way around!

Oh, and I basically have to have OpenSSH. The first time Ubuntu server decides to put the monitor to sleep when the operating system is idle, the monitor won’t wake up forevermore. I am using the onboard graphics of the motherboard (an MSI 880GMA-35), and from the cryptic googles I googled, it is something with the hardware. But remote access through PuTTY is fine by me anyway. This server will head to my IT closet now that it appears I don’t need to perform more surgery.

I don’t really blame Ubuntu/Linux, but rather my inexperience with things at this level.  I am sure there is some startlingly insane sys admin out there who could make my (previous) setup work, but I am only insane after messing with Ubuntu.  Hm, maybe I should try again…

So after two days, a spare 500GB hard drive pressed into service, and fighting an awful head cold I have 90% of my development infrastructure ready for war.  That metaphor seems apt so far!

Rough first day

Today is the first day of my new job as owner and operator of Zairon Engineering.  Things have been rough.  Or at least not going the way I want them to.

I have been fighting a cold, and last night it decided to kick in full force.  So I have been dealing with being sick all day, in addition to all of the tasks I had to do today.  I even postponed going to the city government to get the information for business licensing, just because I didn’t feel like it.

When I woke up, my priority was to get my Sirius radio connected here at the home office. I thought I had the SMB female-to-F male adapter that I need to connect the Sirius antenna to the coaxial cable running from the exterior of the house but I only have the other kind (and three of those to boot).  So I have to track down one of those, and they are hard to find so I am stuck waiting until I get it delivered from somewhere.

After tabling that, I started back with my Ubuntu server, which I am going to be using for Subversion, wiki, and general file storage.  I needed to add a pair of 3TB disks in RAID1 for the general file storage (I have a pair of 2TB drives in RAID1 for Subversion/wiki already).  Plus, the on-board Ethernet for the motherboard doesn’t support Wake-on-LAN, so I was going to add a 1Gbit card I had at my old job to try to get that going.

I managed to zap my original Ubuntu server installation by accidentally pairing my compact flash (where Ubuntu was installed) with one of the 3TB drives and blasting the files when the RAID controller created the logical volume.  I was stupid to do it, but I have some reasons:  originally the CF was on SATA5 and my DVD drive was on SATA6.  I hooked the 3TB drives up to SATA1 and SATA2, but then Ubuntu didn’t like that.  I switched everything around, and then recreated the logical volume.  Well, the first time with the 3TB drives on SATA1 and SATA2, those drives were first; now the selections were CF and then the 3TB drives.  So I blindly chose the first two, and didn’t pay attention.  Well, now I will.

So I am reinstalling everything, including making a certificate for Apache and all that IT admin stuff.  I was hoping to get some quick prototyping on my maze engine started today, but I am guessing that I flushed the chances of that.

But I am going to be optimistic!  I am just getting all of the stupid out before things get going.