Back Up Your Stuff

I am ultra-paranoid about backups. I basically don’t trust them unless I make them myself, and I honestly don’t believe that they are really good even then. But luckily my silliness ends there; I let the computer test them for ZIP integrity and leave it at that.

But it is hard to stop the “silliness”. Those files are all that programmers have to show for a lot of hard work. I think other professions have it slightly worse, though: other creative types can see their work, but usually can’t easily make copies. (Although nowadays photographers are pretty much just like programmers as far as backups go…welcome to hell. Next up: film directors.) At least those physical products don’t disappear if a hard drive fails or some scum walks off with your laptop; I guess it depends on how portable your products end up being. Files are just magnetized sequences on a fragile platter, or dye burns in plastic, or some other delicate encoding. Pretty easy to steal and ridiculously fragile.

So, after the incident at Project Zomboid where they were ripped off by burglars, I felt terrible. This is basically one of the horror scenarios my paranoia harangues me about, and I can’t imagine how those guys felt. It must have been excruciating.

Way back when I was in charge of such stuff, I made daily backups to another machine and weekly backups to tape cartridges on a monthly rotation. Milestones and releases each got a tape cartridge sent way offsite, to a government bunker somewhere. I basically covered office, local, and regional disasters for our artifacts, because I felt this is what the company would want. It wasn’t much work for something worth a lot of money, but no one told me to do this.

Of course, I was a developer; it seems more logical to me for IT or infrastructure guys to take care of backups. In my experience so far, though, concern about backups usually only goes as far as the creators of the project. Some of it is purely an ownership question: most people don’t care how office mates take care of their stuff as long as it doesn’t impact their island of responsibility. And I also think it’s a kind of miscommunication; you warn bosses and administrators that the codebase needs that protection, but it becomes very low priority mentally because it seems incredible that someone wouldn’t take care of it. And no one really believes a disaster will happen until it does. Unless you are paranoid!

Now being on my own, I get to do all the jobs, and one thing I want to do is make sure I have a good backup system. I want to utilize a wiki and source control, and basically keep everything electronic, and to me that means:

  • Daily automated backups of the wiki and Subversion repositories on the server to a hard drive that is safely protected from fire and water damage.  The automated backup script keeps the last 14 days of backups on the drive.
  • Weekly compressed mirrors of the whole server to an alternating pair of external hard drives kept in fire- and water-proof conditions when not in use.
  • Monthly backups to a pair of DVDs of the wiki and Subversion repositories. One DVD is kept locally and the other is stored offsite in fire- and water-proof conditions.
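
The daily job in the first bullet is simple cron fodder. Here is a minimal sketch of that kind of script; the repository and wiki paths, the backup location, and the archive names are all made-up placeholders, with the real dump/tar steps left commented out since they depend on the actual server layout:

```shell
#!/bin/sh
# Daily backup sketch: dump the Subversion repository and the wiki files,
# datestamp the archives, and prune anything older than 14 days.
# All paths below are hypothetical placeholders.

BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
STAMP=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# svnadmin dump produces a portable, restorable copy of a repository.
# svnadmin dump /srv/svn/project | gzip > "$BACKUP_DIR/svn-$STAMP.dump.gz"

# The wiki is just files (plus a database dump); tar covers the files.
# tar czf "$BACKUP_DIR/wiki-$STAMP.tar.gz" /var/www/wiki

# Keep only the last 14 days' worth of archives on the drive.
find "$BACKUP_DIR" -name '*-*.gz' -mtime +14 -delete
```

Dropped into something like /etc/cron.daily, this gives the 14-day window with no manual effort.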

I could probably do weekly DVD backups instead, with two sets of four DVDs and one set sent offsite, rather than the single monthly pair, but that’s a bit far, even for me. Maybe if things really take off that will become the practice I follow.

Now to make something worth taking these kinds of precautions.

Global Ignore All The Files!

One thing about source control is that you can get a lot of cruft in your repository (sounds naughty).  Since I am (at least as far as using it for something important in a professional capacity) new to Subversion, I had to deal with that problem today.

A couple of deleted repositories later and I think I have it all tuned.  I found a nice page on Stack Overflow that had a good list of file globs to add to Subversion’s global-ignores configuration item.  I had to add a couple of extensions that I have been using for years in my various endeavors (*.out as an example), and now I think my configuration covers the worthless effluvia of C, C++, C#, and Java.
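
For reference, the setting lives in the [miscellany] section of ~/.subversion/config (or the Subversion configuration directory under Application Data on Windows). A condensed version along the lines of what I ended up with; treat this glob list as illustrative rather than exhaustive:

```
[miscellany]
global-ignores = *.o *.lo *.la *.obj *.exe *.pdb *.ilk *.suo *.ncb
                 *.class *.out *.pyc *~ Thumbs.db bin obj Debug Release
```

Note that global-ignores only affects unversioned files during import, add, and status; anything already committed stays in the repository until you svn delete it.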

Well, the Java case is a bit sketchy.  It seems that ignoring the bin directory pretty much leaves Subversion getting only meaningful Java stuff during an import or a commit.  Subversion still indulged in some weirdness, though (probably because I did something weird).  I will have to keep an eye out for junk files.

I have experience with Visual SourceSafe and ClearCase in a work environment, and luckily I didn’t have to do much with their actual administration parts.  After a while you kind of get how versioning systems are supposed to work, and I think that starting with something as dead simple as SourceSafe and then moving to ClearCase was a good thing.  ClearCase is high octane and would have been too much for us as a team to deal with on day one in the blessed land of source control.  But having been in the ring with ClearCase (and lost many metaphorical battles) going to Subversion now has been relatively painless.

My favorite thing about Subversion is that it has encouraged me to use branches as they are intended, something that is a little obscured in ClearCase (at least the way we used it). I hope I have the comparisons correct: a VOB is like a repository, and a view is like a working copy.  So working in a branch in Subversion is like having a view that you will eventually discard or merge into the main path.  We actively did not keep a lot of views around because of server storage space and the pain it took to make them (even though I pretty much used snapshots myself rather than dynamic views, which is nearly the same as a working copy with Subversion, except for the annoying read-only files).  With Subversion all of this is extremely streamlined and seemingly more integrated into the tool’s design.  Getting a branch is a checkout away.

I guess ClearCase is meant for different (perhaps insane) things developers attempt, which is why dynamic views are the real shiny nubbins for that product.

So even though it is just me developing things for now, I am going to get into the habit of working in branches and merging to trunk like a good boy.  I always just worked in trunk during previous forays into Subversion, but now it is best practices, best practices, definitely best practices.  A little luck and I may need those practices.
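
The branch-and-merge rhythm I am talking about is only a handful of commands. A sketch against a throwaway local repository (the layout and branch names are made up, and this assumes a reasonably modern svn client that can merge a branch back to trunk without an explicit --reintegrate):

```shell
#!/bin/sh
# Branch-per-task workflow sketch, using a scratch repository over file://
# so nothing here touches a real server. Names are illustrative.
set -e
command -v svn >/dev/null 2>&1 || { echo "svn not installed; skipping"; exit 0; }

WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
REPO="file://$WORK/repo"

# Conventional layout: a trunk plus a branches directory.
svn mkdir -q -m "layout" "$REPO/trunk" "$REPO/branches"

# A branch is a cheap server-side copy; getting one really is a checkout away.
svn copy -q -m "start work on the maze tool" "$REPO/trunk" "$REPO/branches/maze-tool"
svn checkout -q "$REPO/branches/maze-tool" "$WORK/wc"

# ...hack and commit in $WORK/wc, then merge back into a trunk working copy.
svn checkout -q "$REPO/trunk" "$WORK/trunk-wc"
cd "$WORK/trunk-wc"
svn merge "$REPO/branches/maze-tool"
svn commit -q -m "merge maze-tool branch to trunk"
```

The branch can then be deleted (svn delete) without losing any history, which is what makes the habit cheap enough to keep.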

I am rustier in Java than I thought, so I spent a little time going over some of the weirder parts of the language.  One of my favorite occasions this year was a lazy Saturday and Sunday playing Crawl with a video stream of Notch writing his Ludum Dare entry Prelude of the Chambered on the other monitor.  Yes, that is what passes for fun for programmers.

It was (nerdily) cool watching an expert rush through the development of a pretty impressive game for the 48 hours (or so) it took to write.  The most interesting thing was some of the weird stuff I saw him do, like this little bit of magic:

import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class Sound {

    // Load a clip from a classpath resource; on any failure the Sound is
    // returned with a null clip, and play() becomes a no-op.
    public static Sound loadSound(String fileName) {
        Sound sound = new Sound();
        try {
            AudioInputStream ais = AudioSystem.getAudioInputStream(Sound.class.getResource(fileName));
            Clip clip = AudioSystem.getClip();
            clip.open(ais);
            sound.clip = clip;
        } catch (Exception e) {
            System.out.println(e);
        }
        return sound;
    }

    private Clip clip;

    // Rewind and restart the clip on a background thread so the caller
    // (the game loop) never blocks on the sound system.
    public void play() {
        try {
            if (clip != null) {
                new Thread() {
                    public void run() {
                        synchronized (clip) {
                            clip.stop();
                            clip.setFramePosition(0);
                            clip.start();
                        }
                    }
                }.start();
            }
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

This is part of his Sound object that plays the various effects for in-game events.  I hadn’t done anything with sound in Java before, and while this is simple, doing the same in C++ sure takes a lot more lines.  At least in Win32; creating a bunch of events and mutexes to synchronize a thread (and writing the thread code, not to mention the actual sound library stuff) makes for a lot more code.  The bit that Notch wrote here is pretty compact.  I watched him write this live, and the first few tries didn’t work, so I don’t know whether he had the general idea in his mind (e.g., the sound routines from Minecraft!) and hacked away at it, or whether he peeked at some old code to get a good method.  Is this real code from somewhere else?  If this is all that production-level Java code needs for at-least-usable sound effects, I think that’s neat.

By the way, the most painful thing about watching Notch work was that there are no comments.  I know he was on a nuttily tight schedule, but he went fast enough that it was a little too easy for him to leave commenting behind.  I shouldn’t count this as a good example of his work, even though I can’t help it; I hope Minecraft is well commented and maintainable.  I have spent the last five years beating commenting habits into myself, so it is compulsive at this point (compulsive is right…I am still terrible at it, and I make some of the most worthless comments you can imagine, but I can’t help it.  It’s either this way or no way!).  I wanted to at least write something about what he was doing for the above routines.  Best practices, definitely best practices.

Another aside:  I am very happy with Eclipse’s current incarnation.  It’s been a while for me and Eclipse, and the maintainers have made it into an impressive development environment.  As is obvious, it really shines when writing Java.  Most of my previous Java work has been a DevStudio→javac→java-type workflow, so it was clunky with no real debugging (just messages to the console, really).  Now that I am taking on a much larger-scale Java project, I am really grateful that something like Eclipse exists.  But I am sure I will find more to complain about with Eclipse in the coming months (I sure miss virtual space!).

Shelves ain’t Programming

I spent half of today finishing the prep for my development machine, which was mostly downloading and installing Eclipse and the Android SDK.  My previously-owned MacBook will follow later, since I am going to develop everything in straight Java first and then port the Java base code to Android and iOS.

I considered going with one of the cross-platform toolkits, but most of these cost money, and it seems that they don’t deliver on all of their promises.  I have also looked at some of the bytecode cross-compilers like XMLVM, and it just feels like too much trouble.  Add in that I believe I would learn more doing the ports myself, and I gave up on the whole single-code-base idea.  That has certainly been my experience trying to develop cross-platform; you end up with multiple teams tweaking the code to work on the different configurations.  Well, that was with C++…Java can be a different story.

The use of Java for my development projects is a major driver for me too.  It really is the closest thing to cross platform, at least from a browser standpoint (the aforementioned different story).  But I don’t want people bringing up a browser on iPhone to use my software.  I am holding out hope that iOS will support some form of Java directly instead of having to go with native development.  There are rumors…

The other half of the day was spent getting my Ubuntu server and my Shoutcast server out of my office.  The Shoutcast server is an old friend running Windows XP in a Shuttle breadbox form factor (much love).  I use it to distribute tunes around the house (mostly streaming with my phone using XiiaLive, an awesome Shoutcast app on Android).

Having both of these boxes in my office, on top of my normal development machine, makes for a lot of heat.  So moving things to the IT room (well, it’s our junk room!) has made for a big temperature difference.  Although I may have to move them back if it gets too cold this winter!

The biggest part of the effort was putting together and installing the Ikea shelf and mounting the new 16-port TrendNet switch that is the new backbone of my network (nearly all Gigabit now, except I can’t bear to replace my trusty router). The shelf ended up going pretty fast…well, as fast as an OCD engineer can go. The end result uses three supports and is super-stable, so I am quite happy with how it went.  It is pretty much perfect for putting the server PCs on.

The switch is a slightly different story.  I bought it assuming it had some sort of way to wall mount it, but this wasn’t the case.  Apparently this version is meant to be rack mounted, and so it had the little 1-inch square of screw holes for attaching a 1U bracket.  Well, this doesn’t work when you need to mount to an exposed stud.  I thought about buying something like a rackmount kit made by TrendNet, but I didn’t really want to wait.  And $15 (AntOnline isn’t Prime!) is a little more than I want to spend for a dumb bracket.

So I went out to Home Depot and got four cheap angle brackets.  I have tons of screws left over from PC builds, and I found a set of four that fit the little 1U holes.  A little bit of creative procedure (level, mark, drill, screw) and I got a pretty good result.  The switch is stuck to the wall like a barnacle and it looks alright.  But it’s in an unfinished room…the rough look is trendy for a junk space!

It ain't pretty, but it is born-again hard and ready to push bits.

I think that’s the last of the infrastructure stuff, except for getting my Sirius radio installed (I need that coax adapter!).  Next step is writing the tool to make the maze layouts for Project Alpha. Real code ho!

Ubuntu, Why So Angry?

I have concluded that Linux in general is an obtuse, ornery, and cruel operating system.  I have been coddled by sweet, sweet GUIs in my old age, and wrestling with a command-line-based OS is apparently almost beyond me nowadays.

And then there was icing on my Linux difficulties:  my main problem was hardware.  I read somewhere that when making a network-attached storage box using FreeNAS, it is best to put the OS on a compact flash card so that the hard drives stay fully available for storage.  I eventually discarded the FreeNAS idea because it is ironically kind of closed; you can’t really do anything with it except use it as a NAS, and I need a Subversion and wiki server more than the storage.  Add in that FreeNAS is based on FreeBSD and I am IT-impaired at this point, and FreeNAS became a quite unattractive choice.

So I got the bright idea to make my company server based on Ubuntu server, but for some reason I was stuck on the compact flash boot drive idea.  That was not a smart choice.

Ubuntu does something weird with a small boot drive (mine was 4GB, although it is really crazy to me to think that 4GB is small).  I installed OpenSSH, LAMP, Print Server, and Samba from the Ubuntu server CD and had relatively few problems.  Well, except that I kept reinstalling after breaking things beyond my meager Linux-fu to fix.  I saw just about every weird thing you can imagine, from a bash script choking on an unexpected left parenthesis to corrupted files to my Ethernet port dying for some reason.  People complain about Windows; how about “Segmentation fault” whenever you run ifup eth0?

The really crippling problem, though, was that the CF drive kept filling up.  I thought that all of the partitions Ubuntu created by default were large enough, but eventually an apt-get would fail due to lack of drive space.  I would run apt-get clean, and that would free some space, but not enough.  I think I had correctly moved MySQL to another drive, and I know the Subversion repository was in the right place; besides, I hadn’t committed any data to either of those anyway.  Filling up 1GB is easy for an operating system, but you would think Ubuntu’s installer would warn you that installing anything more than the system defaults is not a good idea with a 4GB main drive.
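
For anyone in the same spot, the triage commands are nothing exotic. A quick sketch (the cache path is the standard Debian/Ubuntu one; run the clean step with sudo on a real box):

```shell
#!/bin/sh
# Where did the space go on a small root drive?
df -h /                                     # overall root filesystem usage
du -sh /var/cache/apt 2>/dev/null || true   # downloaded .deb packages pile up here

# apt-get clean empties the package cache; it frees some space, but on a
# 4GB drive it was not enough for me.
# sudo apt-get clean
```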

Another pain: I had to start the install with the compact flash as the only drive connected.  Otherwise the CF came up as sdc, but the Ubuntu installer would write grub to sda.  Well, it hurt my OCD nodes to have booting happen on a drive that does not hold the operating system.  Silly, but my machines do what I say, not the other way around!

Oh, and I basically have to have OpenSSH.  The first time Ubuntu server decides to put the monitor to sleep when the operating system is idle, the monitor won’t wake up forevermore.  I am using the onboard graphics for the motherboard (an MSI 880GMA-35) and, from the cryptic googles I googled, it is something to do with the hardware.  But remote access through PuTTY is fine by me anyway.  This server will head to my IT closet now that it appears I don’t need to perform more surgery.

I don’t really blame Ubuntu/Linux, but rather my inexperience with things at this level.  I am sure there is some startlingly insane sys admin out there who could make my (previous) setup work, but I am only insane after messing with Ubuntu.  Hm, maybe I should try again…

So after two days, a spare 500GB hard drive pressed into service, and fighting an awful head cold, I have 90% of my development infrastructure ready for war.  That metaphor seems apt so far!

Rough First Day

Today is the first day of my new job as owner and operator of Zairon Engineering.  Things have been rough.  Or at least not going the way I want them to.

I have been fighting a cold, and last night it decided to kick in full force.  So I have been dealing with being sick all day, in addition to all of the tasks I had to do today.  I even postponed going to the city government to get the information for business licensing, just because I didn’t feel like it.

When I woke up, my priority was to get my Sirius radio connected here at the home office. I thought I had the SMB-female-to-F-male adapter that I need to connect the Sirius antenna to the coaxial cable running from the exterior of the house, but I only have the other kind (and three of those, to boot).  So I have to track one down, and they are hard to find, so I am stuck waiting until I get one delivered from somewhere.

After tabling that, I started back with my Ubuntu server, which I am going to be using for Subversion, wiki, and general file storage.  I needed to add a pair of 3TB disks in RAID1 for the general file storage (I have a pair of 2TB drives in RAID1 for Subversion/wiki already).  Plus, the on-board Ethernet for the motherboard doesn’t support Wake-on-LAN, so I was going to add a 1Gbit card I had at my old job to try to get that going.

I managed to zap my original Ubuntu server installation by accidentally pairing my compact flash (where Ubuntu was installed) with one of the 3TB drives and blasting the files when the RAID controller created the logical volume.  It was stupid of me, but I had some reasons:  originally the CF was on SATA5 and my DVD drive was on SATA6.  I hooked the 3TB drives up to SATA1 and SATA2, but then Ubuntu didn’t like that.  I switched everything around and then recreated the logical volume.  The first time, with the 3TB drives on SATA1 and SATA2, those drives were listed first; after the switch, the selections were the CF and then the 3TB drives.  I blindly chose the first two and didn’t pay attention.  Well, now I will.
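
From a running Linux system, at least, there is an easy way to avoid this kind of mix-up: list the block devices with their sizes before touching any RAID or partitioning tool, so a 4GB flash device cannot be mistaken for a 3TB disk. lsblk (part of util-linux, standard on Ubuntu) makes the sizes obvious:

```shell
#!/bin/sh
# Double-check which device is which before creating arrays or partitions.
# NAME/SIZE/MODEL makes a small flash device easy to tell from a big disk.
lsblk -o NAME,SIZE,MODEL,TYPE
```

(That doesn't help inside a BIOS RAID setup screen, of course, where the only defense is paying attention.)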

So I am reinstalling everything, including making a certificate for Apache and all that IT admin stuff.  I was hoping to get some quick prototyping on my maze engine started today, but I am guessing that I flushed the chances of that.
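
The certificate step, at least, is quick to redo. For a self-signed certificate that is good enough for a private Apache server, one openssl invocation covers it; the filenames and the CN below are placeholders, and -nodes leaves the key unencrypted so Apache can restart unattended:

```shell
#!/bin/sh
# Generate a self-signed certificate and private key for a private server.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt -subj "/CN=myserver.local"
```

Then Apache's SSLCertificateFile and SSLCertificateKeyFile directives point at the two outputs.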

But I am going to be optimistic!  I am just getting all of the stupid out before things get going.