Cooking with Grease

I always thought that using the cookbook method was an odd approach to programming. The idea is to abstract away some of the difficult low-level things so you can get on with the making instead of reinventing string stuff for the tenth time or whatever. This is done on a grand scale with things like DirectX and OpenGL, or even more intensely with something like BlitzBasic. Who wants to deal with low-level graphics programming when you could be making your games?

The main drawback of programming like that to me was that you can end up not understanding anything about the topic you used the cookbook for. It makes weird problems you inevitably encounter nearly impossible to debug. But for some things, you can’t avoid that; I certainly don’t understand how OpenGL triangle strips end up making nice objects on screen, and I never will. But now I find myself with an entire language apparently devoted to cookbook programming for nearly everything. That is, Java.

At least to me it seems like cookbook programming. You could (and people do) approach C++ the same way, for instance by using something like Boost or wxWidgets (especially cross-platform). But I always found the documentation to be either so hard to understand (in the case of Boost) or so chaotic (in the case of wxWidgets) that I ended up having to really look at the examples and classes I was using, and really understand them, to use them anyway. Don’t believe me? Take a look at one of the coolest things in Boost: the multi-index container. Awesome piece of code, but it is hard to get the hang of. The examples don’t cover the things it is really useful for; they mostly show things that other containers already do, or are too simple. So you have to dive deep into the tests and the source code to find out what is going on. That’s not really cookbook-style to me; it just saves me the trouble of actually writing the code, but I still have to know what’s going on.

Well, Java may not have a multi-index going for it, but it has a ton of stuff. And the kicker is, it is basically all built in, especially if you use Eclipse (see red squiggly, hover over, import the missing bits). One of the big hurdles to Boost or any other C++ library is shoehorning it into your project and build workflows. It can be a real pain to do. Java is just…there. And Boost/wxWidgets/Poco/etc. are all missing big chunks because trying to do academic things mixed with GUI seems to be a no-no. Java, by comparison, is a pretty complete solution.

So by having a ton of stuff, you can Google even weird problems and the solution is there. Just cut and paste, and you are cooking up a new program. This is definitely a good thing, just like Boost is; no need to reinvent the wheel. But I feel like I am missing out on the nuts and bolts of Java, which I guess is the idea. It is worrying, because I feel a bit like I am working without a net.

That is the big drawback to me so far: when something doesn’t function as expected or is just too limited. I always had the opportunity to dive into other open source libraries if I needed to see exactly what weirdness was going on, or to confirm the missing functionality, and often I could fix the problem and submit the bug report or modification to the project. Bureaucracy aside, the fix or enhancement would eventually get in, but I could patch things on my side for our next release (I only ever used licenses that allowed that, for obvious reasons) and everything was good to go.

I don’t know of a parallel in Java. If something doesn’t do what I need, then I don’t have a way to add in a modification that will do it, and bugs mean you just have to wait. Today’s issue for me is how the java.awt.Color object works.

Basically, it lacks the ability to multiply the color contained therein by either a scale factor or another color. To do that you have to throw away the previous instance and make a new one, which in my case could mean a lot of references tossed to the garbage collector, and it is a slow way to do such a simple thing for thousands of color nodes anyway. That makes the object useless for things like color-shifted bitmaps or lightening/darkening a color. There are functions to brighten or darken the color, but only by a fixed factor; you can’t specify the scaling yourself.

This is a nuisance, because what I would like to do is declare my own bitmaps that are arrays of Color objects, and then write their RGBA values directly to a BufferedImage and then blit said image to the screen. Instead, I either have to use arrays of integers for my bitmaps or make my own flavor of Color and use that.

Well, I chose the second path, and for my own purposes named it FastColor. It didn’t take long to write, but it seems inelegant. Java gives so much, but when you are out of luck the improvisation feels tacked on. You don’t have a net with Java, and I don’t know where I will go if something doesn’t exist and I can’t find a way around it. I always had the option to rewrite things at low levels if I needed to. It makes me want to rein in my ambitions when working with Java. I suppose that working low level in C/C++ is what the JNI is for…but I usually only need to tweak something, not rewrite the whole thing!
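My actual FastColor isn’t worth reproducing in full, but a sketch of the idea looks something like this (the field layout and method names here are illustrative, not my real class):

```java
// A mutable RGBA color: scaling happens in place, so shifting thousands of
// color nodes doesn't toss a pile of references at the garbage collector.
class FastColor {
    int r, g, b, a; // channels, 0..255

    FastColor(int r, int g, int b, int a) {
        this.r = r; this.g = g; this.b = b; this.a = a;
    }

    // Multiply the color channels by a scale factor, in place.
    void scale(float factor) {
        r = clamp((int) (r * factor));
        g = clamp((int) (g * factor));
        b = clamp((int) (b * factor));
    }

    // Multiply channel-wise by another color, treating channels as 0..1.
    void mul(FastColor o) {
        r = r * o.r / 255;
        g = g * o.g / 255;
        b = b * o.b / 255;
        a = a * o.a / 255;
    }

    // Pack into the ARGB int layout that BufferedImage.setRGB expects
    // for TYPE_INT_ARGB images.
    int toARGB() {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    private static int clamp(int v) {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }
}
```

Scaling by 0.7 gives the same sort of effect as Color.darker(), except you pick the factor and no new object is created.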

Oh, and I made good chili this weekend. The analog type. From a real cookbook.

Back Up Your Stuff

I am ultra-paranoid about backups. I basically don’t trust them unless I make them myself, and I honestly don’t believe that they are really good even then. But luckily my silliness ends there; I let the computer test them for ZIP integrity and leave it at that.

But it is hard to stop the “silliness”. Those files are all that programmers have to show for a lot of hard work. I think that other professions have it slightly worse, though. Other creative types can see their work, but usually can’t easily make copies. Although nowadays photographers are pretty much just like programmers as far as backups go…welcome to hell (next up: film directors). At least those other products don’t disappear if a hard drive fails, or some scum walks off with your laptop. I guess it depends on how light your products end up being. Well, files are just a bunch of magnetized sequences on a fragile platter or ink burns in plastic or some other method. Pretty easy to steal and ridiculously fragile.

So, after the incident at Project Zomboid where they were ripped off by burglars, I felt terrible. This is basically one of the horror scenarios my paranoia harangues about, and I can’t imagine how those guys felt. It must have been excruciating.

Way back when I was in charge of such stuff, I made daily backups to another machine and weekly backups to tape cartridges on a monthly rotation. Milestones and releases got a tape cartridge sent to a government bunker somewhere way offsite. I basically covered office, local, and regional disasters for our artifacts, because I felt this is what the company would want. It wasn’t much work for something worth a lot of money, but no one told me to do it.

Of course, I was a developer; it is more logical to me for IT or infrastructure guys to take care of backups. In my experience so far it seems that backups only usually go as far as the creators of the project. Some of it is purely an ownership question:  most people don’t care how office mates take care of their stuff as long as it doesn’t impact their island of responsibility. And I also think it’s a kind of miscommunication; you warn bosses and administrators that the codebase needs that protection, but it becomes very low priority mentally because it seems incredible that someone wouldn’t take care of it. And no one really believes a disaster will happen until it does. Unless you are paranoid!

Now being on my own, I get to do all the jobs, and one thing I want to do is make sure I have a good backup system. I want to utilize a wiki and source control, and basically keep everything electronic, and to me that means:

  • Daily automated backups of the wiki and Subversion repositories on the server to a hard drive that is safely protected from fire and water damage.  The automated backup script keeps the last 14 days of backups on the drive.
  • Weekly compressed mirrors of the whole server to an alternating pair of external hard drives kept in fire- and water-proof conditions when not in use.
  • Monthly backups to a pair of DVDs of the wiki and Subversion repositories. One DVD is kept locally and the other is stored offsite in fire- and water-proof conditions.

I could probably do weekly backups with two sets of four DVDs and send one set offsite rather than doing one DVD, but that’s a bit far, even for me. Maybe if things really take off that will become the practice I follow.

Now to make something worth taking these kinds of precautions.

Definite Lack of Genius

Today I spent some time working on playlists under iTunes.  It didn’t actually take too long; only a few albums (records? directories? whatever it is in these newfangled times) in my iTunes library were suitable for the occasion (a wedding reception) so I didn’t have to decide what to keep and what to leave out. Well, until I got to the naughty soul songs. But more on that some other day!

I did try to give the playlist the typical “band show” progression that I think most radio stations use as their algorithm for song choices in a daypart. I know on stage bands like to start out with a high-energy, well-known song, and then another high-energy song, but the third cut is kind of off-topic: a way to transition to the middle of the show. If you have enough of a repertoire, you pick an old song that is medium tempo, or you play a cover that fits that criterion. The middle of the show needs a bit of a lull, so you try out new material there. Then around the sixth or seventh song you play a big hit again but switch immediately back to new stuff (hey, maybe it will stick). The crucial bit is the 10th, 11th, and 12th songs; those are the ones people hear while deciding whether to stick around, so you play great songs with lots of energy. And save a big number for the encore. Radio stations do roughly the same thing, scaled to a two- or three-hour span divided into chunks rotating around commercial breaks.

So the algorithm is big, big, lull, big, lull, big, big, big. But I was planning out six hours of music (two is the minimum for a dinner, and six to be safe in case the party keeps on keepin’ on). I just couldn’t figure out a good pattern for 12-14 songs an hour. Maybe I should add commercial breaks?
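For fun, the pattern can be sketched as code. This is purely illustrative — the energy ratings and class names are invented, since iTunes obviously doesn’t expose an “energy” field:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Deque;
import java.util.List;
import java.util.Map;

class PlaylistBuilder {
    // big, big, lull, big, lull, big, big, big -- the stage-show pattern
    static final boolean[] BIG = { true, true, false, true, false, true, true, true };

    // songs maps title -> energy (an invented 0..10 rating). Big slots take
    // the highest-energy songs still unplayed; lulls take the lowest.
    static List<String> buildSet(Map<String, Integer> songs) {
        List<Map.Entry<String, Integer>> sorted =
                new ArrayList<Map.Entry<String, Integer>>(songs.entrySet());
        Collections.sort(sorted, new Comparator<Map.Entry<String, Integer>>() {
            public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                return Integer.compare(b.getValue(), a.getValue()); // descending energy
            }
        });
        Deque<String> ranked = new ArrayDeque<String>();
        for (Map.Entry<String, Integer> e : sorted) {
            ranked.addLast(e.getKey());
        }
        List<String> set = new ArrayList<String>();
        for (boolean big : BIG) {
            // big slot: pull from the high-energy end; lull: the low-energy end
            set.add(big ? ranked.pollFirst() : ranked.pollLast());
        }
        return set;
    }
}
```

Scaling that to six hours is exactly the unsolved part — repeating one 8-song cell 10+ times is where it stops feeling like a show.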

So I tried out Apple’s Genius Mixes or Playlists, but it was too hard to get the lists just right because I think the genres for some of the songs in my library were a little off. It’s no good when Marvin Gaye suddenly is followed by a Slayer song. I didn’t know they were country.

I think it is interesting nowadays when I see companies like Zynga or Blizzard using psychological manipulation techniques that the music business has been using for years. The principles of gamification are the same ones at work when a DJ teases your favorite song right before the commercial break. String along the addict so that they remain exposed to the product for the maximum time. The longer you are exposed to the product, the more likely you are to spend money. This is the only way a free service can make money, and radio has pretty much always been free (until Sirius and XM appeared). Radio had to learn all the lessons of time spent listening decades ago, but game designers are just now implementing those lessons in games for time spent playing. I wonder why the music business isn’t applying the same devilish cleverness it honed over the past eight decades to its on-line businesses? They would rather just sue.

My Old Friend fdebug

I suppose all programmers have code that they have carried with them for a long time. I have a bit of code that I have been developing for decades, from nearly the first programs I wrote.

I don’t remember where I got the idea to make this kind of library, but I remember when things started to look like the current version. I ordered a program called OpenDoors from a programmer named Brian Pirie in Canada. I hope that’s his name…this was a long time ago.  To wit, the purpose of OpenDoors was to abstract the modem/BBS interface so you could write software to work with old school bulletin boards. Definitely pre-everyone-has-it Internet.

Anyway, to work with OpenDoors I couldn’t really have much debugging output on screen, since I had it filled up with all of my awesome ASCII art.  So I took some code that I had been using while learning Pascal and C that wrote printf-formatted text to files for debugging and adapted it to OpenDoors.  Now I had code that would write to a file given print statements with a macro called dpr(), and it would even transmit that over the modem! An even nicer touch I had was that this macro was only a print statement if I had a #define DEBUG in the file it appeared in.  Otherwise, it was a C-style line comment. This meant that I could completely remove the printing from the runtime version of the code by taking out #define DEBUG in whatever files I wanted.

Over the years, this facility evolved, mostly in my professional applications in my old job. I added capitalized versions that would print regardless of the #defines around, versions that would print out the line number, file, and time, and a version that had logging levels (plus bunches of other neat features). The log levels were nice to have too; you could take out the printing at only the cost of an if comparison to the current log level.

But all of these previous versions started as Pascal, then evolved to C, and eventually to C++.  I spent a lot of today making a Java version.  Unfortunately, Java doesn’t really have conditional compilation as far as I can tell, but I want to find a way to easily take the printing out (I have some ideas).  I always thought of that as the nicest feature of the fdebug library.  For Java I think I will have to be satisfied with log levels.
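One of those ideas, for the record, is the closest Java idiom I know of to #define DEBUG: a static final boolean. Because it is a compile-time constant, javac is allowed to drop the guarded code from the bytecode entirely when it is false. A sketch with placeholder names (dpr is the old macro name; everything else here is invented):

```java
class DebugLog {
    // Flip to false for the release build; because this is a compile-time
    // constant, javac can omit the guarded print calls from the bytecode.
    static final boolean DEBUG = true;

    // Runtime log level: messages above this level are skipped.
    static final int LOG_LEVEL = 2;

    // printf-style debug print, like the old dpr() macro.
    static void dpr(int level, String fmt, Object... args) {
        if (DEBUG && level <= LOG_LEVEL) {
            System.out.printf(fmt, args);
        }
    }
}
```

It isn’t as clean as having the preprocessor rip the lines out, but with DEBUG false the release cost is zero rather than an if per call.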

I went ahead and named the package project Logbook, but I didn’t really want to. I always wanted to keep the name of the library fdebug (that’s file debugger, btw) but for various reasons we changed it to logbook later. What a boring name…no abbreviations or hidden meanings or anything.

Java has a lot of conventions about capitalization/camel case, and I thought Fdebug looked silly.  So Logbook it is. Oh, fdebug, we hardly knew ye.

I should mention that I basically recreated Logbook from scratch in Java in about 2 hours. I was a little ashamed, or scared maybe? Eclipse can write a lot of code for you, which seems to work a lot better than the last time I did anything significant with Eclipse. It is really awesome to have the compiler warn you that you forgot to handle a particular exception, but to make the Twinkie completely fried in luscious batter the editor will let you surround the offending area with a try/catch in one action.  That’s neat, and it works better than something like Visual Assist X.  That’s a great program, but C++ keeps it from doing the awesome things that Eclipse can do when you are using it for Java.

Completely off topic:  I have never loaded up a game that is so playable, and yet so rife with bugs as Dead Island. Generally when a game has this many problems there are intrinsic flaws in the design that make it all around a stinky affair. But Dead Island is fun, and it is double, triple, or quadruple fun in multiplayer. If you can get the program working…and then if it is working, there are lots of little weirdnesses and problems that can be frustrating.  I am glad people have kept with it, because the game is a really cool thing and the more people keep playing it the more fixes the developers will upload. I just hope that the rage out there keeps simmering until they can fix the game. But there are too many good things coming out in the next few months for me to keep my hope alive.

Whitespace, Shmitespace

I finally have my Sirius radio installed! I have an XM already installed here, and it is aimed perfectly; I get tons of satellite signal.  However, aiming the Sirius was a little problematic. The instructions say the satellite is over the northern Midwest, so if you aim at North Dakota from wherever you are, your antenna has a good aspect toward the satellite transmitter. For whatever reason that method is not working. I wonder if, since Sirius and XM merged, things are working a bit differently?  I would think that I would know about any new satellites, but I can’t explain why my current aiming gets any signal at all.  My guess is that all of the houses around are causing problems, but I get a good view with the XM. Radio engineering is complicated.

Most of today was spent learning how Java handles XML.  It’s like every other XML parser out there (it is basically just Xerces, I think), and at this point it is old hat.  But I still need to write my little playground tests to see just how things work. Inevitably, there is a wrinkle!

For some reason the Java (I am using Java SE 1.7) XML Text class does not correctly report when a node is only whitespace.  I don’t understand the purists who insist that all of the tabs, newlines, returns, and spaces should be preserved in every case.  They are pretty much useless if you are just trying to grab data.  I understand needing them if you want to replicate a document exactly, but every parser should be able to discard those nodes during parsing, which would reduce the amount of checking for these useless pieces when you are actually doing something with the XML.

I may be doing something wrong, but from looking at the docs I think everything is correct. In fact, I set setIgnoringElementContentWhitespace to true on the DocumentBuilderFactory and everything still had the various whitespace thingies.  From looking around the web it seems that this is a regression in Java SE 1.6, but I figured it would be fixed in 1.7? Guess not.
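For reference, the setup I’m describing boils down to something like this sketch (the helper name is mine). One caveat I ran into while reading: the Javadoc for setIgnoringElementContentWhitespace says it relies on the element content model, which suggests it only takes effect when the parser is validating against a DTD or schema — and my plain parse isn’t:

```java
import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

class WhitespaceDemo {
    // Parse a small document and count the root's child nodes.  Without
    // validation, the whitespace around <b/> survives as two text nodes.
    static int rootChildCount(String xml) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setIgnoringElementContentWhitespace(true); // no effect here
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            return doc.getDocumentElement().getChildNodes().getLength();
        } catch (Exception e) {
            return -1;
        }
    }
}
```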

So I added a method to my parser class (as many people said had to be done):

/**
 * Return whether a text node is whitespace only or not
 * @param tn The text node to check
 * @return true if the node is whitespace only, false if it has content other than whitespace
 */
public boolean isNodeWhitespaceOnly(Text tn)
{
    // when isElementContentWhitespace starts working again, just return that;
    // for now, iterate through the content
    String value = tn.getTextContent();

    for (int i = 0; i < value.length(); ++i)
    {
        switch (value.charAt(i))
        {
        case '\n':
        case '\r':
        case '\t':
        case ' ':
            break;

        default:
            // non-whitespace found
            return false;
        }
    }

    // only whitespace found
    return true;
}

This works fine for testing Text nodes, but it is a bummer to have to incur this overhead when it could easily be handled during the initial parse.  I hate to stay focused on this much longer, but I have to rule out my own Java ignorance. I need to make myself believe that Oracle was lazy and didn’t fix the bug in 1.7.  Maybe I can make a tape repeating that while I sleep.

Anyway, with the basics of XML figured out, next is writing the parser that stores the maze XML data in Maze class data structures.  Things are heating up!

Hierarchy of Dunces

XML is a powerful tool.  Having the ability to store data in a text file using a method that resembles the way you would do the same thing with data structures and containers in code is a big deal.

Or, at least I see it that way.  Before I heard of XML, every implementation of data storage or retrieval involved me writing up some code (hopefully cutting and pasting from some other similar program) and writing binary to the data files.  This inevitably includes writing some sort of viewer program, which takes even more time.  I am not one of those people who can look at a hex dump and see the blondes in the Matrix.

With XML technology, your text editor is your viewer.  And it’s quite simple to think in terms of XML.  I can visualize the database I need and the XML layout and support code is nearly already written in my mind, since the technique lends itself to hierarchies of data. Hierarchical databases are nearly all I use anymore, and that makes me wonder.

I tend to fall in that group of people who finds a new toy and proceeds to try to solve every problem with it.  Just like the old Maslow’s Hammer: “if all you have is a hammer, everything looks like a nail.”  I usually have lots of tools in addition to that hammer, but I use the hammer anyway.  Because it’s new, and it rules.

I think I am falling into that trap with XML.  I can’t really remember what I did for database design before I got the hang of XML.  Now everything I come up with is a big top node that contains a series of nodes that themselves have a bunch of nodes, ad nauseam. So the software that parses that collection of nodes looks just like the XML:  a map of the first tier, and then that tier is whatever container fits it best until I get to the bottom of the DOM tree and find that nugget of data I am looking for.

Are relational databases so far behind me? I think I need to step back sometime and ask whether XML is a good solution for everything (or perhaps anything).  I am probably permanently infected; as soon as I typed “relational database” I tried to think of how I would write XML that could describe one.  I know a relational database can be described in XML, but relational data is flat, and flat data tends to be better stored in flat files.  Describing a relational database with XML is like putting tighty whities on a horse.  Horses don’t need underwear, don’t like underwear, and they really stretch out the leg holes. How about a nice saddle?

The problem I am tackling is like this:

A Maze consists of Floors that each contain one or more Pieces, each of which consists of one or more Chunks, where a Chunk has various numbers of Rooms, Corridors, and Vaults.  So I lay this out mentally in XML like:

<maze>
    <floor>
        <piece>
            <chunk>
                <corridor/>
                <room/>
                <vault/>
            </chunk>
        </piece>
    </floor>
</maze>

It seems logical, and I know what attributes I need for each type of element, what data structures I would parse those XML elements and attributes into, and what containers would give me the fastest search times for each element type given how I will be using the data.
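The support code really does fall straight out of the XML. A sketch of the classes (the containers here are just placeholders until I pick whatever wins the search-time contest):

```java
import java.util.ArrayList;
import java.util.List;

// One class per element type, mirroring the XML hierarchy above.
class Maze {
    final List<Floor> floors = new ArrayList<Floor>();
}

class Floor {
    final List<Piece> pieces = new ArrayList<Piece>();
}

class Piece {
    final List<Chunk> chunks = new ArrayList<Chunk>();
}

class Chunk {
    final List<Room> rooms = new ArrayList<Room>();
    final List<Corridor> corridors = new ArrayList<Corridor>();
    final List<Vault> vaults = new ArrayList<Vault>();
}

class Room {}
class Corridor {}
class Vault {}
```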

But I worry when I see a hierarchy in my mind these days.  Am I just whacking the problem with my hammer?  Maybe there is a crowbar around here somewhere…

Global Ignore All The Files!

One thing about source control is that you can get a lot of cruft in your repository (sounds naughty).  Since I am (at least as far as using it for something important in a professional capacity) new to Subversion, I had to deal with that problem today.

A couple of deleted repositories later and I think I have it all tuned.  I found a nice page on StackOverflow that had a good list of file globs to add to Subversion’s global-ignore configuration item.  I had to add a couple of extensions that I have been using for years in my various endeavors (*.out as an example) and now I think my configuration covers C, C++, C# and Java’s worthless effluvia.
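For anyone hunting for the setting: it lives in the [miscellany] section of ~/.subversion/config (or %APPDATA%\Subversion\config on Windows). Mine ended up roughly like this — the stock glob list plus additions such as *.out and the Java bin directory:

```
[miscellany]
global-ignores = *.o *.lo *.la *.al .libs *.so *.a *.pyc *.pyo
    *.obj *.exe *.pdb *.ilk *.suo *.ncb *.user
    *.class *.out bin Debug Release
    *~ #*# .*.swp .DS_Store Thumbs.db
```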

Well, the Java case is a bit sketchy.  It seems that ignoring the bin directory pretty much leaves Subversion grabbing only meaningful Java stuff during an import or a commit. Subversion still indulged in some weirdness, though (probably because I did something weird).  I will have to keep an eye out for junk files.

I have experience with Visual SourceSafe and ClearCase in a work environment, and luckily I didn’t have to do much with their actual administration parts.  After a while you kind of get how versioning systems are supposed to work, and I think that starting with something as dead simple as SourceSafe and then moving to ClearCase was a good thing.  ClearCase is high octane and would have been too much for us as a team to deal with on day one in the blessed land of source control.  But having been in the ring with ClearCase (and lost many metaphorical battles) going to Subversion now has been relatively painless.

My favorite thing about Subversion is that it has encouraged me to use branches as they are intended, something that is a little obscured in ClearCase (at least the way we used it). I hope I have the comparisons correct:  a VOB is like a repository, and a view is like a working copy.  So working in a branch in Subversion is like having a view that you will eventually discard or merge into the main path.  We actively did not keep a lot of views around because of server storage space and the pain it took to make them (even though I pretty much used snapshots myself rather than dynamic views, which is nearly the same as a working copy with Subversion, except for the annoying read-only files).  With Subversion all of this is extremely streamlined and seemingly more integrated into Subversion’s design.  Getting a branch is a checkout away.

I guess ClearCase is meant for different (perhaps insane) things developers attempt, which is why dynamic views are the real shiny nubbins for that product.

So even though it is just me developing things for now, I am going to get into the habit of working in branches and merging to trunk like a good boy.  I always just worked in trunk during previous forays into Subversion, but now it is best practices, best practices, definitely best practices.  A little luck and I may need those practices.

I am rustier in Java than I thought, so I spent a little time going over some of the weirder parts of the language.  One of my favorite occasions this year was a lazy Saturday and Sunday playing Crawl with a video stream of Notch writing his Ludum Dare entry Prelude of the Chambered on the other monitor.  Yes, that is what passes for fun for programmers.

It was (nerdily) cool watching an expert rush through the development of a pretty impressive game for the 48 hours (or so) it took to write.  The most interesting thing was some of the weird stuff I saw him do, like this little bit of magic:

public static Sound loadSound(String fileName) {
    Sound sound = new Sound();
    try {
        AudioInputStream ais = AudioSystem.getAudioInputStream(Sound.class.getResource(fileName));
        Clip clip = AudioSystem.getClip();
        clip.open(ais);
        sound.clip = clip;
    } catch (Exception e) {
        System.out.println(e);
    }
    return sound;
}

private Clip clip;

public void play() {
    try {
        if (clip != null) {
            new Thread() {
                public void run() {
                    synchronized (clip) {
                        clip.stop();
                        clip.setFramePosition(0);
                        clip.start();
                    }
                }
            }.start();
        }
    } catch (Exception e) {
        System.out.println(e);
    }
}

This is part of his Sound object that plays the various effects for in-game events.  I hadn’t done anything with sound in Java before, and while this is simple, doing the same in C++ sure takes a lot more lines.  At least in Win32; creating a bunch of events and mutexes to synchronize a thread (and writing the thread code, not to mention the actual sound library stuff) makes for a lot more code.  The bit that Notch wrote here is pretty compact.  I watched him write this live, in detail, and the first few tries didn’t work, so I don’t know if he had the general idea in his mind (e.g., the sound routines from Minecraft!) and hacked away at it, or if he peeked at some old code to get a good method.  Is this real code from somewhere else? If this is all the code production-level Java needs for at-least-usable sound effects, I think that’s neat.

By the way, the most painful thing about watching Notch work was how there are no comments.  I know he was on a nuttily tight schedule, but he went fast enough that it was a little too easy for him to leave commenting behind.  I shouldn’t count this as a good example of his work, even though I can’t help it; I hope Minecraft is well commented and maintainable.  I have spent the last five years beating commenting habits into myself, so it is compulsive at this point (compulsive is right…I am still terrible at it, and I make some of the most worthless comments you can imagine, but I can’t help it.  It’s either this way or no way!).  I wanted to at least write something about what he was doing for the above routines.  Best practices, definitely best practices.

Another aside:  I am very happy with Eclipse’s current incarnations nowadays.  It’s been a while for me and Eclipse, and the maintainers have made it into an impressive development environment.  As is obvious, it really shines when writing Java.  Most of my previous Java work has been a DevStudio→javac→java-type workflow, so it was clunky with no real debugging (just messages to the console, really).  Now that I am taking on a much larger scale Java project, I am really grateful that something like Eclipse exists.  But I am sure I will find more to complain about with Eclipse in the coming months (I sure miss virtual space!).

Shelves ain’t Programming

I spent half of today finishing the prep for my development machine, which was mostly downloading and installing Eclipse and the Android SDK.  My previously-owned MacBook will follow later, since I am going to develop everything in straight Java first and then port the Java basecode to Android and iOS.

I considered going with one of the cross-platform toolkits, but most of these cost money, and it seems that they don’t deliver on all of their promises.  I have also looked at some of the bytecode cross-compilers like XMLVM, and it just feels like too much trouble.  Add in that I believe I would learn more doing the ports myself, and I gave up on the whole single-code-base thing.  That has certainly been my experience trying to develop cross-platform; you end up with multiple teams tweaking the code to work on the different configurations.  Well, that was with C++…Java can be a different story.

The use of Java for my development projects is a major driver for me too.  It really is the closest thing to cross platform, at least from a browser standpoint (the aforementioned different story).  But I don’t want people bringing up a browser on iPhone to use my software.  I am holding out hope that iOS will support some form of Java directly instead of having to go with native development.  There are rumors…

The other half of the day was spent getting my Ubuntu server and my Shoutcast server out of my office.  The Shoutcast server is an old friend running Windows XP in a Shuttle breadbox form factor (much love).  I use it to distribute tunes around the house (mostly streaming with my phone using XiiaLive, an awesome Shoutcast app on Android).

Having both of these boxes in my office added to my normal development machine makes for a lot of heat.  So moving things to the IT room (well, it’s our junk room!) has made for a big temperature difference.  Although I may have to move them back if it gets too cold this winter!

The biggest part of the effort was putting together and installing the Ikea shelf and mounting the new 16-port TrendNet switch that is the new backbone of my network (nearly all Gigabit now, except I can’t bear to replace my trusty router). The shelf ended up going pretty fast…well as fast as an OCD engineer can go. The end result uses three supports and is super-stable, so I am quite happy with how it went.  It is pretty much perfect for putting the server PCs on.

The switch is a slightly different story.  I bought it assuming it had some way to wall-mount it, but that wasn’t the case.  Apparently this version is meant to be rack-mounted, so it has the little 1-inch square of screw holes for attaching a 1U bracket.  Well, that doesn’t work when you need to mount to an exposed stud.  I thought about buying something like TRENDnet’s rackmount kit, but I didn’t really want to wait.  And $15 (AntOnline isn’t Prime!) is a little more than I want to spend on a dumb bracket.

So I went out to Home Depot and got four cheap angle brackets.  I have tons of screws left over from PC builds, and I found a set of four that fit the little 1U holes.  A little bit of creative procedure (level, mark, drill, screw) and I got a pretty good result.  The switch is stuck to the wall like a barnacle and it looks alright.  But it’s in an unfinished room…the rough look is trendy for a junk space!

It ain't pretty, but it is born-again hard and ready to push bits.

I think that’s the last of the infrastructure stuff, except for getting my Sirius radio installed (I need that coax adapter!).  Next step is writing the tool to make the maze layouts for Project Alpha. Real code ho!

Ubuntu, Why So Angry?

Linux in general, I have concluded, is an obtuse, ornery, and cruel operating system.  I have been coddled by sweet, sweet GUIs in my old age, and wrestling with a command-line-based OS is apparently almost beyond me nowadays.

And then there was icing on my Linux difficulties:  my main problem was a hardware decision.  I read somewhere that when building a network-attached storage box with FreeNAS, it is best to put the OS on a compact flash card so you get full use of the hard drives.  I eventually discarded the FreeNAS idea because it is, ironically, kind of closed; you can’t really do anything with it except use it as a NAS, and I need a Subversion and wiki server more than the storage.  Add in that FreeNAS is based on FreeBSD and that I am IT-impaired at this point, and FreeNAS became a quite unattractive choice.

So I got the bright idea to base my company server on Ubuntu Server, but for some reason I stayed stuck on the compact flash boot drive idea.  That was not a smart choice.

Ubuntu does something weird with a small boot drive (mine was 4GB, although it is really crazy to me that 4GB now counts as small).  I installed OpenSSH, LAMP, Print Server, and Samba from the Ubuntu Server CD and had relatively few problems.  Well, except for having to keep reinstalling after breaking things beyond my meager Linux-fu to fix.  I saw just about every weird thing you can imagine, from bash complaining about an unexpected left parenthesis to corrupted files to my Ethernet port dying for no apparent reason.  People complain about Windows; how about “Segmentation Fault” as the error whenever you run ifup eth0?
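For what it’s worth, when ifup fell over I eventually learned you can poke at the interface with the lower-level iproute2 tools instead of the ifupdown scripts. A minimal sketch of the sanity checks (eth0 is just the interface name from my box; yours may differ, and the commented lines need root):

```shell
# List every interface and its link state.  The loopback "lo" should
# always appear, so this doubles as a check that the tools are alive.
ip link show

# Bring a specific interface up by hand, bypassing ifup entirely
# (interface name is an example; needs root):
# sudo ip link set eth0 up

# See whether the kernel actually has an address assigned anywhere.
ip addr show
```

If `ip link` works while `ifup` segfaults, at least you know the kernel and driver are fine and the problem is in the userland scripts.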

The really crippling problem, though, was that the CF drive kept filling up.  I thought the partitions Ubuntu created by default were large enough, but eventually an apt-get would fail for lack of free space.  I would run apt-get clean, and that would appear to free some room, but not enough.  I believe I had correctly moved MySQL to another drive, and I know the Subversion repository was in the right place; besides, I hadn’t committed any data to either of those yet.  Filling up 1GB is easy for an operating system, but you would think Ubuntu’s installer would warn you that installing anything beyond the system defaults is not a good idea with a 4GB main drive.
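Looking back, a few minutes at the shell would have told me exactly where the space was going. A sketch of the checks I should have run (paths are the usual Ubuntu defaults; the cleanup line is commented out since it needs root):

```shell
# Free space per mounted filesystem, human-readable.  A nearly-full
# root partition shows up here immediately.
df -h

# Which directories under /var are eating the space; the apt package
# cache in /var/cache/apt is a common offender on a tiny drive.
du -sh /var/* 2>/dev/null | sort -h | tail -n 5

# Reclaim the cached .deb files apt keeps around (needs root):
# sudo apt-get clean
```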

Another pain: I had to start the install with the compact flash as the only drive connected.  Otherwise, the CF came up as sdc, but the Ubuntu installer would write GRUB to sda.  Well, it hurt my OCD nodes to have booting happen on a drive that doesn’t hold the operating system.  Silly, but my machines do what I say, not the other way around!
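The device-naming shuffle is easier to survive if you check what the kernel is calling each drive before you commit to anything. A quick sketch (the sda/sdc names are just the ones from my story):

```shell
# Show every block device with its size, type, and mount point; this
# makes it obvious which of sda/sdb/sdc is the little CF card and
# which are the big data drives.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Filesystem UUIDs stay stable even when sda/sdb/sdc shuffle around,
# so GRUB and /etc/fstab can refer to those instead of raw device
# names (blkid typically needs root to see everything):
# sudo blkid
```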

Oh, and I basically have to have OpenSSH.  The first time Ubuntu Server decides to put the monitor to sleep when the system is idle, the monitor won’t wake up forevermore.  I am using the onboard graphics of the motherboard (an MSI 880GMA-35), and from the cryptic googles I googled, it is something with the hardware.  But remote access through PuTTY is fine by me anyway.  This server will head to my IT closet now that it appears I don’t need to perform more surgery.

I don’t really blame Ubuntu/Linux, but rather my inexperience with things at this level.  I am sure there is some startlingly insane sys admin out there who could make my (previous) setup work, but I am only insane after messing with Ubuntu.  Hm, maybe I should try again…

So after two days, a spare 500GB hard drive pressed into service, and fighting an awful head cold, I have 90% of my development infrastructure ready for war.  That metaphor seems apt so far!

Rough First Day

Today is the first day of my new job as owner and operator of Zairon Engineering.  Things have been rough.  Or at least not going the way I want them to.

I have been fighting a cold, and last night it decided to kick in full force.  So I have been dealing with being sick all day, in addition to all of the tasks I had to do today.  I even postponed going to the city government to get the information for business licensing, just because I didn’t feel like it.

When I woke up, my priority was to get my Sirius radio connected here at the home office. I thought I had the SMB-female-to-F-male adapter I need to connect the Sirius antenna to the coaxial cable running from the exterior of the house, but I only have the other kind (and three of those, to boot).  So I have to track one down, and they are hard to find, so I am stuck waiting until one gets delivered from somewhere.

After tabling that, I went back to my Ubuntu server, which I am going to use for Subversion, a wiki, and general file storage.  I needed to add a pair of 3TB disks in RAID1 for the general file storage (I already have a pair of 2TB drives in RAID1 for Subversion/wiki).  Plus, the onboard Ethernet on the motherboard doesn’t support Wake-on-LAN, so I was going to add a 1Gbit card from my old job to try to get that going.

I managed to zap my original Ubuntu server installation by accidentally pairing my compact flash (where Ubuntu was installed) with one of the 3TB drives and blasting the files when the RAID controller created the logical volume.  It was a stupid mistake, but I have some excuses:  originally the CF was on SATA5 and my DVD drive was on SATA6.  I hooked the 3TB drives up to SATA1 and SATA2, but Ubuntu didn’t like that, so I switched everything around and then recreated the logical volume.  The first time, with the 3TB drives on SATA1 and SATA2, those drives were listed first; now the selections were the CF and then the 3TB drives.  I blindly chose the first two and didn’t pay attention.  Well, now I will.

So I am reinstalling everything, including making a certificate for Apache and all that IT-admin stuff.  I was hoping to start some quick prototyping on my maze engine today, but I am guessing I flushed the chances of that.

But I am going to be optimistic!  I am just getting all of the stupid out before things get going.