About Me

A little info about myself.

I am by no means a PowerShell[1] power user, but occasionally I tinker when I have a need.  Recently I was doing some cleanup of my audio directories and spent some time finding and removing duplicate MP3s.  After doing so, the thought occurred that I may now have numerous empty directories that ought to be cleaned up.  So I set out to find a quick PowerShell script to do the job.  After a few minutes of searching I didn’t find anything that did exactly what I wanted.  I did find an old article that showed how to do it in a batch file[2].  But I don’t like running scripts that delete stuff from my computer if I don’t understand every little switch and command (which I didn’t on this page, and didn’t care to).  Furthermore, I wanted to do it in PowerShell.  The biggest unknown to me was the simplest way to find all the empty folders.  If I could get that, then the rest should be easy.  I quickly found a TechNet tip[3] that showed a simple way to do that.  The rest was pretty simple from there.  It’s quick and dirty, but it did the job.

I am doing this in two steps.  Many of you PowerShell gurus will probably judge me for not doing it in one.  But I like to break things up into simple logical steps for clarity.

The first step was to enumerate all of the folder objects from the desired directory into an array.

$allFolders = Get-ChildItem C:\targetfolder -recurse | Where-Object {$_.PSIsContainer -eq $True}

From there it was a simple matter of using a ForEach and an If statement to iterate through each object and delete it if it was empty.

ForEach ($folder in $allFolders){if ($folder.GetFiles().Count -eq 0 -and $folder.GetDirectories().Count -eq 0){rmdir $folder.FullName}}

Now there is one major flaw with this approach.  Since it is looking at a single directory at a time, you can’t delete an entire chain of empty directories with a single pass.  This only deletes folders that don’t have a file or folder in them (the deepest folders in a chain).  This is because the script does not know whether a sub-folder has a file or not until it gets to it.  You may have to run it several times to get all the directories deleted.  Of course, we could write more logic around this to loop the “delete loop” until there is nothing left to delete.  But this was sufficient for my needs (it only took two passes for me).
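For the curious, a minimal sketch of that “loop until nothing is left to delete” idea might look something like this (using the same example path as above):

```powershell
# Repeat the delete pass until a pass removes nothing, so whole
# chains of empty directories get cleaned up in a single run.
do {
    $emptyFolders = @(Get-ChildItem C:\targetfolder -Recurse |
        Where-Object { $_.PSIsContainer -and
                       $_.GetFiles().Count -eq 0 -and
                       $_.GetDirectories().Count -eq 0 })
    ForEach ($folder in $emptyFolders) { rmdir $folder.FullName }
} while ($emptyFolders.Count -gt 0)
```

Each pass only removes the deepest empty folders, so the loop simply keeps going until a pass finds nothing to remove.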

Hope it helps.

As mentioned in a previous post[1], I recently had to set up a new KMS host for our company in order to support KMS activation for the latest Windows OSs.  Since we started from scratch with a new server, I also wanted to make sure that KMS activation for both Office 2010 and Office 2013 was working from the same server (which was how the old KMS host was configured).  However, the behavior of Office KMS activation is not as well documented as it is for OS activation.  As you may already know, Microsoft operating systems have a hierarchical nature to them[2].  You choose the ‘highest’ OS in that hierarchy that you are licensed for and apply that KMS key to the KMS host.  Once done, that OS and all ‘lower’ (as well as older) OSs activate as well.  Fairly simple, and only one key is needed to activate your entire organization.

But there was no clear-cut documentation on whether or not that was the case with the Office activations.  But before we get into that any further, let’s first talk about what we did know.

The default KMS licensing service is not aware of any Office KMS keys, so in order to install one you must first install a ‘hotfix’ that makes the service aware of the Office products.  There are currently two available: the Microsoft Office 2010 KMS Host License Pack[3] and the Microsoft Office 2013 Volume License Pack[4].  On our old server we had installed the 2010 license pack when Office 2010 was released.  Then, a few years later, we installed the 2013 pack.

Now back to the issue at hand.  The documentation does not mention anything about the relationship between these two products.  Are they independent of each other, or are they, like the OS KMS keys, hierarchical by nature?  I made the assumption that they were hierarchical and simply applied the 2013 pack.  However, my assumption proved wrong.  Office 2013 was activating just fine but Office 2010 was not.  The obvious solution at this point was to simply apply the 2010 pack and move on.  However, the fact that I was in an untested and undocumented scenario (the Office 2013 pack being installed before the 2010 pack) made me hesitant to just test that theory on our production server.  We don’t really have test or dev instances of KMS hosts either, because of the limited number of host activations we get with each key (as well as the complicated process of waiting for the minimum client counts before the host starts to actually work).

So I decided to take the hard road and actually ask Microsoft for official documentation on the proper way to set up your server to activate not only all of our licensed OSs but also both versions of Office.  As many of you may have experienced, it was a long and painful road, and in the end I never really got a clear answer from them.  Although my favorite response was a voice mail I got from the Microsoft Volume Licensing team which said:

“both products can coexist on the same server. However there may, and we stress the word may, be some application issues.  We recommend one product version per server.  But hypothetically you could run both”

This, to me, was basically a non-answer and provided no help.  But a few days later I received a phone call from the same group and had an actual conversation.  The most vital piece of information I received from that conversation was that the Office team treats these products independently and that there was no hierarchical nature intended.  To me that confirmed that I did indeed need to install both key packs and that they ‘should’ not conflict with each other.  With that information I went ahead and installed the Office 2010 key pack and, much to my relief, everything worked fine.  Our new KMS host now activates all OSs as well as Office 2010 and Office 2013.
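For anyone setting up a similar host: once each license pack is installed, the corresponding Office KMS host key is installed and activated with slmgr, just like an OS key.  A rough sketch (the key shown is a placeholder, not a real key):

```bat
REM install the Office KMS host key (placeholder) and activate it
cscript %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
cscript %windir%\system32\slmgr.vbs /ato
REM list all installed licenses to confirm both Office channels are present
cscript %windir%\system32\slmgr.vbs /dlv all
```

The /dlv all output is the easiest way to confirm that both the 2010 and 2013 license channels are present and counting client requests.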

My only hope is that Microsoft learned something from this as well and will update their incomplete documentation.

Many years back my wife and I made the decision to cut the cord from the major cable companies and rely on the ‘a la carte’ model exclusively for our media needs.  For the most part it has been without regret, and I love the fact that my kids have grown up in a world mostly free of commercials (they are confused and annoyed by them when they see them at friends’ homes).  But one of the main sacrifices we had to make (well, mostly my wife) was not being able to view NBA games in our home (specifically the Spurs, as my wife grew up in San Antonio).  Of course, I have been following the progress of the NBA League Pass as it has become more available to non-cable subscribers over the years.  So I decided it was time to give it a try and bought a subscription for my wife as a Christmas present this year.  As the title suggests, so far I am not impressed.

We bought the broadband package which allows us access to games on computers, tablets and several set top boxes (we have a Roku and were excited to see it on the list).  Why smart phones were explicitly excluded is beyond me (except to force you to buy an add-on).  Right out of the gate things were not looking good.  It wouldn’t work on our Roku (we need a newer model).  And the Android tablet app was not compatible with my Android tablet even though it meets the advertised requirement of Android 2.2 or higher.  After a brief interchange with support to find out why, I was basically told the following:

“Not all devices are compatible.  We are trying to add new devices. But if you want to know if a device is supported, try to install it and the app store will tell you”.

So I guess if I want to buy a new tablet I have to run down to the local Best Buy to see if I can install the app before I buy it.  Not a great answer.

The next major disappointment was the blackouts.  Yes, we knew about them going into it.  But we didn’t realize how many games were blacked out.  In short, all the good games are blacked out (all nationally televised games as well as local games).  It seems the NBA still considers this a supplemental cable package and not a full-fledged content package that can stand on its own.  Looks like the NBA is still being told what to do by the cable providers.

So I resigned myself to hooking my laptop up to our TV over an HDMI cable to watch our first game.  My wife is watching that game now (we wanted to watch one earlier but, surprise, it was blacked out).  The connection to the actual TV worked well.  However, in a world where YouTube, Hulu and Netflix (to name just the major players) are streaming HD-plus content over the internet, the video quality of these games is obviously inferior.  Even my wife and kids (who are pretty forgiving of this stuff) have noticed and made comments.  I am pretty sure they are using some old Flash streaming codec (like VP6) for their video, because the pixelation is extremely bad.  And there is all sorts of other compression artifacting going on as well.  It’s like watching Flash videos from the early 2000s.  They are probably saving money on licensing royalties by doing so.  But boy is it bad.

But the bad isn’t ending there.  I have never been a fan of getting commercials crammed down my throat after I have paid for premium content (love Netflix, hate Hulu Plus).  But even Hulu has figured out how to do it in a way that minimizes annoyance and makes it work.  The NBA has not.  My wife started her game during halftime.  No problem, we can just rewind it to the beginning and start there.  That’s the beauty of streaming video.  And it sounded great in theory.  But in practice it has not been going well.  We reset the stream to the beginning.  And it started playing just as intended.  Great!  However, it only played for about a minute and then jumped right into commercials.  When the commercials finished it jumped back to the game, but not where we left off.  It started playing somewhere in the second quarter.  Now any sports fan knows that this is a cardinal sin in sports etiquette.  Seeing scores from a future portion of the game ruins the entire game leading up to that point.  So I rushed to my laptop to scrub it back to where we left off.  But before I could get there, it started playing more commercials.  Really?  And this time when they ended it jumped us back to halftime.  You have to be kidding.  You can’t watch a game like this.  So I left to vent my frustrations by writing this post.  My wife, being more patient than I am when it comes to the Spurs, stayed.  But every time I have gone down to check on her, it seems to be playing more commercials.  And it never seems to be able to remember where she left off.

And now for the grand finale, or the proverbial tip of the iceberg.  She just came up to announce to me that the game stopped streaming two minutes before the end with a message, not nearly as polite as they hoped it would sound, saying “thank you for watching”.  She immediately went to the computer to try to watch the last two minutes of a game she referred to as a “nail biter”, only to be told she could not play that game because it was in the process of “being archived”.  Wow.  I am in shock.

Now, as a technologist myself, I am usually pretty understanding of companies trying to do cutting-edge things and generally side with them when they have hiccups along the way.  But to me this feels more like gross negligence.  They are not trying to do anything that wasn’t solved years ago.

Get it together NBA.  Your product, in its current state, is not worth the price you are charging.  I am seriously considering trying to get my money back.  However my curious side wants to stick it out and see if it gets any better.  But one thing is certain, I will definitely not be recommending this service to anyone, anytime soon.  It’s sad, because the concept is brilliant.  More and more people are moving to the “a la carte” model.  I would love to see similar packages from other content providers as well (why do we have a middle man anymore?).  But they need to be 100% independent of cable providers and they just need to work (yes that’s one last rip on League Pass).



Anyone who has been paying attention to the RTM (Release to Manufacturing) story for Windows 8.1 and Server 2012 R2 knows that it has been rife with drama.  For those who missed it, here is the background in a nutshell.  Historically, Software Assurance (SA), MSDN and TechNet customers have had access to the RTM bits almost as soon as the products were RTM’d.  However, this time around Microsoft decided to change things and did not release the bits to us (I find myself among these customers at this time) when the RTM milestone was hit on August 27th.  We were told we would have to wait until the General Availability release on October 18th.

Apparently Microsoft was unable to predict the obvious backlash that was the result of this new policy.  So when the unexpected complaints started rolling in, they did the right thing and decided to release the bits early to the aforementioned groups.  This was great, with one exception.  Microsoft still was not going to release the software (and the requisite keys) to the SA customers through their standard Volume Licensing Service Center (VLSC) site.  We were told (now you know specifically which group I am a part of) that we should just get our bits from our TechNet subscription (SA customers also get a TechNet subscription).  This was fine for testing, but for those of us wanting to upgrade our KMS infrastructure (licensing infrastructure for Server and Enterprise versions of Windows) and be ready for users by the October 18th GA date, we still needed our official KMS keys (which are provided by the VLSC site).

Well, after some back and forth with various resellers and MS reps, it appears that our keys have finally shown up in the VLSC site.  They are not in the obvious location, so you need to dig deep to find them.  First off, know that the RTM bits are still not available in the site, and even if you “export all keys” from the “Download and Keys” tab you will still not find the keys there.  However, if you navigate to the “Enrollment Details” page for your active enrollment and then click on the “Product Keys” sub-tab, you will find your MAK and KMS keys for both Windows 8.1 and Server 2012 R2.

This just leaves us one question to be answered.  I have heard rumors that I cannot apply these new KMS keys to the KMS server without first installing a patch on the server (currently running Server 2008 R2).  Yet I can find no official information about this patch.  Even our MS reps are at a loss on knowing anything more (yet they have heard the rumors as well).  MS, so far, seems to be silent on this one.  Let’s hope they give us something soon.

Update: Our MS rep confirmed that the needed patch to update our KMS server would not be released until close to the GA date.  He suggested that if we opened a support ticket we could probably get early access to the patch.  I wasn’t a fan of that option.  Instead we simply installed a copy of the RTM Server 2012 R2 and applied our KMS key to that.  So far things are working well.  We have already hit our minimum counts for both server and client OS’s and should be migrating the server into production soon.  So after a bit of work and a lot of frustration, it looks like we will be ready for GA in a few weeks.

I was recently tasked with finding an inexpensive solution for some digital signage needs.  I spent some time looking at several open source options and finally settled on a project started by the Rensselaer Polytechnic Institute (RPI)[1] called Concerto[2].  I then spent some time to see if I could get it to work on a Raspberry Pi[3], which we did.  I should provide the disclaimer now that this solution is not for everyone.  It provides very basic signage and is fairly limited in what it can display.  The Concerto software is very simple yet flexible, and it meets our current needs.  For example, neither Concerto (v1) nor the Raspberry Pi can handle video.  That may change soon since v2 of Concerto is looking promising and the Raspberry Pi is being improved all of the time.  But for now this solution gets the job done for us.

The Concerto site has some pretty good documentation on their feature set and how to get their system up and running, so I will not be addressing the Concerto system itself here.  This article focuses on what it takes to get the Raspberry Pi working nicely with the Concerto system.

The Core Image

We started out with a Model B Raspberry Pi[4] and dropped a fresh version of the Raspbian OS[5] on it.  You can easily copy the image to an SD card using the ‘dd’ command on a Linux-based OS or using Win32DiskImager[6] on Windows.
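As a sketch of the Linux side (the image filename and /dev/sdX are placeholders; dd will happily overwrite the wrong disk, so double-check the SD card’s device name with something like lsblk first):

```shell
# WARNING: /dev/sdX is a placeholder -- verify your SD card's device name first
sudo dd if=raspbian.img of=/dev/sdX bs=4M
sudo sync   # flush all writes before removing the card
```
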

Once the image is in place, boot your Pi.  On first boot the Pi offers you a simple config screen to get you started (raspi-config).  At this point feel free to customize the Pi to meet your regional needs (configure_keyboard, change_locale, change_timezone, change_pass, etc.).  You will definitely want to change the ‘boot_behavior’ to start the desktop on boot.  We also ran ‘expand_rootfs’ at this point in order to get the full use of the SD card.  You won’t need the space for this project, but why waste it?  If you plan on remote access, enable ssh.  We also set the ‘overclock’ to ‘High’; we experienced SD card issues when overclocking higher than that.  And for the flat panels that we are using, we needed to disable ‘overscan’ as well.

After the initial settings were completed and we were logged into the Pi, we changed the hostname.  This is done simply by changing the name listed in the hostname file.

sudo nano /etc/hostname

Before we continue, now might be a good time to also install a few packages that we will need later on.

sudo aptitude install x11-xserver-utils
sudo aptitude install unclutter

Power Saving

Screen Blanking

One of the goals I had was to configure the system to turn itself on and off on its own (mainly the screens).  After some research, I found that I could enable and disable the HDMI port on the Pi through the command line.  This was sufficient for us since our screens go into a hibernation state when they lose their HDMI signal and turn back on when they sense any signal over the HDMI.  So I wrote two scripts: one to power the screens off and the other to power them back on.  I then scheduled these scripts using cron.

Tip: After making these scripts, don’t forget to make them executable (chmod +x screenon.sh screenoff.sh).

PowerOff Script (screenoff.sh)

#!/bin/bash
echo Screen Off
tvservice -o

PowerOn Script (screenon.sh)

#!/bin/bash
echo Screen On
tvservice -p
chvt 6
chvt 7

Why run the chvt command?  Well, there was a strange behavior that occurred when turning the HDMI port back on: the screen would not wake up.  Somehow the X session was not re-connecting to the HDMI after it was re-enabled.  Changing the foreground virtual terminal to 6 and then back to 7 seemed to be enough to reconnect the X session to the HDMI port.

Cron Jobs

00 07 * * 1-5 /home/pi/screenon.sh >> /home/pi/screen.log
30 18 * * 1-5 /home/pi/screenoff.sh >> /home/pi/screen.log

For those who are not familiar, the following command will put you into edit mode for root’s crontab file.

sudo crontab -e

Ctrl-X to exit (make sure you save your changes).  These jobs turn the screens on at 7:00 am and back off at 6:30 pm, Monday through Friday (off over the weekends).

Auto Launch the Concerto Screen

Each Concerto screen is accessible through a unique URL based on the MAC address you enter for that screen (server side config).  Normally you would enter the actual MAC address so that each client can dynamically display their screen when they connect to the Concerto server.  With the Pi, we are manually configuring them to attach to a specific screen.  Since we are doing it manually, we fudged a little on the MAC address and configured it as ‘1’.  This made a simpler URL.

The next step was to configure the Pi to automatically boot to a full screen browser pointed at the Concerto screen URL.  We did some experimentation with both the Midori browser and the Chromium Browser.  Chromium had a little better HTML5 support.  But Midori seemed a bit more stable on the Pi.  They both crashed periodically but while the Midori process would crash and close, the Chromium process would crash and freeze.  We opted for Midori in the end.

To make this happen, we wrote a simple shell script which would launch Midori using a bash while loop.  This made recovery from crashes simpler as the while loop would restart Midori whenever it crashed.

Midori Launch Script (signage.sh)

while true; do midori -e Fullscreen -a "http://myconcertoserver.com/screen/?mac=1"; sleep 5s; done

Now we configure the OS to launch this script in place of the regular LXDE shell.  This is done by modifying the ‘autostart’ script found in ‘/etc/xdg/lxsession/LXDE’.  Comment out everything in there and add the following two lines


Remember the package we installed earlier called ‘unclutter’?  This binary simply hides the mouse cursor after a few seconds, keeping it from cluttering up your signage.
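Assuming the Midori launch script (signage.sh, shown above) was saved to /home/pi, the two autostart lines would look something like this:

```
@unclutter
@/home/pi/signage.sh
```

The leading ‘@’ tells lxsession to restart the command if it dies, which is a nice extra safety net on top of the while loop.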


At this point we are mostly done.  A little cleanup is all that is needed.  By default, the Raspberry Pi kicks in its own screen saving features after about ten minutes.  In order to disable this, you need to modify the parameters passed to the X session when it is launched.  You do this by modifying the ‘xserver-command’ configuration in the /etc/lightdm/lightdm.conf file.

Modify it to look like this:

xserver-command=X -s 0 -dpms

You’re Done!

You now have a Raspberry Pi that, on boot, will launch directly into a fullscreen browser and display the Concerto screen of your choice (or any other web page for that matter).

Have Fun!


I have been using Windows 8 as the primary OS on my new laptop for about 4 weeks now.  And I have to be honest, it is really growing on me.  There are still little things here and there which bug me (usability with a mouse has taken an obvious back seat in some areas).  But overall, it is better.  I actually like the new start screen.  It’s like the start menu on steroids once you learn it.  Of course I am a fairly proficient shortcut key user so the UI stays out of my way most of the time.

I did try an experiment with Windows 8 and my wife.  I had her log into an out-of-the-box install with no instructions whatsoever and had her try to “do stuff”.  Within a few minutes she was pretty frustrated (and she was just trying to browse the web).  Enough things have changed that she had a hard time finding simple things (like the clock and the back button in Metro IE).  However, after a quick two-minute training session she was using it just fine and was possibly even starting to like it.  And after some simple modifications (like making Chrome the default browser) she was even happier.  So yes, this one is going to require some hand holding at first and there will be resistance.

But my real test for whether a new UI is better than the last is to try and return to the old one after using the new one for a while.  If I find myself missing the new UI, then I consider the changes a win.  A great example for me was the relatively new Office Ribbon.  There was a huge backlash when they made that change.  But for the most part, everyone I have talked to recently that has had to go back to an older version of Office misses the Ribbon.  Another example was the new start menu and taskbar in Vista/Win7.  For anyone who learned to use the integrated search to launch apps and spent the time to personalize their taskbar, moving back to XP is very painful.

I am seeing some fairly similar sentiments already with Windows 8.  The more proficient I get in the new UI (yes, even on my dual display docking station) the harder it is to go back to Windows 7.  In fact I struggled with some driver issues at first (Dell finally released updated Win 8 drivers for some key hardware the day before the official Win8 launch).  But when weighing the pros and cons of dealing with the issues or going back to Win7, dealing with the driver issues won out.  And ever since the update it has been solid.


I do feel compelled to mention, as I would feel dishonest if I didn’t, that the metro apps still largely feel useless for those of us in a business environment (especially for those of us with dual screens).  The desktop is still a better environment for the type of productivity I need.  Metro will need to come a long way before it can replace the utility of the desktop for me.  But it’s fun to ‘play’ with.

So yes, the transition to the new paradigm is incomplete in this version of the OS and the gaps are apparent to those of us in the know.  But even with those gaps, I feel like this OS is an improvement and is moving in the right direction.  And I have not even tried it yet on its intended platform, their new touch devices.

Recently, I was given the assignment to help coordinate the single sign-on efforts between a new partner and our University.  Like many universities, we use a single sign-on solution called CAS, or Central Authentication Service.  CAS is an open source project sponsored by Jasig[1].

CAS supports an open standard called SAML[2] that allows us to provide both authentication and authorization information to our partners (and internally) without the need to share our users’ credentials.  Now I myself am not an expert in this area, but this assignment finally forced me to pay a little more attention to how our CAS infrastructure works, and more specifically to its implementation of SAML.

The first thing I wanted to know was what our SAML assertions (the user data exchanged through the protocol) looked like and whether they provided the right information to our partner.  Since CAS is a system that leverages HTTP, I figured it should be simple, using a web browser and a simple tool that allows me to make HTTP POST requests, to manually create some calls that return my own personal SAML assertions back from CAS.

I began my experiment with my primary research mechanism: asking our experts.  However, our experts (including our CAS engineers) had no clue how to make the simple web calls I needed.  How is that possible, you ask?  Well, they are all programmers, and as programmers they rely on the nice Jasig libraries which, as designed, abstract away the necessary calls into the CAS system.

So what next?  Well, Google, of course (or for the sake of fairness, Bing too).  Google brought up several posts, albeit a bit old, about using the samlValidate endpoint and a tool called SoapUI to make the calls.  Unfortunately, none of these instructions worked, as they appeared to use older versions of CAS.

My searching did bring me to the Jasig wiki[3] which started to point me in the right direction. But it still fell flat when it came to providing me the complete picture to make my request.  CAS itself was not offering any help either as all of my requests were met with 500 errors (I did talk to our engineers about this since I eventually discovered that the endpoint was running just fine and the only issue was a malformed request).

At last, I broke down and went to my third method of discovery: writing and debugging code.  I downloaded the .NET client code provided by Jasig[4] and was happy to find that there was a great demo app included.  After some debugging and HTTP packet sniffing, I finally came across the secret sauce to making the calls I needed.  Here is a quick tutorial on how I was finally able to do what I needed.

First you need to initiate a login into the CAS system and provide a return URL.  You can do this in any browser.  I gave a bogus URL so that the redirect back from CAS would fail and expose to me a vital piece of information.
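Assuming a hypothetical CAS host at cas.example.edu and a made-up bogus return URL, that first call looks something like this:

```
https://cas.example.edu/cas/login?TARGET=https://bogus.example.com/
```
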


After a successful login, the CAS system will redirect the browser back to the specified URL with something called a SAML assertion artifact.  Oddly enough, this simple call was the root of all my previous issues.  A typical (non-SAML) call into CAS uses the query string parameter “service” instead of “TARGET”.   Using that parameter you get back a service ticket instead of a SAML assertion artifact.  And as you can now expect, a service ticket does not work on the SAML endpoints.

The redirect came back looking something like this:
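With the same kind of hypothetical hostnames (the artifact value here is invented), it looks something like this:

```
https://bogus.example.com/?SAMLart=ST-1-ab2cd34efg-cas.example.edu
```
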


At this point, all that the originating system has received about the authenticated user is this SAML artifact.  So now, in order to complete the process, a simple HTTP POST call back to the CAS system will return whatever information about the user CAS deems necessary (usually at least a primary identifier).  To create the POST call, I like to use a tool called Fiddler[5].  It’s simple, yet very powerful.  Note that this call requires a SOAP body in addition to the properly formed URL.

The URL looks like this:
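With a hypothetical CAS host at cas.example.edu (and an invented artifact value), the samlValidate URL would be something like:

```
https://cas.example.edu/cas/samlValidate?TARGET=https://bogus.example.com/&SAMLart=ST-1-ab2cd34efg-cas.example.edu
```
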


and the body looks like this:

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Body>
<samlp:Request xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol"
MajorVersion="1" MinorVersion="1" RequestID="_192.">
<samlp:AssertionArtifact>[your assertion artifact]</samlp:AssertionArtifact>
</samlp:Request>
</SOAP-ENV:Body></SOAP-ENV:Envelope>

Notice the assertion artifact exists in both the URL and the SOAP body.  This is the only thing in the body that differs from request to request.  After making this call, you get back a SOAP envelope containing all the appropriate SAML assertions for the user in question.  At this point the originating system has verified, through CAS, that the user did indeed know their credentials and was able to retrieve all of the relevant user data that CAS was configured to return (username, firstname, lastname, email, etc.).

Starting today, users can sign up for a preview of the new Outlook.com service from Microsoft.  This new service will replace their current free mail services, Hotmail and Live Mail.  New users are presented with a few key bullets upon signing up as to why it’s better:

  • Outlook is modern—you get a fresh, clean design that’s intuitive to use.
  • Outlook is connected—your conversations come to life with your friends’ photos, Tweets, and recent Facebook updates.
  • Outlook is productive—you get free Word, Excel, and PowerPoint web apps built in with 7 GB of free cloud storage.
  • Outlook is private—you’re in control of your data, and your personal conversations aren’t used for ads.

That last bullet looks like a direct shot at Google’s Gmail service.  The interface is simple and seems to mirror the flat, almost washed-out look typical of their upcoming Windows 8 operating system, its suite of Metro apps and the Office 2013 software products.

There is also a short marketing video upon logging in for the first time.

I think it’s worth checking out.  So head on over and grab your new outlook.com email address today.

Well, it looks like we recently got another update to SkyDrive[1].  For me, SkyDrive has been one of those services that has had limited usefulness (similar to DropBox until recently).  I have used it for various things over the years.  But due to the difficulty in getting stuff into it, it just wasn’t convenient.  Well, I guess that is only partially true.  When they opened up 5 gig of my SkyDrive space to be used for Windows Live Mesh, I did use that space up pretty quickly (because there was a convenient access method).

However, with these new updates, it is starting to appeal to me once more.  They did drop the free space from 25 gig to 7 gig.  But for those of us who have been ‘loyal’ users for a long time, they have offered an amnesty program that allows us to request that we retain our free 25 gig[2].  So that’s not an issue for me.  They also added some reasonable options for upgrading your space if you need it (50 gig for $25/year, etc.).

With this update also came a Windows application that allows you to access/sync your SkyDrive with your computer (Windows Vista, Windows 7 and Mac OS X Lion)[3].  This was a bit of a shock, since it was becoming apparent to me that Microsoft was not interested in making it easy to access and use this space.  Apparently that mindset has changed.  With this new app, the barrier to getting stuff in and out of your SkyDrive has been lifted.  They also have a Windows Phone 7 app and an iOS app (now supporting iPad).  Sadly, there is no Android app.

So how will SkyDrive become a more active part of my life?  Well, I am not sure yet.  I am currently a huge user of Windows Live Mesh[4], which is an integral part of my backup solution.  And they don’t appear to be trying to merge these systems (in fact, they appear to be separating them even more, as my 5 gig of used Mesh space does not show up in the used count of my 25 gig of SkyDrive space).  They have created a page entitled “SkyDrive for Mesh Users”[5].  However, this page seems to me to be more of a migration guide and propaganda to get you to move off of Mesh, not documentation on how to get them to work together.

There are a lot of collaboration features built into SkyDrive.  The integrated Office web apps make in-browser editing of Office docs a breeze.  It is a much, much more elegant solution than Google Docs when it comes to a document repository and collaboration environment.  I guess we’ll just have to see.  If it’s useful, I imagine I will see it starting to creep back into my life more and more as time passes.  If they would just add an Android app to their lineup, that might seal the deal for me.

Fingers crossed!