Friday, December 12, 2014

ASP.NET "page has not been precompiled" after publishing


The file '/bla.aspx' has not been pre-compiled, and cannot be requested.


Yeah, thanks ASP.NET!  The site was most definitely pre-compiled.  This very un-useful error message has myriad causes, but for me - 8 wasted hours later - the cause this time turned out to be NTFS compression on my development computer.

Microsoft invented the NTFS compression scheme, so you'd think they'd know how to make products that work well with it.  Or, at the very least, give an error message clearly stating that use of NTFS compression is incompatible with whatever you're trying to do at the time.  But no, we get useless error messages like the above.

8 extra hours spent on a website deployment that should've only taken 30 to 60 minutes!  It was enough to have me seriously wondering whether I should join the band of happy Ruby On Rails developers!  But then I suspect they have their fair or unfair share of technology gripes too.

My main gripe is that this was what programming was like 20 years ago - extremely unhelpful error messages, things that should work but don't, and many hours and days of wasted time for no good reason.  It is the 21st century now.  We've made so much progress.  Programming is so much easier - by far - than it's ever been before.  But c'mon, we still have stupid problems like this?  Yes, I know, it's a major advance on how things were, but these vestiges of "the old way" have to go.

And as for the particular project I was working on : an extra full day of work for what should've been a simple deployment might well have cost me a sale.  Not impressed!

If you've published your website using Microsoft Visual Studio .NET (VS.NET) and chose the option to precompile, yet are getting this "has not been pre-compiled" message on the server you deploy to, check whether any *.compiled files exist in the website's "bin" folder.  If there are none, go back to your development machine and ensure NTFS compression is disabled for the "obj" directory nested within the top-level website project directory - or whatever other location is hosting the files you're precompiling.
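For the command-line inclined, here's roughly what the check and the fix look like - a hedged sketch, with the paths below being placeholders for your own server and project locations :

    rem On the deployment server : were any .compiled placeholder files actually published?
    dir /s /b C:\inetpub\wwwroot\MySite\bin\*.compiled

    rem On the development machine : strip NTFS compression from the obj folder
    rem (/u = uncompress, /s = include subfolders, /i = keep going on errors, /q = summary output only)
    compact /u /s:C:\dev\MySiteProject\obj /i /q *.*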

Here's hoping you don't lose 8 hours on this stupid bug!

And thanks to a commenter on this blog for helping me find the problem after many hours of searching.

Thursday, December 11, 2014

ASP.NET, symbolic links, and directory junctions

We like to put new versions of live ASP.NET websites in version-numbered folders.

Hitherto, that involved updating the IIS metabase so that the web site or web application "physical path" would point to the new location.

I thought it might be a little easier to use mklink to create a directory symbolic link or a directory junction with a name that I don't change (e.g. "curver" for "current version"), pointing to whatever is the current website version folder at the time.
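For reference, the two variants I tried looked roughly like this, run from an elevated command prompt (the folder names are just examples) :

    rem Directory symbolic link pointing "curver" at the current version folder
    mklink /D C:\websites\mysite\curver C:\websites\mysite\v2.3.0

    rem Directory junction equivalent
    mklink /J C:\websites\mysite\curver C:\websites\mysite\v2.3.0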

It doesn't work.

For directory symbolic links, ASP.NET seems to totally fail to use them properly.  Instead of getting the output of each ASPX page when visiting it in a web browser, I get the ASPX source!

For directory junctions, ASP.NET seems to use them just fine, but at the point you update where the directory junction points to, it's hit & miss whether ASP.NET will notice the change or not.  If you're happy to trigger a restart of the website in IIS, then it works, but if you're back to mucking around with IIS (i.e. to trigger the website restart), you might as well stick to updating the "physical path" setting of the website.
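If you do end up sticking with the physical path approach, it can at least be scripted rather than clicked through in IIS Manager - a sketch only, with the site name and folder as placeholders (check appcmd's built-in help for the exact syntax on your version) :

    rem Point the site's root virtual directory at the new version folder
    %windir%\system32\inetsrv\appcmd set vdir "My Website/" -physicalPath:"C:\websites\mysite\v2.3.0"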

Conclusion?  It was worth a try - would've been nice if it worked, but it doesn't.

Wednesday, November 5, 2014

Beware: Android Studio ain't all you need

New dev VM, new client, new Android app to build.  Clean O/S install, so I'll just grab Android Studio beta and I'll have everything I need, right?

Would that it were so simple!

The Android Studio download page sure doesn't make it clear that it's just one of many parts to the full picture, and Google searching reveals little.

Most people, it seems, have installed Android Studio after installing the more traditional Eclipse+SDK combination, and in that context, Android Studio works very well.

But on a new machine, if you just install Android Studio and try to run it, you get an error message stating that no JRE can be found.  Hmph.  Not all-in-one.  Fair enough, but they could've made a note of it on the download page.

So I download & install the Java JDK+JRE, and now Android Studio starts up & looks beautiful.  Yay!  Until I try to create a new project.  Then it complains that the Android SDK isn't found.

Again, Google, I'm fine with you not making the Android Studio installer an all-in-one, but could you at least have made it obvious?  Mention on the Android Studio download page that you'll also need to install the Java JRE and the Android SDK!  Then we won't feel somewhat frustrated by having to waste time figuring out how to do what was obvious to you, but not necessarily immediately obvious to every one of us.

And this is a general rule all we developers can learn from, and a rule I see broken over and over again in computer systems of all shapes and sizes : a cumbersome user experience is ok, IF you explain the steps to the user!  But if you have a cumbersome process and DON'T explain the steps to the user, some users will give up, and most users will be frustrated even if they finally figure it out.  And Android Studio is far from the most egregious example.

So my advice?  New dev box, and you want minimal digital clutter but you also want to keep the installation process simple?  Install the Java JDK, then install the Android SDK+Eclipse bundle, then follow the Android SDK installation page notes about downloading the latest SDK updates, and finally install Android Studio.  Certainly, the Microsoft Visual Studio installation is very polished and absolutely shines in contrast, but hey, we do finally arrive at a working Android Studio, and whilst it did waste time and mental effort, we're there, so I'm moving on, leaving this "here be dragons" sign for all ye subsequent travelers.
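If you'd rather coax an already-installed Android Studio into finding the missing pieces, setting a couple of environment variables from a command prompt (then restarting Android Studio) may be enough - the paths are placeholders for wherever your JDK and SDK actually live, and I haven't verified that every Android Studio build honours both of these :

    rem Tell Android Studio (and other Java tools) where the JDK lives
    setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0_71"

    rem Tell the Android build tools where the SDK lives
    setx ANDROID_HOME "C:\android-sdk"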

Tschus!

Monday, October 20, 2014

Inbox zero Gmail hack

UPDATE : Several major improvements on the following :
  1. Create a Gmail filter for an asterisk inside quotation marks - i.e. :

    "*"

    (but INCLUDE the quotation marks)

    The action for this filter should be to apply a label to new messages.  The label should be something like "aaa new" or "0new" or "0" or whatever you choose.
  2. Why the "aaa" or "0" or similar at the start of the label name?  If you're like me, you have dozens of labels, so much so that only a small set are visible in the default Gmail view.

    They are sorted alphabetically, and so putting "aaa" or "0" or punctuation at the start of the label name, means the label will handily appear at the top of the label list, always readily accessible.

    But do note that there is advantage to this label name being short, because the longer the label name, the less of each email's subject line is visible when skimming through newly received emails.  (Of course, if you view the newly received emails by clicking on the label, Gmail is smart enough to not additionally show that label on each email in the list, but I like to have the flexibility to efficiently use this system both from the dedicated label and from the Priority Inbox, which I love.)
  3. Finally, add a pleasant label color to this new label.

Voila!  Now all newly received emails have a little splash of color, alerting you to their arrival, and now you remove the "aaa new" label to indicate you've seen it.  You still get all the benefits of the approach described in the original article (below), with the added benefit that if you remove the "aaa new" label from a conversation, and then a new message arrives as part of that conversation, voila - the conversation is labeled "aaa new" again, which is exactly what we want!

---

I like the idea of looking at a nice, empty inbox, but in practice I find it almost impossible with the state of email management tools today.

Yes, I can tell you how email should work, and that would result in "inbox zero".

But say we're stuck in 2014 and Gmail is about the best we have, and we like the Priority Inbox and Non-Priority Inbox distinction and we want to leave emails showing as Unread until we actually get around to reading them, ...

... but we also want an easy way to sweep older emails out of the way so we can tell at a glance what has newly arrived?

e.g. batch-reviewing new emails once or several times daily.

My first thought was to make a cool custom Gmail plugin.  Nice, but obviously lots of work.  Very nice, but very lots of work.  I imagined having a divider you could drag up or down to mark the point below which you have reviewed emails and above which are emails you haven't yet seen.

Much simpler, of course, would be to make a Gmail label and have all new mail automatically get that label, and then on your daily or other occasional look in your Gmail, search for "label:inbox label:newarrival", or such like.  Once you've skimmed through the emails and are happy for them to disappear from your "new arrivals - not yet seen" list, you bulk select and remove the label.

Or, even simpler to set up, do what I did just now, and create a label called "seen".  As a once-off setup step, bulk select all thousands of emails cluttering your Inbox, and label them "seen".  Now, when you want to find out what emails are new arrivals, search for "label:inbox -label:seen".  Once you've skimmed through them, and left in the inbox anything you want to read later (hence you don't want to mark it read) or otherwise action later (hence you don't want to "Archive" it, which is the only way to remove it from the Inbox), you bulk select them and add the "seen" label to them.

Voila!  Now you have an email system that actually fits the way you use the system - "inbox" means "I plan or hope or would like to do something about this one day", "unread" means "I plan or hope or would like to read this one day", and "seen" means (drumroll) "I saw this email in the list and have determined that it should stay in the list, but I don't need to see it again when I next check for new emails".  And that leaves stars - those oft-abused little creatures - to actually be used for what we intuitively think a star should represent, which is something special, not something commonplace.  Read/unread, inboxed/archived, seen/unseen are commonplace distinctions, and stars have no business representing them.

And that's how I finally got the blissful peace of an uncluttered email "inbox" that works pretty well, and only took minutes to set up, and is trivially easy to maintain - so easy to maintain that I can even easily catch up again in future if I fall behind due to sickness or travel or whatever!  Nice!  Viva labels!

UPDATE : Of course, what this does not give me is notification of new emails added to previously-seen conversations still in my inbox.  That is a problem, although in my particular case, not too great a problem, because a) emails that are likely to be part of ongoing conversations are likely to end up in the Priority Inbox for me; and b) I like the Priority Inbox and still periodically look through it; and c) when a new email arrives for an existing conversation, that conversation jumps to the top of the Priority Inbox.  Altogether, this means I have a good chance of noticing that an older conversation has jumped atop the list.  (Most of the emails I receive go into Inbox not Priority Inbox.)

Wednesday, August 6, 2014

Issues transitioning ASP.NET website to Windows Server 2012 R2

Background :


Approx. 9 year old ASP.NET website.

Originally ASP.NET 2.0 (or maybe even 1.1?), and upgraded over time to now being ASP.NET 4.0.

Running fine on a Windows Server (2003?).

We decide to virtualize our servers.  The new virtual server is Windows Server 2012 R2.

I copy the ASP.NET files across and lose DAYS trying to get it working.

It was an unexpectedly painful process, with lots of obtuse error messages.

Here are a few of the "gotchas" that got me - and hopefully will help un-got you :

Bye bye DAO & 32-bit mode


Due to its age, the system was using DAO against a Microsoft Access database for part of the system (including integration with a third-party system).

DAO only has a 32-bit version.

For no apparent reason - and I've never had this problem on any other server - when I put the application pool in 32-bit mode, I would get extremely strange errors I could not resolve.

How did I solve it?

Switch back to 64-bit mode, and modify the database connection string to use Microsoft's "ACE" library instead of DAO.  (Scroll halfway down this article for example connection strings.)

This also requires installing ACE on the server - not installed by default.  (Go here and download AccessDatabaseEngine_x64.exe)
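In case that article moves, here's the general shape of the change - hedged : the database path is a placeholder, and exactly where your code builds its connection string will differ :

    Old Jet-era provider (32-bit only) :
        Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\mydb.mdb;

    ACE provider (usable from a 64-bit app pool once the x64 ACE engine is installed) :
        Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\mydb.mdb;Persist Security Info=False;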

No HTTP Handler For The Request Type 'GET'


This is up there with the weirdest problems I've seen.

Turns out - for no apparent reason - there can somehow end up being a deficiency in the configuration of ASP.NET.  Copying web.config.default over web.config in the .NET Framework config folder magically solved the problem - thanks to these guys for saving me a huge bunch of frustration and time wastage!
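For reference, the fix amounted to something like the following, run from an elevated command prompt - back up the existing file first, and adjust the path for your framework version and bitness :

    rem Back up, then restore the default ASP.NET root web.config
    cd /d %windir%\Microsoft.NET\Framework64\v4.0.30319\Config
    copy web.config web.config.before-restore
    copy /y web.config.default web.config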

Unhandled exceptions only intermittently going in to the Event Log


I wasted a lot of time trying to find details of error messages at the very start of my testing.

The web page returned to me would say that an unhandled exception had occurred, and that a further unhandled exception had occurred in the error handler.

Not very useful, but exception details would sometimes appear in the Event Log.

Here's the big problem : it was highly unreliable.

Sometimes the .NET exception dump would appear in an Event Viewer entry pretty quickly.

Other times it seemed to take minutes.

And other times the error messages never appeared in the log at all.

I could not determine the cause of the inconsistency.

Fortunately, at some point I sufficiently adjusted file permissions to enable my web application's error handler to write its own exception dumps to text files in a log folder, and then with the detailed error messages available immediately after each page hit, I was able to solve the problems much faster.

(The error handler was quite complex and was still triggering its own unhandled exception, but after dumping the primary exception's details to text file.)

SMTP services installation was not obvious to me


I found lots of articles explaining that to use the IIS SMTP server with IIS 7.5, you need to install various things.  I started using IIS so long ago, and have done sufficiently little server administration since, that somewhere along the line I failed to notice that with the design of Windows Server these days I had to add the SMTP server as a feature, not as a role.  Why IIS is a role whilst SMTP is not, I don't know and don't intend to bother investigating, but that explains why, after quite some minutes poring carefully over the list of installable items, I never saw SMTP.  I had to click "Next" to go from the Role selection page to the Feature selection page.  That one was kinda-obvious, but it tripped me up, so I mention it in case it helps anyone else.
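If you prefer to skip the wizard entirely, I believe the same feature can be added from an elevated PowerShell prompt - not the route I took, so treat it as a pointer rather than tested instructions :

    # Install the SMTP Server feature along with its management tools
    Install-WindowsFeature SMTP-Server -IncludeManagementTools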

And I had quite a lot of other pain during the transition process - a process that I had naively imagined in advance might only take an hour or two.  But those are the main ones that stick in my mind.

HTH!

Sunday, July 20, 2014

Using vmrun on Mac OS/X (VMware Fusion)

The skinny :

To use VMware Fusion's vmrun from the command line (i.e. Terminal), you must invoke it with its FULL PATH, even if you are already in the same directory as vmrun lives in.

The fat :

Continuing to prove that VMware is great when it works and quite lousy when it doesn't, get this :

I opened Terminal.

I cd'ed to /Applications/VMware\ Fusion.app/Contents/Library.

I tried to execute "vmrun".

"Command not found"

How can that be?!!  The command is definitely in that directory!

ls -l

Yup - there it is, vmrun, and all users have execute permission.

I lost probably half an hour on this stupid bug.

Turns out that the solution is simple, but horribly non-obvious :

You must include the full path to vmrun every time you invoke it, even if you are already in the same directory as it!!!

So instead of bothering to cd to the enclosing directory, just always use :

/Applications/VMware\ Fusion.app/Contents/Library/vmrun

Note : I tried adding the directory to my PATH to see if that way I could run vmrun just with "vmrun", but it didn't work.
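One possible workaround - untested by me - is a shell alias rather than a PATH entry, since the alias expands to the full path and so vmrun still gets invoked exactly the way it wants :

    # In ~/.bash_profile (or ~/.bashrc) :
    alias vmrun='/Applications/VMware\ Fusion.app/Contents/Library/vmrun'

    # Typical usage once that's in place (the .vmx path is just an example) :
    vmrun -T fusion list
    vmrun -T fusion start "/Users/me/Documents/Virtual Machines.localized/Dev.vmwarevm/Dev.vmx" nogui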

Thursday, July 17, 2014

VMware Fusion Converter VSS issues

VMware is simultaneously both very nice and very frustrating.

I have a 4.5 year old laptop (Sony Vaio Z) running Windows 7 Ultimate, and decided to virtualize it.  Great idea!

VMware Fusion comes with a physical-to-virtual converter.  Piece of cake!

But every time I try, I get the big bad message :

"An Error Occurred

"The VSS snapshots cannot be stored because there is not enough space on the source volumes or because the source machine does not have any NTFS volumes.  Error code: 2147754783 (0x8004231F)."

Thanks VMware!

So here are some of the things I tried that have helped not a bit :
  • I spent a long time clearing stuff off the 147GB C: drive until there was over 10% of the drive free.  Still kept complaining.
  • I disabled my R: drive RAM disk.  It was an NTFS drive, but just in case somehow it was causing trouble, I completely turned off the RAM drive system.  (Note that I had P2V'ed a Windows 8.1 machine with a RAM disk successfully, using the same tool, so this never seemed likely to be the problem.)
  • I suspended BitLocker in case that was somehow interfering (although note that I had P2V'ed a Windows 8.1 machine with BitLocker successfully, using the same tool, so this also seemed unlikely to cause the problem).
  • I adjusted a MaxTokenSize registry setting.
  • I completely disabled "previous versions".
  • I fully enabled "previous versions" (both for my files and for operating system files - i.e. the highest level) and allocated it 10% of the available space.
  • I deleted all existing previous versions.  If you're doing the maths, this means there are roughly 14GB available and allocated to VSS.  So space shortage is not the problem here!
  • I confirmed that there are no other mounted drive letters - I removed all USB sticks and external drives.
  • I confirmed that there are no unmounted volumes on the one internal drive, other than a 15MB one that appears to be a boot volume and is nearly entirely full but I doubt there's anything much I can do about it.
  • I created a new administrative user and tried using that account instead of my normal administrative account when doing the physical-to-virtual conversion.
  • I did a full checkdisk scan on a reboot - i.e. whilst the C: drive was not in use.
  • I rebooted the Mac OS/X computer that was on the receiving end of the process.
  • And of course, throughout the above I rebooted the Windows laptop many times over.
  • I discovered a "bakk" (yes - double 'k') entry in HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\ProfileList as discussed here, deleted it and rebooted, and same problem (although strangely & interestingly, laptop shutdown and boot times were much improved, or so it seemed to my subjective impression!).
  • I also verified that the other SIDs in that registry key were all valid (using psgetsid from PSTools as described here).

The VSS service was creating error log entries claiming that there is an "Unexpected error calling routine ConvertStringSidToSid" (0x80070539).  This is what led me to wonder if checkdisk might be needed - but that didn't help - and is also what led me to the MaxTokenSize trick linked above, which also didn't help.  However, after deleting that "*.bakk" registry entry, the VSS service has produced no more error messages - yet the error message from the VMware Converter remains the same, blaming VSS.
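For anyone else digging into this, the VSS side can at least be inspected (and its storage allocation adjusted) from an elevated command prompt - these are standard Windows commands, offered as a starting point rather than as a verified fix for this particular error :

    rem Show how much space each volume has allocated to shadow copies
    vssadmin list shadowstorage

    rem Check whether the VSS writers are all in a stable, error-free state
    vssadmin list writers

    rem Example : cap shadow storage for C: at 10% of the volume
    vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10%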

It's not inspiring that multiple other people online seem to have given up on this one.  (e.g.)

I also report with disappointment that this was not my first unfavourable experience with this supposedly easy physical-to-virtual conversion tool.  About a week ago I converted a Surface Pro 2 (Windows 8.1) physical to virtual prior to sending it in for servicing.  I also had a lot of trouble then, although finally managed to get it to work.  Two bad experiences out of two attempts.  Relatively useless documentation.  Sadly, this is my experience of VMware so far : It is awesome when it works, and an extreme pain when it doesn't, which is far too often.

VMware Fusion PC Migration Assistant requires HFS

The skinny :

The VMware Fusion PC Migration Assistant must have an HFS-formatted drive as the destination for the physical-to-virtual conversion.

The fat :

Thanks VMware for the helpful error messages - not!

Here's the error message I got trying to convert a four-and-a-half-year-old laptop to a virtual machine :

"VMware Fusion was unable to share a folder to receive your migrated pc"

Hmmm.

Retry.

Same problem.

Google.

An obtuse comment, which leads me to suspect... voila!

I was trying to convert the physical machine into a virtual machine on an exFAT drive.

Try again, converting it to an HFS drive for later copying to the exFAT destination - problem solved!
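If you need to reformat a spare external drive as HFS+ for that staging step, something like this works from Terminal - but double-check the disk identifier first, because eraseDisk wipes the whole drive, and disk2 here is only an example :

    # Find the right disk identifier first - eraseDisk destroys everything on that disk
    diskutil list

    # Reformat the external drive as Journaled HFS+ with the volume name "VMStaging"
    diskutil eraseDisk JHFS+ VMStaging disk2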

Friday, July 11, 2014

VMware Fusion on exFAT

The skinny :

VMware Fusion wastes 30 seconds of your life accomplishing apparently absolutely nothing every time you start or resume-from-disk a virtual machine on an exFAT volume.

There appears to be no workaround.

The fat :

I love virtualization.  And the more I use it, the more I love it.

It has come a long way.

I remember 12 years ago giving up in frustration when I tried to virtualize my entire software development environment.

The big issue back then was the storage system.

Now with so many so-awesome SSDs to choose from, virtualization is a pleasure.

I'm running my virtualized software development environment with I'd guess about a 10-20% performance penalty, vs the literally 200-2000% performance penalty I was experiencing 12 years ago.

And since computers are generally so fast these days, 10-20% performance penalty largely translates into no significant difference at all.

Mac & PC

I happen to be a dual-mode man.  Surface Pro 2 256GB (love it!), and Mac Mini quad-core i7 16GB RAM.  I mainly do Windows stuff, and prefer the more extensive range of keyboard shortcuts in Windows for maximum productivity (whereas with Mac you are forced to keep moving your hand back to the mouse/trackpad whether you like it or not).  But I'm very comfortable and proficient in both environments.

I wonder, could I...  get an external SSD and put my virtual machines on it?  Then, using VMware Fusion on the Mac and VMware Player or Workstation in Windows, I could run my virtual machines on the faster, more powerful Mac Mini when at home, whilst still being able to take my work on the road and run exactly the same VMs on the Surface Pro 2.

Great idea!

But so many snags.

First off, you are going to encrypt that external SSD, right?  I mean, nobody puts sensitive data on an unencrypted external storage device I hope?  (Other than government departments of course - but who expects competence from them?)

Problemo : Windows has BitLocker, and Mac has FileVault, and ne'er the twain shall meet!

And whilst you can get cool third-party utilities to read+write the Mac disk format from Windows, and likewise to read the Windows disk format from Mac, none of these utilities support encrypted volumes.  Major problem!  Looks like we're scuttled right at the get-go!

Well, there is TrueCrypt of course.  It's discontinued, there are question marks over its actual security level, and it's open-source, which leaves one wondering whether it might just go ahead and destroy all your data mysteriously and irrecoverably due to some strange previously-unencountered bug (or worse yet a known bug that has been languishing in the support queue for years, as happens sometimes with open-source and even commercial products).

But, TrueCrypt appears to be the only strong contender.  After all, we need something cross-platform, and that alone rules out a bunch of options.  And whilst performance and reliability have to be assessed, we do know that TrueCrypt has zillions of users, so we take the punt that it'll do the job well - of course making regular VM backups just in case.

Next snag : Filesystem?  I end up opting for exFAT, because it is natively supported in read+write mode by both Windows and OS/X.

Score!  My VMs run fast, and yes, I can transfer the external SSD back & forth between Mac Mini and Surface Pro 2 and it all works!

At this point, I'm super-excited.

However, I'm bothered by something.

It seems that every time I build a VM on the Mac Mini, then run it on the SP2, then take it back to the Mac Mini, there is a strange 30 second delay at the very start of booting the VM.

I Google - no answers.

It's been frustrating me for around a month now, but I finally found the answer.

It has nothing directly to do with whether the VM is in the "Shared Virtual Machines" vs "Virtual Machines" folder, and nothing directly to do with running the VM on the Windows host.

It seems to be wholly & solely the VM being on an exFAT volume.

I can take the VM that has the 30 second delay at the very start of the boot process, copy it to a native Mac partition, and it boots immediately without the 30 second delay.

I tried some configuration tweaks that seemed unlikely to help, and indeed they didn't help.

My conclusion?  Lovely conceptually as it is to be able to share VMs twixt Mac & Windows via this external SSD, it's proving all told a little on the painful side.  Not hugely painful, but having to connect the SSD, run TrueCrypt, mount the TrueCrypt volume, use VMs, unmount the TrueCrypt volume, unmount the SSD - that's a little tedious - and now add that there is an absolute waste of 30 seconds of your life at the very outset of every VM boot (and I tend to be starting & stopping VMs a lot throughout the day), and it gets a little frustrating.

The VMs once running run just fine, so it's clearly something VMware could fix.

But I found no-one else anywhere else mentioning the same problem, so I doubt it's even on VMware's radar.

I'll stick with the system for now - it does work - but I'm thinking of changing to a setup where both the Mac and the SP2 have a full copy of all VMs, on their native filesystems with their native full-disk-encryption technologies, and using any of the zillions of backup / file-replication utilities out there so that when I run the VM on one and shut it down, any changes get copied across to the other copy of the same VM.  If that works, then I'll have a truly blissful and hassle-free experience of using the same VMs on two different machines, one being an OS/X host and the other being a Windows host.
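As a rough sketch of that replication idea : rsync ships with OS/X (robocopy or any number of sync utilities could play the same role on the Windows side), and the paths below are pure placeholders - I haven't yet battle-tested this with VM bundles :

    # After shutting the VM down on one machine, push any changes to the other copy
    rsync -avh --delete \
      "/Users/me/Documents/Virtual Machines.localized/DevVM.vmwarevm/" \
      "/Volumes/SharedVMs/DevVM.vmwarevm/"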

The big problem then will simply be storage space disparity.  I might end up paying the small fortune to get a 512GB Surface Pro 2 or 3, just so I can fit everything.  Or else I might use an external SSD just for the less-frequently-used VMs.  Or I might augment the SP2's storage with an external SSD encrypted with BitLocker and formatted with NTFS and used only by the SP2.

The short of it?  VMware Fusion runs VMs off exFAT partitions just fine, but for no apparent reason will waste 30 seconds of your life every time you boot a VM, at the very start of the boot process, before even the VMware Fusion logo pops up on the VM screen.  Please fix it, VMware!

P.S. The problem does not occur with VMware Workstation / Player - i.e. on the Windows host, the 30 second boot delay does not occur.  It is only a problem with VMware Fusion.

P.P.S. I'm using VMware Fusion 6 Professional (paid) and VMware Workstation 10 (trial), about to change to the latter being VMware Player Plus (using the VMware Player Plus license that comes free with VMware Fusion 6 Professional).

Saturday, April 26, 2014

Surface Pro 2 "System Interrupts" high CPU

The Surface Pro 2 is excellent hardware, but the software lets it down in various ways.

All told, it's still the best Windows machine I've ever owned, but it definitely needs a lot more work.

Here's a weird thing I hit today : I was losing around 15%-25% CPU on "System" and "System Interrupts".

I had installed c. 1.5GB of updates in the last several days - maybe that caused it?

Hmmm - there was one other thing I changed, this morning.  Maybe that caused it?

I changed it back - and the problem went away!

So : if your Surface Pro 2 runs fine after a reboot, but is wasting a lot of CPU (and hence slashing battery runtime to just an hour or two) with "System" and "System Interrupts" after putting the machine to sleep and then waking it up, there is a chance it might have something to do with this :

This morning, I went into Device Manager and disabled "Allow this device to wake the computer" in the "Power Management" tab in the Device Properties for each of the two "Mice and other pointing devices".  (I suspect one was the touchscreen and the other the Type Cover's trackpad, but they both simply showed as "HID-compliant mouse".)

I had already - weeks or months prior - done the same for the Keyboards, but that had seemed to have no effect at the time.

What I was trying to accomplish was to ensure that any accidental keypresses or mouse movements would not wake the device from sleep.  I wanted to know that only pressing the power button, or perhaps the Windows button, would wake it.

I at last had that behavior!  When I had only disabled wake-the-computer for the keyboards but not the mice, the keyboard would still wake the device.  I guess that because the Type Cover has both keyboard and mouse together, I must've needed to disable the wake-the-device option for both or else power would remain to both and both would remain able to wake the device.

So this morning I was very happy for a short while, because no action on the Type Cover would wake the device.  Just what I wanted!  But I quickly discovered serious side-effects.  nircmd no longer worked to turn the screen off.  And this "System" and "System Interrupts" high CPU usage bug!!!

I lost a few hours trying to get things working, and finally realized the "System" and "System Interrupts" high CPU usage might've been a strange side-effect from disabling the allow-to-wake-computer options for those two mice.  I reverted that setting, and voila - nircmd resumed working and the "System" and "System Interrupts" problem went away without even a reboot!
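Incidentally, the same wake-permission toggles can be listed and flipped from an elevated command prompt - I did everything through Device Manager, so treat this as an equivalent sketch, with the device name needing to match exactly what powercfg itself reports :

    rem List every device currently allowed to wake the machine
    powercfg -devicequery wake_armed

    rem Stop a specific device (named exactly as listed above) from waking the machine
    powercfg -devicedisablewake "HID-compliant mouse"

    rem ...and re-enable it again if, like me, you hit nasty side-effects
    powercfg -deviceenablewake "HID-compliant mouse"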

Don't ask me how to fix the problem in your case, and don't ask me why changing that particular setting would have such seemingly unrelated side-effects, but if you're desperately searching for clues, my case adds a few more.  I hope it helps someone!