Wednesday, November 2, 2011
After a fairly thorough search for open-source .NET libraries for text file (e.g. source code) differencing, I've concluded that there are only two serious contenders : DiffPlex and google-diff-match-patch.
Both look like they would meet my needs.
DiffPlex is C# only, whereas google-diff-match-patch has equivalent implementations in Java, JavaScript, C++, Objective-C, and more. So if you like the idea of learning an API once and using it elsewhere - e.g. in iOS projects, or directly in web browsers (JavaScript) - google-diff-match-patch is for you.
The DiffPlex API seems a little nicer if all you want is a simple diff.
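By way of illustration, here's a minimal sketch of a simple line diff with DiffPlex, written from memory of its API (the Differ / InlineDiffBuilder / BuildDiffModel names are as I recall them, so verify against the real library before relying on this) :
using System;
using DiffPlex;
using DiffPlex.DiffBuilder;
using DiffPlex.DiffBuilder.Model;

class DiffPlexSketch
{
    static void Main()
    {
        var builder = new InlineDiffBuilder(new Differ());
        DiffPaneModel diff = builder.BuildDiffModel("one\ntwo\nthree", "one\n2\nthree");
        foreach (DiffPiece line in diff.Lines)
        {
            // Each line comes back tagged as inserted, deleted, modified or unchanged
            string prefix = line.Type == ChangeType.Inserted ? "+ "
                          : line.Type == ChangeType.Deleted ? "- "
                          : "  ";
            Console.WriteLine(prefix + line.Text);
        }
    }
}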
google-diff-match-patch supports - as its name suggests - producing and processing patch files, so if you need that extra capability, google-diff-match-patch wins again.
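As a sketch of what the patch side looks like in the C# port (method names from memory - patch_make, patch_toText, patch_fromText, patch_apply - so double-check them) :
using System.Collections.Generic;
using DiffMatchPatch;

class PatchSketch
{
    static void Main()
    {
        var dmp = new diff_match_patch();
        // Produce a patch set describing how to turn the first text into the second
        List<Patch> patches = dmp.patch_make("The quick brown fox.", "The quick red fox.");
        string patchText = dmp.patch_toText(patches); // serialisable, e.g. save to a .patch file
        // Later, or elsewhere : re-hydrate the patches and apply them
        object[] result = dmp.patch_apply(dmp.patch_fromText(patchText), "The quick brown fox.");
        System.Console.WriteLine((string)result[0]); // result[1] is a bool[] of per-patch success flags
    }
}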
DiffPlex contains what appears to be a very nice & simple API to drive diff viewers. google-diff-match-patch has something similar, even if its API is not as nice.
Both support a line-by-line mode.
google-diff-match-patch has a nice feature where it can simplify diffs down from "perfect" diffs to more semantically-meaningful diffs. It calls this a "cleanup" operation, and depending on your needs, that could be a deciding feature. My immediate needs are so simple that even cleanup isn't relevant, but if it's relevant for you, google-diff-match-patch might be the go (unless I missed a similar feature in DiffPlex, but I'm pretty sure I didn't).
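A minimal sketch of the cleanup call in the C# port (again from memory, so treat the names as indicative) :
using System.Collections.Generic;
using DiffMatchPatch;

class CleanupSketch
{
    static void Main()
    {
        var dmp = new diff_match_patch();
        List<Diff> diffs = dmp.diff_main("I am the very model.", "I am the actual model.");
        // Without cleanup you get a "perfect" but fiddly character-level diff;
        // diff_cleanupSemantic coalesces it into chunks a human would recognise.
        dmp.diff_cleanupSemantic(diffs);
        foreach (Diff d in diffs)
            System.Console.WriteLine(d.operation + " : \"" + d.text + "\"");
    }
}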
In short, it seems both are suitable. DiffPlex has a nicer API for the world of .NET (e.g. C#-style naming conventions used throughout) whilst google-diff-match-patch has more features. For my needs - an open-source, native .NET differencing library - both look very suitable, and DiffPlex looks a little easier to learn and use (not that either is hard). But I think in the end I'm going to start with google-diff-match-patch, on account of the multiple platforms it supports with a uniform API, and the cleanup facility, which whilst not relevant immediately is perfect for something I'm planning to do in the future...
If you know of any other serious contenders, let me know, but I'm only interested in native .NET open-source libraries that can be downloaded and used without modification (so that excludes repurposing code in open-source diff viewers). And I did review a few options on Code Project but nothing there compelled me to believe their performance would be any better than the two projects I shortlisted, whereas I expect that these two shortlisted projects will have much better ongoing support.
Posted largely for my own future reference, but also to help other wandering developers. :o)
Tuesday, May 24, 2011
aspnet_merge unresolved assembly reference not allowed in ASP.NET 4
Hopefully this'll save someone else some time.
I have a moderately large VB.NET ASP.NET website, originally created in ASP.NET 2 and recently upgraded to ASP.NET 4.
I recently built my own packaging scripts that use aspnet_compiler and aspnet_merge.
I was careful to use the .NET Framework v4 version of aspnet_compiler.
When running aspnet_merge on the precompiled website, I got a very strange error :
Utility to merge precompiled ASP.NET assemblies. Version 3.5.30729.
Copyright (c) Microsoft Corporation 2007. All rights reserved.
aspnet_merge: error occurred: An error occurred when merging assemblies: Unresolved assembly reference not allowed: Microsoft.VisualBasic.
The problem was pretty obvious, but stumped me for probably an hour or so : I was using the wrong version of aspnet_merge.exe.
This page helped me find the correct version. On my computer, that's in :
C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\aspnet_merge.exe
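For reference, the pair of commands ends up looking something like this - the aspnet_compiler path is the standard .NET 4 Framework location, and the site/output paths are made-up placeholders :
"C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe" -v / -p C:\Sites\MyWebSite C:\Temp\Precompiled
"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\aspnet_merge.exe" C:\Temp\Precompiled -o MyWebSite.Merged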
HTH :o)
Thursday, April 14, 2011
Bazaar repository bloat - rebase, merge, push, pull
UPDATE : Repository bloat (at least in the merge-then-merge-back scenario) can be solved very easily : "bzr pack --clean-obsolete-packs". The content of this article is interesting in the things it examines, but somewhat outdated by this update. Read at own risk...
Bazaar is awesome. I say that almost every time I talk about Bazaar. I love it.
However, there are some use cases where you can end up with repository bloat, completely unnecessarily.
Repository bloat occurs when Bazaar decides - for whatever reason - to make a new version of an existing revision, and thereby duplicate data that was in the revision.
So for example, you have a 5MB file and you add it to a branch, and then merge the branch with the trunk. Trunk's repository size should increase by roughly 5MB, you say? Well, should, you're right, but depending on how you do it, you can actually end up with a 10MB increase instead.
Repository bloat.
So how do you avoid it?
Well, I've noticed repository bloat in two main situations (although there are likely more). Both involve the trunk and branch diverging - so if your workflow is such that trunk and branch are always sync'ed before divergence happens (i.e. changes only ever land on one side or the other, never both), then repository bloat won't be a problem for you. But workflows where you can guarantee that will be rare!
Here are the two repository bloat scenarios I've noticed :
1) rebase : your revisions get rewritten, and the rewritten copies land in your local repository alongside the originals.
e.g.
md trunk
cd trunk
bzr init
echo Hi>readme.txt
bzr add
bzr commit -m "trunk commit"
cd..
bzr branch --stacked trunk branch
cd branch
(put 5MB file called BigFile.dat in branch folder)
bzr add
bzr commit -m "Added BigFile.dat in branch"
So far so good. And if you rebase at this point, you're fine ('coz nothing will happen).
But if we continue :
cd ../trunk
echo A change in trunk>>readme.txt
bzr commit -m "Another trunk commit"
cd ../branch
bzr rebase
... well, the rebase runs just fine, but if you check the size of the .bzr folder in the branch, it is around 10MB, not 5!
Repository bloat!
How to avoid repository bloat when rebasing?
Well, the conclusion I've come to is : let the repository bloat, and merge or push to trunk, and then follow my instructions on purging stacked branches to remove the bloat. (The bloat in rebase cases is only in the branch, not the trunk. And fortunately, it seems that pushing the bloated repository to the trunk only pushes the new versions of the affected revisions instead of pushing both old and new versions - i.e. the bloat is fortunately not propagated back to the trunk in this case.)
(Not using stacked branches? Sorry, not my use case, so I haven't investigated further and thus can't tell you for sure what will work - although if you get really really desperate you can make a new branch with --no-tree, then delete the .bzr folder in your existing branch and replace it with the .bzr folder from the new branch. Again - only do that at a point where trunk and branch are in-sync.)
2) merge to branch then merge to trunk
This is a pretty standard operation if you've been working on your branch for a while and the trunk has changed in the meantime.
You can't pull the trunk changes into the branch. Once the two are out-of-sync, you're forced to use merge or rebase. The rebase scenario is covered above, and results in duplication of data in branch revisions from the point of divergence onwards.
The merge scenario is what we're covering here. Its repository bloat characteristics are more interesting. Whereas rebase results in duplication of data in BRANCH revisions from the point of divergence onwards, merge can result in duplication of data in TRUNK revisions from the point of divergence onwards, assuming that you proceed to merge branch back into trunk. (If you PUSH branch back into trunk, I suspect (but haven't tested) that you'll get away without repository bloat - but then you lose the trunk's unique perspective on the change history - i.e. your log and qlog are thereafter from the branch's perspective instead of from the trunk's perspective.)
e.g.
md trunk
cd trunk
bzr init
echo Hi>test.txt
(add 5MB file called BigFile.dat into trunk folder)
bzr add
bzr commit -m "Initial commit in trunk"
cd..
bzr branch --stacked trunk branch
cd branch
echo bla>test2.txt
bzr add
bzr commit -m "First commit in branch"
cd ..
cd trunk
(replace 5MB file in trunk folder with a different 5MB file of same name)
bzr commit -m "Modified BigFile.dat"
cd ..
cd branch
OK - so far so good - but trunk and branch have diverged and now we're at the point we want to make them converge. Normally we might do :
bzr merge ../trunk
bzr commit -m "Merged trunk changes into branch"
cd ..
cd trunk
bzr merge ../branch
bzr commit -m "Merged branch into trunk"
... but if you do that, you'll get our lovely friend Repository Bloat(TM)!
Why?
Well, it seems that merging the 5MB file's modification revision in from trunk to branch, which requires a commit, results in that 5MB file's data ending up in a second revision, and when we merge back into trunk, that second revision ends up in the trunk's repository. (Interestingly, this does not happen if the file was newly created in the trunk - just if it was already known to the branch and was updated in the trunk.)
10MB repository growth for a 5MB file. Baaaaad.
(To emphasize : the final trunk repository size is 15MB : 5MB after initial commit of the 5MB file, then a further 5MB totalling 10MB after second commit to trunk, and finally a third 5MB totalling 15MB after merging in from branch and committing again.)
We saw how to get around it with the rebase bloat problem. How to get around it with the merge bloat problem?
One way is to avoid the merge-then-merge-back entirely. If trunk has changed and you can't pull the changes into the branch because trunk and branch have diverged, then rebase instead. You might/will end up with branch repository bloat, but I cover how to deal with that in the preceding section on repository bloat caused by the rebase operation.
All a bit tedious? Perhaps. But easily scriptable.
Of course, if your workflow relies on the merge process, you might just have to accept the bloat. Not ideal. You might be able to avoid the bloat by using the merge -c option when merging back into trunk, to "cherry-pick" only the branch revisions that are not themselves merge-from-trunk commits. And there are yet more desperate approaches one could take if needed - e.g. export branch changes to a patch set, delete branch, recreate it from trunk and apply patches!!! Well y'know, it would probably work.......
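For example, a sketch of that merge -c cherry-pick (the revision number here is illustrative - pick the branch commits that aren't themselves merge-from-trunk commits) :
cd trunk
bzr merge -c 2 ../branch
bzr commit -m "Cherry-picked branch revision 2 into trunk"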
And maybe I need my head checked, but even with a few little problems like this, I still absolutely love Bazaar. (Yes - relatively little. In practice, does it matter if your repository is twice the size it needs to be? Sometimes yes, usually no. For me, it's a little more critical than for others due to certain peculiar circumstances, and hence my investigations in how to avoid/resolve repository bloat.) Thanks for stopping by! :o)
Purging stacked branches in Bazaar
Stacked branches are awesome!
Shared repositories go so far, but they don't work so well if the parent and child branches are far apart in the file system (or on different volumes). They also have the weakness that once you create a revision, it lives on forever, even if you later delete the branch associated with it. You can't actually get the revision back by any means I've found (UPDATE : "bzr heads --all" looks like it lets you find "lost" revisions), yet the shared repository's size never goes down - it just keeps accruing more and more data, never letting any of it go. (UPDATE : I'm no longer entirely sure when the repository's size changes - "bzr pack --clean-obsolete-packs" does wonders.)
In contrast, stacked branches can be used at any time both the parent and child branch are simultaneously accessible (even if they're on different hard disks or even one on a URL), and best of all, if you make an experimental branch and decide to kill it, bam! - its history is gone forever and your trunk repository isn't forever bloated by the revisions you decided to nuke.
And they're extremely useful if you want the same library to be in multiple apps (in different Bazaar repositories) and want to be able to edit the source code in each copy of the library independently but have them all closely associated.
And did I mention they save a lot of storage space?
But thence cometh the problem : stacked branches start out tiny, because they aren't carrying the five decades of history that the trunk contains, but after that they grow.
And grow.
What if you just want the stacked branch repositories to stay nice and trim, like they were when you made them?
There doesn't seem to be any built-in feature in Bazaar to do that.
push, pull, merge, do whatever you want - the stacked branch's repository only grows.
So we resort to a little bit of - very effective - skullduggery.
FIRST UP, ENSURE YOU TRY THIS EXPERIMENTALLY FIRST. It worked for me, but might destroy you and your world and your company's beautiful source code and get you fired. THIS USES UNDOCUMENTED TRICKS. So it could stop working when new versions of Bazaar roll out. I have and accept no responsibility for what happens to you if you try this yourself!
1) Purging the stacked branch history obviously needs to be done at a time when the stacked branch is in-sync with the trunk. So make sure you've merged or pushed the branch into the trunk.
2) In the branch, delete all files in these two folders :
.bzr\repository\indices
.bzr\repository\packs
3) Still in the branch, locate this file :
.bzr\repository\pack-names
... and change its content to the following five lines :
B+Tree Graph Index 2
node_ref_lists=0
key_elements=1
len=0
row_lengths=
Voila! Do a bzr status or bzr log and the history is all there - it's just now coming from the stacked-on branch like you wanted all along. You have successfully purged the stacked branch's history.
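If you do this often, the whole purge scripts easily. A minimal Windows batch sketch, assuming the current directory is the branch root (the redirection is written before each echo so cmd doesn't mistake the trailing digits for stream handles) :
rem Step 2 : delete the local pack and index files
del /q .bzr\repository\indices\*
del /q .bzr\repository\packs\*
rem Step 3 : rewrite pack-names with the five lines shown above
>.bzr\repository\pack-names echo B+Tree Graph Index 2
>>.bzr\repository\pack-names echo node_ref_lists=0
>>.bzr\repository\pack-names echo key_elements=1
>>.bzr\repository\pack-names echo len=0
>>.bzr\repository\pack-names echo row_lengths=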
Wednesday, December 1, 2010
WinMount writes VDI files (VirtualBox virtual hard disk files)
I was having extraordinary difficulty copying a 4GB file from my host to a virtual machine.
VirtualBox shared folders weren't working (the client add-ons couldn't be installed).
Windows file sharing, which usually did the trick, had frozen due to my host getting its nappy in a knot, and rebooting wasn't a suitable option.
I had IIS (a web and FTP server) on my host, and tried sharing the file through that, only to discover that just over 2GB into the transfer, the transfer froze - obviously a bug with something somewhere using a 32-bit signed integer, dying the moment the transfer passed the largest value such an integer can represent (2^31 - 1 bytes, i.e. just over 2GB).
I tried copying the file onto USB external hard disk, then mapping the external hard disk to the virtual machine via VirtualBox's USB mapping, but that too caused much grief and many hangs (not complete system hangs, but VirtualBox hangs).
I did finally, FINALLY, manage to get it to work.
I created a virtual hard disk using the VirtualBox media manager. I happened to use an auto-expand disk so that it would only use as much of my host's storage as required.
I then tried a trial version of the commercial WinMount program. It happily mounted my auto-expanding VDI virtual hard disk, mapped it to a drive letter, and I was able to copy my huge file onto the virtual hard disk.
Then I exited WinMount, mapped the virtual hard disk as a secondary disk to the virtual machine I was trying to transfer the file to, rebooted the virtual machine, and voila, I was able to copy the file from the temporary second virtual hard disk onto the virtual hard disk I wanted it to be on.
VERY convoluted, but we got there.
One weird thing : Norton "security advisor" (or whatever it's called) warned me that the WinMount website is a known source of viruses and/or trojan horses. So, I dunno, my computer might be laced with baddies now. But even though Norton warned about the website, it didn't complain about the WinMount program itself.
In short, if you're wanting to mount a virtual hard disk for read/write access in Windows - especially if it's an auto-expand virtual hard disk - WinMount commercial edition seems to do a very nice job. And it was the only tool I found that even claims to support writing to auto-expand virtual hard disks. So it seems the authors of this tool have managed to accomplish something pretty special.
But of course, don't use it in read+write mode on a virtual hard disk you can't afford to lose, just in case it busts it! i.e. if the virtual hard disk is dear to you, then make a backup before trying ANY 3rd-party tool that purports to provide read+write access! Or so I advise. :o)
Thanks WinMount - I'm puzzled by the virus/trojan warnings, and your on-screen instructions are slightly Chinglishy, but aside from that, your product seems to be very good indeed.
Wednesday, June 9, 2010
Telstra NextG USB modem not so good as a backup device
I thought a Telstra NextG wireless broadband (internet) USB modem would be perfect for those occasional trips out to the country.
Keep it in yer bag, recharge at the point of need.
So it sat around waiting to be needed.
I needed it just recently.
But it didn't work.
I phoned Telstra.
"The USIM has been deactivated. I'll transfer you to the activations and reactivations department so they can reactivate it for you."
But the lady in the activations + reactivations department gave me the following bad news :
A Telstra NextG data SIM expires permanently after just ONE MONTH without a current data allowance.
I don't recall reading THAT in the promotional material.
I expected to be able to go a month here, a few months there, without using it.
Maybe a permanent expiry after six months of no use would make sense.
But just ONE MONTH?
This means that if you want to use the Telstra NextG USB modem, you have to be buying a new data pack every other month at the very least, even if you're not using the service.
I thought the point of "prepaid" was that you could control exactly what you spent and when!
Apparently I was wrong.
Now, it's not all bad, but it kinda gets worse in one sense...
The lady helpfully informed me that I can just go to a retail outlet and buy another SIM for only $2, then recharge it with whatever denomination I want.
Well, that's nice in terms of getting up & running again now, but that's a lot of hassle if I'm intending to recharge just at the point of need.
I mean, let's keep this in focus : I am willing to pay $20 for a recharge (the minimum recharge amount) for that odd occasion I'm out in the country and suddenly need internet access to provide tech support.
I won't use anywhere near the data allowance on the $20 recharge, I'll typically only use it for tens of minutes, and then it will sit unused for the remainder of its 30 day expiry period and expire mostly unused.
In other words, Telstra's profit margin on my occasional $20 recharge would be STUPENDOUS, and I would still feel like I'm getting value, for the sake of being able to provide tech support on a moment's notice at those critical moments.
But now I'm being told that actually, I need to keep on feedin' the card with recharges, whether I need them or not.
The value proposition of the Telstra NextG device is now plummeting, because its minimum cost has suddenly gone from $99 (the purchase price) to $99 plus $120 per year. Not a huge amount, but that assumes I accept the major hassle of timing my recharges perfectly, and never muck up and have to go through the hassle of getting yet another new SIM. If I take the low-brain-power option (automatic monthly $20 recharges), then we're suddenly talking about $240 per year. Minimum.
As a backup device that would only cost money when needed, and the point of need would justify the cost, it was a valuable proposition.
But a minimum of $240 per year, whether I use it or not?
Sorry, I think I'm going to explore alternatives.
AND I don't think this was adequately disclosed at the point of sale.
The device is not fit for the purpose for which I purchased it, yet I purchased it in good faith based on the representations of and impressions made by the sales and marketing material.
---
On another note, I was ASTOUNDED by the Telstra phone service! I'm used to SHOCKING service. But I received commendable service.
For starters, I rang, and the phone was answered by a HUMAN! This has never happened before!
The human directed my call skillfully.
I waited only a few minutes at most.
An operator apologised for the inconvenience of my NextG device expiring (I didn't make a big deal about it to them, even though it is a big deal, so the fact that they apologised without prompting is nice), and one even said "thanks for your patience with Telstra".
"Thanks for your patience with Telstra"!!! What is the world coming to! A big Telco that understands that we, its customers, are humans, and that they have required us to exercise patience! It's like a great big warm heart has been mystically transplanted into what once was the worst of customer service offenders in the country (or at least, the most notable).
So, I don't know what's happening at Telstra, but something good is happening.
I'm getting answers when I call.
I'm speaking with people I don't have too much trouble understanding.
The list goes on.
It'll take a while, but if they keep up like this, people might even begin to think highly of Telstra.
Like I said, it'll take a while...
Monday, April 12, 2010
SQL Server refusing to start
Here's a weird one that boggled my brain for a long ol' time.
SQL Server on my development laptop was refusing to start.
I hadn't used it for months, but didn't think I'd changed anything that could affect it.
The error message was plain ol' weird :
"Windows could not start the SQL Server (SQLEXPRESS) on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 3417."
Well, that wasn't weird, but the associated Event Log entry sure was :
"The SQL Server (SQLEXPRESS) service terminated with service-specific error WARNING: You have until SQL Server (SQLEXPRESS) to logoff. If you have not logged off at this time, your session will be disconnected, and any open files or devices you have open may lose data."
Ah - aha? What's all this log off bizzo?
No makey sensey.
Searching the web found only one other website mentioning the problem. One, in the entire world. And their solution was basically to uninstall SQL Server, delete the remnants of the installation folder, and reinstall.
Well, thanks be to God, the answer dawned on me :
To save space on my SSD (where speed was brilliant but size was cramped), I had compressed all my program files.
That works a treat - program files are almost entirely read-only, so both with hard disks and with SSDs of all varieties, compressing program files usually brings a speed improvement, and certainly frees up a lot of disk space.
BUT, I suddenly recalled that SQL Server stores its "master databases" deep inside the Program Files. By itself, that's not a problem. But I also recalled from years ago that SQL Server uses a special type of low-level file system access that is incompatible with NTFS compression. The disk access used by SQL Server is designed to optimise database page caching for uncompressed data files. But it simply cannot work with NTFS-compressed data files. Full stop. (And there are ways to trick it into working, but believe you me, there are substantial write performance implications if you try.)
So I dug into the SQL Server installation folder, and uncompressed the *.mdf and *.ldf files in the Data folder, and all was merriment once more.
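In case it helps, the uncompression can be done from a command prompt with the built-in compact tool (while the SQL Server service is stopped) - a sketch, with the Data folder path being a typical SQL Server 2005 Express location, so adjust for your instance :
compact /u "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\*.mdf"
compact /u "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\*.ldf"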
For the odd sailor out there cutting close to the wind like me, this might prove helpful. Enjoy! :o)