Shock Totem 2 is a go

Hey everyone,

It's been a while since my last post, as things have been obscenely busy at work and I really haven't had much time to write anything at all.  One thing that has happened over this time, though, is the publication (at last!) of issue 2 of Shock Totem magazine, which features, amongst other great works, a short story I wrote last year.

This is my first professional fiction publication and I'm quite excited.  The issue has been out a little while now, but I was waiting to see my copy before I announced it, and I must say – it's an impressive-looking little magazine.  They've really done a good job with the production values.

I’d like to take the opportunity to thank Ken and the other Shock Totem staff who work hard and do such a great job putting the magazine together.  It was a pleasure working with them and I hope to do so again some time in the future.

In the meantime, go out and buy it!  Links to buy are at: http://www.shocktotem.com/shop.html

It’s available directly from the publisher and also from Amazon and Barnes & Noble online bookstores.

Edit: If you want to know more, the Journal of Always has the first review up! http://journalofalways.blogspot.com/2010/08/review-shock-totem-2.html

Read it?  Let me know what you think in the comments.

The HTML5 video battle: Part II

This is a quick update to yesterday's post titled "Google opens the VP8 Codec".  In that post I described the lay of the land for software developers interested in the online video space, the battle over which codec will become the de facto standard for HTML5 video, and what it all means for us.

Almost as soon as I posted, it needed updating – the news is coming thick and fast.  All of the involved companies seem to realise that this is a major turning point for the web, and they all want it going their way.  Most of the contenders also realise that they don't have the sway to push this issue alone, so it is quickly devolving into a fight between two camps – Team Google and Team Apple – with two different technologies and two completely opposed philosophical outlooks on the way the technology world should work.

In my previous post I accidentally missed one of the contenders, as they are really of only tangential influence in the current debate.  To summarise once more, the five players in this drama are Apple, Google, Microsoft, the Mozilla Foundation and Opera Software – each has its own browser with its own dedicated followers and a greater or smaller share of the overall browser market: Safari, Chrome, Internet Explorer, Firefox and Opera.

To catch up on the basics of the issues, please see my last post – here we will deal with the rapidly changing landscape and how it already differs from what I laid out yesterday.  There is no doubt that everyone was waiting for this announcement from Google, given the speed with which the other players have responded.

To start, Firefox and Opera both supported the Ogg Theora codec as the default (and only) HTML5 video codec.  Mozilla have been very vocal on this point, routinely calling for support in preventing H.264 from becoming a patent-encumbered de facto standard.  It was announced yesterday that the current developer editions (nightly builds) of Firefox and Opera also support VP8 – throwing their weight behind Google and the newly opened codec.

These three, then – Firefox, Opera and Chrome – represent a majority of the browser marketplace.  According to the W3Schools market share report, together they hold 62.2% of the browser market.

It is no surprise, then, that Microsoft, despite reservations, announced support for VP8 in Internet Explorer 8.  Not native support, it must be said – rather, they have stated that anyone who has the VP8 codec installed will be able to view VP8 video via the HTML5 video tag in IE8.  This, to me, is a very defensive position for Microsoft: obviously they're still worried about patent attacks and so don't want to bundle a potentially encumbered codec with their own browser.  However, the move also shows that they realise they can't ignore the potential of VP8 to break H.264's stranglehold and become the web standard.  They can't decide which camp to bet on, so they're betting on both.

Whilst native support would have been better, this is still a partial win for VP8 supporters and raises the total market share of VP8-compatible browsers to 78.4%, assuming that VP8 support will be coming only to IE8.  This is a near-overwhelming victory, as the remaining market share is split between IE7 and IE6, which together account for about 17.2% (hands up if you can't believe that IE6 still commands 7.9% – upgrade, people!), and Safari, the last remaining hold-out, with a tiny share of only 3.7%.

Given that IE7 and IE6 are unlikely to be upgraded at all for HTML5 support, we can effectively count that 17.2% of the market out of this discussion – those users are unlikely to get any codec until they upgrade.  So only 82.1% of the market is actually involved in this discussion at all.  What this means is that of all the people who will be able to access HTML5 video, 95.4% will be able to access VP8 (either natively or by installing the codec themselves).  The 4.5% using Safari will be the only ones who cannot.

Compare this to Ogg Theora, supported only by Firefox, Chrome and Opera, and you have a potential reach of about 75% of that same market.  Still a goodly amount, but you're missing anyone using Safari or Internet Explorer.

H.264 is supported by Chrome, Safari and Internet Explorer.  It is the only codec Safari users can see, and it reaches only about 40% of the potential marketplace.

Numbers like these would normally mean that VP8 was already a clear winner and the battle was over before it began – H.264 a non-starter.  So what do Apple and the mighty Steve Jobs have to say about this?

According to an article at The Register: VP8 is a bad choice because it will be liable to the same patent attacks as Ogg Theora, and it performs more slowly, with worse compression, than H.264.  Their evidence for the poor performance?  As linked in the article, a paper written by a college student – a supporter of H.264 and contributor to an open source decoder of the same – who claims that VP8 performs poorly, will not rival H.264 in any way, and has a poor spec that Google is unlikely to correct.

This is the reality distortion field in full effect, and it will be interesting to see the Jobs supporters rally behind this particular piece of FUD.  I am not personally qualified to judge the quality of VP8 vs H.264, and a quick search on the net shows the world is divided between those who think it is better, those who claim it is not, and those who really don't see any difference.  I think the quality issue for web video is itself a non-starter, though: the 95% support ratio would, under normal circumstances, push it aside, as it has in technological battles before.  Quality of tech generally comes second to ease of use and compatibility with content.  If all the sites you like use VP8, you're unlikely to care that H.264 is better.

That said, this move shows the supreme arrogance of the Apple community.  What Jobs is saying with a one-line email is, in effect: we believe VP8 is no threat because a college student said so.  Now, this college student might be the world's best video codec analyst, but how is anyone to know?  It's one college student's opinion against Google's engineers, On2's original developers, and every other person who has played with the tech and pronounced it good.

Enough on that – as I said, it's likely a non-issue.  The real issue is that Apple is holding firm: no VP8 support, H.264 only.  What does this mean for us?

I'd love to call this one for VP8: overwhelming browser support, Google standing behind it and stating they are completely unafraid of potential patent trolls, and 95% of the available market able to play the codec.

We can't, though.  Apple, despite having only a 3.7% share of the browser market, has a 100% share of the iPhone market.  With no support for Flash, H.264 is currently the only way to get video onto those devices.  Sure, overall this portion of the market is relatively small, and Android will no doubt have VP8 support – but here is the real issue: this battle will, in the end, be decided by us.  The developers.  If the majority of sites adopt VP8 as the de facto standard, Apple will feel ever more pressure to support it in the iPhone's Safari client as iPhone users grow more and more frustrated at their inability to view those sites.

The question remaining is how many developers are willing to cut out such a visible – some would say inordinately visible – section of their market.  iPhone uptake is pushing many of the larger sites towards HTML5, and with that as their driver they are unlikely to choose a video codec that the iPhone doesn't support.

The battle is far from over, everyone, and we still can't call a winner here.  Don't doubt for a second that Jobs understands the mindshare and brand power he wields with his "magical" devices, and don't doubt that he will use every ounce of leverage he has to make the world conform to his reality distortion field.

I will continue to update as more news comes in, but for now I think my previous advice still stands: the safest route is H.264, with a Flash backup for Opera and Firefox, which don't support it.

My preferred option? I have to say I'm weighing in with VP8 here.  I had really thought we'd left behind the time when we coded the same page in different ways to cater for differences in browsers.  We need a standard web, and that means a standard video codec.

Thoughts anyone? Have I missed anything or am I just plain nuts? Let me know in the comments.

Google opens the VP8 Codec

In case anyone missed the news this morning: at Google's developer conference, they announced the long-awaited (and widely suspected) open-sourcing of the VP8 video codec. In a quick follow-up, Adobe announced the release of a new add-on kit for Dreamweaver CS5 supporting the new(ish) HTML5 tags and utilising VP8 as the video codec of choice for the new <video> tag.

What does this mean for us?

The HTML5 video landscape is a complicated one that has been causing a fair amount of confusion over the last few months.  Basically, what we are seeing is the biggest, most important tech companies of the day squaring off against each other over video codecs, and the whole mess exists because the standards committee does not specify, in the standard itself, a codec that must be used for interoperability.

There are four key players in this drama, and they are the usual suspects – Apple, Microsoft and Google, joined by the Mozilla Foundation.  Combined, they represent a massive majority of browser traffic on the web through their four key browser products: Safari, Internet Explorer, Google Chrome and Firefox.

These browsers are split on which video codecs they are going to (or already do) support for the video tag in HTML5.  Apple and Microsoft have joined forces, and both Safari and Internet Explorer support the H.264 codec exclusively, much to the consternation of many developers and interest groups.  H.264, although a great codec, is heavily patent-encumbered, which has financial consequences for anyone wishing to work with it.  As many have correctly pointed out, this introduces a potentially insurmountable barrier to entry for the many developers and companies who are unable, or unwilling, to pay the required licence fees.

Just what licence fees are required, and who has to pay them, is another scarily murky area.  It's well known that both Apple and Microsoft are paying heavy licence fees for the right to use the codec in their browsers, but what of developers and content producers? Are they required to pay a licence fee to use the codec?  Opinion is divided on the topic.

In response to this, Mozilla spoke out against the codec and has refused to add support for it to their Firefox browser (though support is being added for countries unencumbered by the patents, via the Wildfox fork).  Instead, the Firefox browser supports the open source Ogg Theora codec.

Apple has, quite famously, claimed that Ogg Theora is breaching several patents, stating that a "pool of patents" is being drawn together to "go after" Ogg Theora. Whether or not Apple themselves will be involved in this attack is not clear.

Google appears to be taking advantage of the chaos to push uptake of their Chrome browser by taking the sensible route (someone had to) and supporting both codecs.  On top of this, they have now opened the VP8 codec as previously mentioned, offering an alternative to both H.264 and Ogg Theora that promises protection from patent attacks (at least, until someone attempts to claim that it, too, violates H.264's patents).

So what we have is a fractured landscape, now containing three separate codecs.  At this point, from the user's point of view, Google Chrome has to be the logical choice, as it will support all three codecs and so you won't keep coming across sites whose video you are unable to view.  Given that Mozilla's stated reason for boycotting H.264 is to "avoid helping uptake and de-facto standardisation of a patent encumbered codec", it will be interesting to see if they write support for VP8 into the Firefox browser.
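
It's worth noting that the video tag itself copes with this fragmentation, after a fashion: a page can list the same clip in several encodings and each browser plays the first source it understands. A quick sketch with placeholder file names (VP8 ships in the WebM container, hence the .webm file):

<video width="640" height="360" controls>
  <source src="movie.mp4" type="video/mp4" />
  <source src="movie.webm" type="video/webm" />
  <source src="movie.ogv" type="video/ogg" />
</video>

The obvious catch is that you're now encoding and storing every video three times, which is exactly the sort of duplication a single standard codec is supposed to remove.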

What this means for developers is more complicated.  By offering a truly free and patent-unencumbered codec, Google has taken a big step towards standardising the platform and providing a web-video solution that could truly cross all browsers.  Unfortunately, for that to work all of the other browsers will have to come to the party and support VP8 themselves, and in the short term this seems unlikely.  Apple in particular seem devoted to the H.264 codec and will likely fight any solution that seeks to minimise its use.  The fact that they have been visibly antagonistic towards Google over the last few months is unlikely to help matters.

So unfortunately, the best choice for developers in the current situation is probably H.264.  It is currently supported as a straight HTML5 video tag by every major browser except Firefox.  For Firefox users we're back to where we were years ago, writing pages that display different content for different browsers. H.264 is, however, also one of the codecs supported natively by Flash, so a solution that won't require keeping two differently encoded versions of each video is to embed a Flash player in the page when the browser is detected as Firefox.

What we'll have, then, is a page whose video can be viewed via HTML5 in every browser except Firefox (including the iPhone and iPad browsers), and via a Flash player plugin in Firefox.
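
As a rough sketch of that approach – file names are placeholders, and rather than sniffing for Firefox by name, this asks the video element directly whether it can play H.264, which comes to the same thing here:

<video id="player" width="640" height="360" controls>
  <source src="movie.mp4" type="video/mp4" />
</video>
<script type="text/javascript">
  // If this browser can't play H.264 through the video tag (e.g. Firefox),
  // swap the video element for a Flash player pointed at the same file.
  var probe = document.createElement("video");
  var canH264 = probe.canPlayType &&
                probe.canPlayType('video/mp4; codecs="avc1.42E01E"') !== "";
  if (!canH264) {
    var flash = document.createElement("object");
    flash.setAttribute("type", "application/x-shockwave-flash");
    flash.setAttribute("data", "flashplayer.swf"); // placeholder player SWF
    flash.setAttribute("width", "640");
    flash.setAttribute("height", "360");
    var param = document.createElement("param");
    param.setAttribute("name", "flashvars");
    param.setAttribute("value", "file=movie.mp4");
    flash.appendChild(param);
    var video = document.getElementById("player");
    video.parentNode.replaceChild(flash, video);
  }
</script>

Feature detection like this also quietly covers any other browser that lacks H.264 support, not just Firefox.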

For the near future that seems to be the best option.  It does leave the question of content creator licensing open and it certainly isn’t optimal, but barring an unusual act of respect for developers and open standards on the part of Apple, it is likely to be the situation we are stuck with for some time.

Debugging NUnit tests under Visual Studio 2010

Upgrading to a new version of a development framework always holds some little surprises and gotchas, no matter how careful you are.  I found one of these the other day during the transition of some of my projects from .Net 3.5 and VS2008 to the new Visual Studio 2010 and .Net 4.0.  When I initially upgraded I ran all my unit tests to ensure they worked, and they do – the new version of NUnit already supports .Net 4, which is great news.

I didn't play around with it much after that, as I had some problems with the Moq mocking framework running in VS2010 (more on that in a post to come).  However, when I did find the time to make some modifications to the tests and check them, I noticed something very strange – I could no longer debug my tests.  They ran fine, but in the debugger I was greeted by an old friend of anyone who has done significant debugging in Visual Studio over the last ten years: breakpoint will never be reached, no symbols loaded.

I played around for a while and eventually found the problem.  Traditionally, debugging NUnit in Visual Studio has been done in one of two ways: you can attach the debugger to the NUnit GUI manually after launching it, or you can configure the debugger to launch NUnit as an external program (so you can F5 it!).  The second method is the one I usually prefer, but unfortunately we can no longer use either of these methods as they stand.

I'm not sure what has caused this issue, but to debug NUnit tests in Visual Studio you now need to manually attach the debugger, as above – except that instead of attaching to the NUnit process, you need to attach to the NUnit-Agent process.  This causes problems for people like myself who like to hook NUnit debugging up to the F5 key, as launching that way attaches you to the program that was launched – in this case NUnit – not to the agent process.

Seems that for now the only method is manual attachment. Hopefully this will be changed back in a future version, though it may be some time.

Edit:

Thanks to Klaus for pointing out a better workaround below!

Apparently the real issue is that NUnit itself loads up under .Net 2.0 and then has to load the 4.0 test modules, so it does so under the separate agent process. (It didn't have to do this for 3.5 because .Net 2.0 through 3.5 all run on the same 2.0 CLR, whereas 4.0 brings an entirely new runtime that a 2.0 process can't host.)  The solution is to force NUnit itself to run under 4.0.  This can be done with the following addition to the NUnit.exe.config file.

First, directly under the configuration element, add these lines:

<startup>
  <requiredRuntime version="v4.0.30319" />
</startup>
This sets NUnit to run under the 4.0 CLR.

According to this page, you should also add this line under the runtime element:

<loadFromRemoteSources enabled="true" />

This allows libraries downloaded from a remote source (such as a website) to be loaded with full trust.  I'm not sure why this would be necessary and indeed, my tests debugged just fine without it.  However, if you have problems, you might want to give it a try.

Note that the link given above gives the version number to enter as v4.0.20506, whilst here I've said v4.0.30319.  The former is for the .Net 4 beta, whilst the latter is the RTM version.  A good list of version numbers can be found here.
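
Putting it all together, the relevant skeleton of NUnit.exe.config looks like this (your file will already contain other settings – leave those in place):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- Force NUnit itself to start under the 4.0 CLR rather than 2.0 -->
  <startup>
    <requiredRuntime version="v4.0.30319" />
  </startup>
  <runtime>
    <!-- Optional: lets assemblies loaded from remote sources run with full trust -->
    <loadFromRemoteSources enabled="true" />
  </runtime>
</configuration>
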
Hope that helps!

Broken SmtpClient in .Net

I've been meaning to post about this for a while, as I discovered the bug a few weeks ago: SmtpClient, from .Net 2.0 through 3.5, is broken.

By broken, I mean not compliant with the SMTP specification.  The SmtpClient class is a fantastic little object that can handle most scenarios you would need for sending email from personal apps, including multiple encoding formats and SSL handling.  However, it has a flaw – it doesn't close connections to the server correctly.

The SMTP spec calls for the client to issue the "QUIT" command when finishing up communications.  The SmtpClient object instead, after sending the mail data, returns control to the calling app and leaves the connection hanging.  Most servers seem to handle this OK, but it's certainly not good practice, and it's causing me no end of bother now.
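
For reference, a compliant session should end with an exchange like this (the exact text of the 221 reply varies from server to server):

C: QUIT
S: 221 Service closing transmission channel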

I've been working on a project called PseudoSmtpServer, which is exactly what it sounds like: a fake SMTP server designed for test-driven development.  It implements the standard SMTP commands, but instead of the usual SMTP internals it stores all the information it's sent, so that test code can query it to verify that sends occurred properly.

It's been an interesting project, but I've had quite a lot of problems – some due to this SmtpClient bug, and some due to the TcpListener class not performing as expected (in particular, taking a long time to release a socket).  This generally leads to tests behaving erratically, seeming to fail without cause because the previous test claims to have cleaned up but in reality hasn't finished releasing the socket for the next test.
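
One workaround I'm experimenting with for the slow socket release – an assumption on my part rather than a confirmed fix, sketched with a placeholder port – is to create the listening socket directly and set the ReuseAddress option, so the next test can rebind the port straight away:

using System.Net;
using System.Net.Sockets;

class ListenerSketch
{
    // Sketch only: placeholder port, no error handling.
    public static Socket CreateListener()
    {
        Socket listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        // ReuseAddress lets a new test rebind the port immediately,
        // rather than waiting for the OS to finish releasing it.
        listener.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 2525));
        listener.Listen(1);
        return listener;
    }
}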

I'm currently rewriting my listener to make direct use of sockets rather than relying on TcpListener's implementation.  Meanwhile, I am led to believe that the QUIT bug in SmtpClient may finally have been fixed (about time!) with the release of .Net 4.0.  I'm trying to upgrade the project to .Net 4.0 now, but am having difficulty with the Moq libraries; I'm investigating and will post more on the topic later.
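
If that fix has indeed made it in, the .Net 4.0 usage pattern should look something like this – SmtpClient now implements IDisposable, and disposing it is meant to send the QUIT command (the host, port and addresses here are placeholders):

using System.Net.Mail;

class SendSketch
{
    public static void SendTestMail()
    {
        // .Net 4.0 only: Dispose() issues the SMTP QUIT command and closes
        // the connection, which earlier versions never did.
        using (SmtpClient client = new SmtpClient("localhost", 2525))
        {
            client.Send("from@example.com", "to@example.com", "Test subject", "Test body");
        }
    }
}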

AddressAccessDeniedException thrown when starting a service with ServiceHost on Localhost

I've been working my way through Apress' "Introducing .Net 4.0 with Visual Studio 2010" to get a handle on what's new in the latest version of my environment. I skipped this step when moving from 2.0 to 3.5, which led to me missing quite a few great tips – something I was determined would not happen this time.

Working through Chapter 7, WCF Services, I've found something the book's author seems to have missed: one of his code samples fails to work for some people, including myself. I tracked the problem down to the fact that I was running Vista (strangely, I would assume the problem then also exists on Windows 7, the author's operating system, but maybe not).

Basically, the example instructs you, having written a service, to create a console app that creates a ServiceHost object and opens it, thus starting your service. This is done like so:

ServiceHost host = new ServiceHost(typeof(YourServiceType), new Uri("http://localhost:8888/myservice"));
host.Open();

After this, assuming all goes well, you can browse to localhost:8888/myservice and view the published service. For me, it didn't go well. I hit F5, and as the application began to run it threw an AddressAccessDeniedException, claiming that I didn't have access rights to create an endpoint at that address.

This is confusing, of course, until you realise that Vista requires you to elevate your permissions to do various things – one of which is opening ports and creating address endpoints. A normal F5 in Visual Studio runs without elevation, so it is completely unable to perform this task.

I have found two solutions to this issue.

The first, given that in this particular example we’re using a console app, is to open a command prompt with full administrator access (Run As Administrator), change directory to the bin/debug directory of the program and run it from there. It works fine.

The second is to open Visual Studio itself with full administrator access, after which F5 will work as expected. It's an interesting little bug/feature and worth keeping in mind next time you stumble over an unexpected access-denied issue.
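
There's also a third option worth mentioning if you'd rather not elevate every time: have an administrator reserve the HTTP namespace for your user account, after which the endpoint can be created from a normal, non-elevated process. From an elevated command prompt (the URL matches this example; the domain and user name are placeholders):

netsh http add urlacl url=http://+:8888/myservice user=MYDOMAIN\MyUser

Once the reservation is in place, a plain F5 under your normal account should work.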