Just Because You Can Do a Thing, Does not Mean You Should Do a Thing

Microsoft Test Manager 2010 (MTM), the testing and lab management extension to Microsoft's ALM product, Team Foundation Server 2010, is perhaps an example of bad user interface design.  I find the look and feel foreign not only to users familiar with Visual Studio but arguably to users of anything else Microsoft has made historically.

[image]

One of my personal pet hates is nested tabbed windows.  I know Mr Cooper (author of the fine About Face series of books) has covered these kinds of bad habits before. I find such a GUI style ultimately leads to me getting lost in nested windows; it's very difficult to discover or locate functionality.

Microsoft Test Manager not only excels at nested tabbed windows, which make the tool difficult to use, but it also attempts to realise an inductive user interface design (an oxymoron when combined with nested tabs) and fails.

Is that a Title Bar or a Menu?

I was reading an article somewhere on the net when someone mentioned Microsoft Lab Manager, a tool included with Test Manager. Apparently it was for running tests on applications, particularly those running in virtual machines.  Lab Manager? Where was that? I'd not seen it.  I looked at Test Manager again; here is the upper left of the window:

[image]

Well, nothing conspicuous there: some navigation buttons and a window title… or was it? Let's expand the view:

[image]

I'd never noticed the inverted triangle widget between "Testing Center" and "Plan"; clicking it displays a drop-down for the Lab:

[image]

Clicking on it makes the application switch to the Lab Center.  Now this breaks several important rules of GUI design.

  1. Menus should never be "bang" menus; that is, they should not act like buttons.
  2. Buttons should be captioned with verbs, not with the current state.  Many people fall into this trap with a single toggle button that toggles application state but also toggles its label: "Started" when the application is running, "Stopped" when it is not. This "Testing Center" widget is in effect a toggle button displaying state. Instead it should be a menu labelled "Mode" with a good old-fashioned tick against "Testing Center" or "Lab Center" as appropriate.  Just like clicking "Start" to shut down Windows, clicking "Testing Center" to switch to "Lab Center" makes no sense.
  3. It is not immediately apparent that this is a clickable widget to begin with. Silly people like me mistake it for the window's title. The inverted triangle should be closer to the caption it relates to.

Lab mode looks like this. I particularly liked how it suddenly seemed empathic towards me by turning green. And no, it wasn't the result of "envy".

[image]

It is perhaps an amateurish piece of software, design-wise at least; I wouldn't be surprised if it was designed by upstart Windows Presentation Foundation (WPF) developers with little or no design experience.  It's like:

“…hey! we have this kewl new WPF tech. let’s put it in the testing suite of TFS 2010…”.

This is arguably a formula for disaster.  Just because you can do a thing does not mean you should.  Take Windows Phone 7: it was done in Silverlight with the XNA team, but they forgot some of the most important things in a smartphone, such as the ability to copy and paste.  Amateurs.  I pity the ex-PocketPC and Windows Mobile 6 developers at Microsoft.

Another dreadful feature of Test Manager is that, for the most part, you cannot view, edit or review items defined in Lab mode while in Testing mode, or vice versa; you must switch modes first.

Oh, and did I mention there are no pop-up windows either?  Too bad if you need to look at two things at once.

Amateurish!

 

WCF and nHibernate Can Live Together in Peace

WCF and NHibernate are two rather useful technologies for creating, say, SOA Entity Services[1].  WCF, particularly in .NET 4, makes it extremely easy to author a communications conduit, whilst NHibernate unquestionably does for DB access what WCF does for communications.  Whether or not NHibernate has a place in the large-scale enterprise is a topic for another day.

I recently came across some peer code which cleverly defined a custom WCF service behaviour for initialising NHibernate when the service is first accessed and then for each method call.  It made good use of the ability to apply .NET attributes to the service class, so developers in effect merely had to add one line of code to turn a WCF service into a DB-aware one. Quite well done, I thought.

A problem occurred, though, when one wanted simply to check that the service was published correctly by browsing to the .svc file in a web browser.  Instead of the usual WCF greeting page displaying pseudo client code, an error and stack dump was displayed mentioning something about NHibernate.

Unfortunately, the mere presence of the class-level attribute meant that the NHibernate custom service behaviour was being invoked, and I had not set up my database or some such.  Interestingly, the error occurred even though no service methods were being invoked.  This made it technically difficult to verify the successful publication or hosting of the WCF service.  More disturbing was that it broke all my MSTest tests for instantiating and invoking the WCF service.  Needless to say, TeamCity was now displaying lots of red lights.

We discussed this and came up with the idea of moving the NHibernate service behaviour into the service configuration (in this case the web.config file) rather than hard-coding it with attributes or inline code.  I'm a great fan of defining behaviour by configuration, irrespective of the technology the config is persisted in.  The beauty of this approach is that we can take advantage of web.config transforms for each build configuration: debug, release, with DB and without.  Just perfect for continuous integration testing of the service itself, as well as deployed testing between client and service.

WCF allows you to define your own custom service behaviours and register them in the config file. This can be done manually by web.config heroes or via the magnificent WCF Service Configuration Editor.

First you need to define a behaviour extension element. Basically this represents the XML node in the config file and acts as a moniker for creating an instance of your extension.  It is this moniker that the Service Config editor detects, not the behaviour extension itself.

public class NHibernateServiceBehaviourExtensionElement : BehaviorExtensionElement
{
    protected override object CreateBehavior()
    {
        return new MyNHibernateServiceBehaviour();
    }

    public override Type BehaviorType
    {
        get { return typeof(MyNHibernateServiceBehaviour); }
    }
}

Then to define the actual extension:

public class MyNHibernateServiceBehaviour : Attribute, IServiceBehavior { … }
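As a rough sketch (this is not the original peer code), such a behaviour might look like the following. `NHibernateBootstrapper` is a hypothetical helper standing in for the real session-factory initialisation; in practice the per-call session work would go in a message inspector or call-context initializer.

```csharp
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;

// Hypothetical helper: stands in for real NHibernate start-up code, e.g.
// new Configuration().Configure().BuildSessionFactory().
public static class NHibernateBootstrapper
{
    public static bool Initialised;
    public static void Initialise() { Initialised = true; }
}

public class MyNHibernateServiceBehaviour : Attribute, IServiceBehavior
{
    public void Validate(ServiceDescription serviceDescription,
                         ServiceHostBase serviceHostBase)
    {
    }

    public void AddBindingParameters(ServiceDescription serviceDescription,
                                     ServiceHostBase serviceHostBase,
                                     Collection<ServiceEndpoint> endpoints,
                                     BindingParameterCollection bindingParameters)
    {
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
                                      ServiceHostBase serviceHostBase)
    {
        // Build the NHibernate session factory once, as the host opens.
        NHibernateBootstrapper.Initialise();
    }
}
```

Because the class derives from Attribute as well as implementing IServiceBehavior, it can be applied either as the one-line class attribute described above or registered via configuration.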

Ideally, put both of these into the same assembly.  Because of the way WCF and the Service Editor work, the assembly will need to be on the same path.  This is arguably not feasible on all occasions, so we elected to place them in the GAC.  Luckily NHibernate, log4net and hibernatinglamas (or was that rhinos?) are all strong-named.

With that done, I recommend you use the WCF Service Config tool to do the next bit.  After all, if this part fails here, it most likely will in production too.

[image]
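The configuration the editor produces looks roughly like this; the extension name, namespaces and PublicKeyToken below are placeholders, not real values:

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- The type attribute must be the full assembly-qualified type name -->
      <add name="nhibernateBehaviour"
           type="MyCompany.Services.NHibernateServiceBehaviourExtensionElement, MyCompany.Services, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <serviceBehaviors>
      <behavior name="DbAwareBehaviour">
        <!-- The moniker defined above, applied to the service -->
        <nhibernateBehaviour />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```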

Save that to your typical config transforms and it keeps everyone happy, even grumpy slappers of NHibernate such as me. 😉  This approach allows you to have full-debug WCF-only configs for local testing; full-debug WCF+DB testing; CI WCF-only; and CI WCF+DB.  Configs for all occasions.

Happy happy joy joy!

——————————–
[1] Thomas Erl, SOA Design Patterns

Apple iLife ’11 is Amazing

Every new Mac now comes with iLife '11, which includes iPhoto, iMovie and GarageBand amongst others. I remember looking at these apps, particularly iPhoto, and sort of dismissing them as entry-level filler.  In hindsight this was perhaps premature, and I suspect it is because we Windows users are so used to the over-the-top visual noise Microsoft subjects us to that unless the screen is filled with absolute nonsense, the app must somehow be sub-standard. That formula arguably shouldn't be applied to a Mac. I've come to realise that Mac apps follow a no-nonsense style of UI design that Alan Cooper would arguably be proud of.  Incidentally, running Microsoft Word on a Mac is a prime example of Sesame Street's "One of these kids is doing his own thing": it makes Word look as out of place as a donkey in the Melbourne Cup.

iLife ’11

The reason for my change of heart was last night's viewing of the Apple Special Event, October 2010: the new MacBook Air, Mac OS X Lion and iLife '11.  In it, the demos of iMovie, GarageBand and iPhoto just blew me away.

Looking back, it’s quite clear that Windows Live Essentials is Microsoft’s very “version 1” attempt at trying to copy Apple.  I’ll leave it to you to determine the winner here.

Professional Productions Made Easy

Regardless of the medium, whether you are working with images, video or music, iLife can in a few mouse clicks let you make wonderful geographically-linked photo slideshows, hard-bound books, video trailers, facial recognition from video, and the brilliant "spell-checker for music" rhythm/groove fixer.

If you have not seen the above mentioned Apple Special Event video on iTunes, I highly recommend you check it out.

User Interface Design for Voice Command Accessibility

Now that I've been playing around with voice dictation and voice commands for a few weeks (whether via the built-in Windows feature, Vista onwards, or via a third-party product such as Dragon NaturallySpeaking), I find that you must give a different prefix to commands depending on what sort of application you are addressing: rich or lightweight, that is to say a native Windows application or something in a web browser, be it Internet Explorer, Opera, Firefox or whatever.  Let me explain:

For humble experienced users such as myself, this change in the way I must issue a command can be rather frustrating, because I must remember what type of application I am using and then adjust the way I speak to the computer accordingly.

 

…they should not really care whether the thing that is in front of them is a rich application or something that is in a web browser…

 

If I'm just performing dictation (for example dictating my name, address, or the name of a document) into some form of text box, or moving the selection of a listbox up and down, then it doesn't really matter whether I'm using Windows Explorer, Microsoft Internet Explorer, Firefox or Opera; I can dictate to all of these types of applications without changing the way I do it, which is good.

 

Local Rich Applications

However, if I wish to click a button, menu item or toolbar button, or select a hyperlink (whether on a webpage or in a local rich application's legacy pre-HTML help file), the process is something like this:

(Speak; you don't actually say the word "speak") "xxxx", where xxxx is the item being selected: Start Menu entries, menu items, folders, toolbar buttons, text fields and labels. e.g. "my computer", "pictures", "tools", "options", "bold", "ok", "cancel".

In other words, you just say the name of the control itself (which generally matches its tooltip).

Web Applications

However, for any web page, regardless of browser, the approach seems to be the following. Users must say:

click OK

click Cancel

click Mozilla

click first name / click textbox…[1,2,3,4] (as in the case of Dragon)

Sadly, if the webpage doesn't use your run-of-the-mill web controls, then you are out of luck, and controlling these types of controls requires much more dexterity.

Summary

This sort of thing is rather confusing and will no doubt confuse users, because at the end of the day, when sitting in front of a computer, they should not really care whether the thing in front of them is a rich application or something in a web browser. Both are running on the same monitor; how they are presented is irrelevant, and it is perhaps arrogant to force the user to adapt to the underlying technology used to present the information.

Don’t Touch or You’ll Break It!

XNA is broken!…there I said it.

Like many seasoned developers, I'm fairly sure I have used quite a few software development APIs, class libraries and frameworks.  Some were rather useful in providing extensive features with the minimal amount of effort required on one's behalf (Windows Forms via .NET), whilst others, though very powerful, required you to tinker quite a bit before you could actually do anything (Win32 GDI via C/C++, pre-MFC).  Though these two examples of APIs are vastly different in terms of user-friendliness and skill set, they share one thing in common: arguably very little has changed in these APIs over the years that would cause your application to break or at least require a major rewrite, assuming of course your application was implemented as per guidelines.  Unless of course you were unfortunate enough to be using Borland C++ and its odd message pump model.

e.g. with a well-thought-out API together with skilled application developers, much of the following was incident free:

  • Moving a 16-bit C/C++ Windows 3.x application to 32-bit Windows 95 (new helper macros were introduced prior to Win95).  Generally, the majority of issues in C/C++ were that newer versions of the compiler defaulted to newly-introduced compiler warnings regarding the language itself, rather than being a reflection of the API.
  • .NET 1 apps to .NET 2 and then finally to .NET 4, 64-bit!  Conversion is merely a matter of loading the project in the new version of Visual Studio.  What could be easier?

Some of the best APIs are the ones that are well designed from day one.  Though many pubescent upstart programmers poo-poo'd the MFC framework, particularly the document-view model, I rather liked how, at the time, the same code when run many years later could take advantage of document persistence over the Internet without major changes!  Back to APIs: generally, when a new API version is released you don't want it to break your apps, and as this example shows, it's nice when client code gets a free set of steak knives over time. Black-box technologies such as COM and OLE Automation objects are some of the best ways to enforce this.  Because no actual classes are exposed, but rather contracts in the form of interfaces, a developer can usually be confident that future versions won't break, mainly because COM technologies allow for side-by-side deployment of APIs.  Of course it is up to the API writer to follow this pattern, or at least follow the golden rule of interface contracts: a previously published interface should never be changed! Whack your new methods and properties in a new contract and expose that in addition to the prior one.

e.g. some great COM and/or OLE Automation APIs that proved their resilience:

  • OLEDB and ActiveX Data Objects (ADO)
  • DirectX
  • Windows Shell
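The golden rule of interface contracts can be sketched in C#; the interface and member names below are invented purely for illustration:

```csharp
using System;

// Version 1: a previously published contract, frozen forever.
public interface IRenderer
{
    string DrawPoint(int x, int y);
}

// Version 2 adds members via a NEW contract rather than editing the old one,
// so existing clients compiled against IRenderer keep working unchanged.
public interface IRenderer2 : IRenderer
{
    string DrawSprite(int x, int y, int size);
}

public class Renderer : IRenderer2
{
    // The old contract is redirected internally to the new implementation,
    // preserving the old logical behaviour.
    public string DrawPoint(int x, int y)
    {
        return DrawSprite(x, y, 1);
    }

    public string DrawSprite(int x, int y, int size)
    {
        return string.Format("sprite at ({0},{1}) size {2}", x, y, size);
    }
}
```

Old clients continue to call `IRenderer.DrawPoint` and never notice that, behind the scenes, the work is now done by the version 2 member.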

So what’s my beef with XNA?

In case you didn't know, XNA is a .NET wrapper of sorts around DirectX for 2D and 3D graphics, amongst other things.  It is also a framework as opposed to a class library, not that that is the issue.  Well, we .NET developers have in my opinion (which goes without saying, since this is a blog) been subjected to what seems to be an apparent lack of long-term vision with regard to developers by the makers of all things .NET-DirectX related.  The current stable release is XNA 3, though there is an XNA 4 CTP.  Prior to XNA 1 there was a beast called Managed DirectX (MDX).  MDX was rather good; you could actually have a 3D viewport in a .NET Windows Forms application! Marvellous! It was infinitely easier than C++ COM-based DirectX, even to COM experts like lil-old-me.  The only bad thing about MDX was that anyone with an inkling of framework design knows you never expose classes! [2]  Where the devil were all the interfaces?!  I fear a storm is coming.

Fig 1. Ahh…the good old days of MDX, where one could actually render 3D in a .NET Windows Forms window.

Sure enough, a storm did come in the form of XNA v1, a replacement for MDX, and MDX was never spoken about again at dinner parties.  Not only could we no longer render inside, say, a .NET Windows Forms window[1] (boo!!), but the API was completely different (boo ^ 2).  Time to rewrite your app!  To make matters worse, it was still a silly class-based framework.  Like the oil spill in the gulf, no one seemed to care and things just got worse.

XNA 2 came along and, sure enough, it broke your app, not because of the use of some undocumented feature, but pretty much because the primary API changed.  XNA 3 was just as big a bully.

Now we have the XNA 4 CTP.  Having heard some exciting titbits about the new version, I quickly installed it; created a new project in Visual Studio 2010; grabbed myself a nice cup of tea; and proceeded to attempt to draw a series of dots on the screen with code familiar to me from XNA 3.  Alas, it seems such tasks are too far beneath, or too cumbersome for, modern GPUs, for I was rudely informed in a nice shade of red lettering that PointList is now politically incorrect.  Maybe if I put in 8 video cards or sacrifice a gerbil or two it will let me? One helpful chap in the XNA forums even said:

“…Points and point sprites are just different words for the same thing…You can draw single pixel dots using SpriteBatch…” [3]

Hmmm. I admit I'm no XNA expert, but surely the XNA 4 framework could have merely exposed the same contract as XNA 3 and performed the above for us behind the scenes?  This gets back to what I was saying about introducing new framework versions in general: by all means publish a new contract, but don't touch the old one.  Merely redirect it internally to achieve the same logical behaviour as the previous framework version.  If it's a performance thing, then I would have thought the makers of GPUs should concentrate on implementing realistic and fast shadowing technologies.  Just how often does one render a single pixel anyway, for this feature to have been dropped because of a rumoured GPU performance penalty?  Perhaps this is a plot to finally whack a nail in the coffin for us would-be space-sim developers?

Anyway, more red lettering was to follow.  Drawing sprites, setting up anti-aliasing, preparing effects and saving states are all completely different now, so, as in the Autumn days of MDX, it's time to throw away all your XNA 1, 2 and 3 books because they are about as useful as books on ODBC.

Looking back over the years, it seems that perhaps XNA has taken a very agile or Scrum-based approach (not necessarily a bad thing) with little thought of the future, which can be disastrous when one is making a software framework that will be utilised by third parties!  By all means use agile or Scrum; just don't be schizophrenic and change your mind all the time with regard to the API.

I wonder how OpenGL programmers have been treated?  Perhaps it’s time to start programming for the iPhone and iPad.

As one famous lady once said:

[image]

Not happy Jan!


[1] Apparently the reason was to support the Xbox 360.  When you consider that one must explicitly compile for the 360 and for Windows anyway, and the binaries are not compatible, allowing rendering inside a form on Windows should have been trivial, one would have thought.  And before you say "message pump", I'll just say one can play back a video in a window, so what's the difference?  QED.

[2] It's actually interesting to see that .NET is class-based and yet does not suffer anywhere near the problems that MDX-XNA has had in its short life.  Then again, C++ allowed the same thing; it just required discipline.

[3] XNA Creators Club Forums

“Not happy Jan” is copyright © Yellow Pages Australia, Sensis, Telstra.  All rights reserved. Used without permission.

The Emperor’s New Clothes – Affects Developers Too

…Two weavers who promise an Emperor a new suit of clothes invisible to those unfit for their positions or incompetent. When the Emperor parades before his subjects in his new clothes, [only] a child cries out, "But he isn't wearing anything at all!".  [Everyone else was too scared to say anything.] [3]

[image]

Something I have noticed about the software development industry is that people will go to extraordinary lengths to author the most well-structured software system.  This includes the utilisation of wonderful design patterns from the original Gang of Four in addition to contemporary additions; developers not only running test cases but creating them themselves in the form of NUnit or MSTest tests and adding them to their projects; the use of object-relational mapping (ORM) technologies; and more recently, the use of something supposedly new called inversion of control [+ dependency injection] (IoC/DI)[1].

The consequence of all of this is not just an improved working experience for the developer, but also software that is better structured, better designed, loosely coupled and obviously testable.  Developers have been wonderful in their "let's stop for a minute and do this better" mentality, and history has shown that all these new tactics have been quite successful, leading to well-rounded, reliable software.

However, making many of the above-mentioned technologies/patterns/practices work generally requires developers to do something quite disturbing: something that requires complicated procedures; something that is error prone; something that can be a security risk; and in some cases something that can lead to application instability.

I speak, of course, of the practice of developers hand-editing XML files for the purpose of configuring a technology or third-party system, be it a framework or a library.  XML, when used right, can be rather wonderful; I fully support it for scenarios such as data interoperability, transformations, SOAP messaging and so on.  I have no problem with a configuration tool or system saving application settings to XML, just so long as the user experience is not hand-edited-XML-first.  Contrary to widespread belief, XML is not human readable; I doubt your English-speaking grandmother would be able to decipher it.  Nor is XML self-correcting: just try deleting a '>' and see what happens.  This is why it should never be used for manual editing, regardless of skill.  I doubt anyone can claim 100% hand-editing without errors.

Libraries such as NHibernate require configuration before they can be used.  Fair enough, but forcing developers to manually tweak XML configuration files is not only time consuming, it is also error prone.  I believe there are now tools to generate the XML files.
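In fairness, NHibernate does offer a programmatic alternative to hand-edited XML via its Configuration class. A rough sketch; the connection string and dialect choice below are placeholder assumptions, not a recommendation:

```csharp
using NHibernate;
using NHibernate.Cfg;

public static class NHibernateSetup
{
    public static ISessionFactory Build()
    {
        // Every property that would normally live in hibernate.cfg.xml can be
        // set in type-checked code instead.
        Configuration cfg = new Configuration()
            .SetProperty(NHibernate.Cfg.Environment.ConnectionString,
                         "Server=.;Database=MyDb;Integrated Security=SSPI;") // placeholder
            .SetProperty(NHibernate.Cfg.Environment.Dialect,
                         typeof(NHibernate.Dialect.MsSql2008Dialect).AssemblyQualifiedName)
            .AddAssembly(typeof(NHibernateSetup).Assembly); // assembly holding the .hbm.xml mappings

        return cfg.BuildSessionFactory();
    }
}
```

A typo here is a compile error rather than a runtime surprise, which is rather the point of this post.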

IoC systems such as StructureMap also require configuration. Most uses I have seen are via hand-edited XML (though I believe runtime calls are also available). The issue here is that StructureMap configuration is:

  • error-prone
  • not type-safe
  • tedious
  • a potential source of application instability[2]
  • a potential security risk[2]

Then there are the technologies that take XML to heart literally, their authors being such fan boys of XML that they include it in the name of their system. The bizarre installer technology WiX and Microsoft's XAML are two examples.  WiX offers developers questionable value whilst at the same time opening up a whole new set of problems for installer developers:

  • error-prone
  • lack of a rich developer experience
  • tedious
  • significant increase in time investment for making an installer

Shame on you Visual Studio users, have you not seen the Setup project wizards?  Click a few buttons and away you go. You can even do custom build actions in type-safe .NET should the need arise!

Don't get me started on XAML. Whoever thought programming in a data-structure file format was a good idea has issues; hand-edited XSLT is confusing enough.  I get the impression that Microsoft wanted to release an alternative to Windows Forms development, but instead of providing a proper editor they just deployed XAML and said "…there you go! have fun!". We certainly got fooled into thinking hand-editing XAML is pretty neat.

Personally I feel WiX and XAML share many traits: they both represent a backward step for the developer experience; the crazy programming-in-a-data-structure mentality; and a lack of a good IDE tool.

On the subject of Microsoft, their original sin was to allow, or at least give developers the impression, that the creation, deployment and hand-editing of xxx.exe.config files is the norm.  This practice remains to this day, surprisingly, even after the Vista Development Guidelines stipulated that no files should be written to .\Program Files by the application after it has been installed (patches aside).  So you shouldn't be updating xxx.exe.config files there.  Yeah, I know there is the file virtualisation thing, but then again I can call WriteProfileString to write to Win.ini.  Just because you can do something does not mean that you should.

Microsoft followed this up by allowing .NET Remoting, and later WCF, configuration to be persisted, by default, to the application's xxx.exe.config file.  Again, it's bad form (see my prior post), because a user can easily change application settings with huge ramifications for your application. Case in point: WCF allows you to specify a transport provider as well as transport attributes. A user can change your finely-tuned, well-tested and approved TCP/IP binary, no-encryption SOAP message settings to, say, HTTPS, and suddenly your application's message size has grown considerably.  Not only that, but if the user indicated, consciously or by accident, WS-ReliableMessaging, it's no longer a one-way message but perhaps up to four or so!
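To illustrate how small the dangerous edit is: switching an endpoint from the tuned binary transport to a WS-* binding is a one-attribute change in the config file. The addresses and contract name here are invented:

```xml
<!-- Before: fast binary TCP transport, as tested and approved -->
<endpoint address="net.tcp://localhost:8080/orders"
          binding="netTcpBinding"
          contract="MyCompany.IOrderService" />

<!-- After an "innocent" edit: text-based SOAP over HTTP(S), with larger
     messages and, if reliable messaging is enabled, extra round trips -->
<endpoint address="https://localhost/orders"
          binding="wsHttpBinding"
          contract="MyCompany.IOrderService" />
```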

The innocent editing of a plain-view configuration file can have major ramifications not just for application behaviour but also for application performance!  A GUI configuration tool provides a rich user experience: it warns the user and usually has some form of online help.  An XML config file does not!

Some of the technologies I mentioned have tried to clean up their act; unfortunately, the damage has perhaps been done, regardless of the good intentions of follow-up tools in reducing the XML hand-editing.  People continue to hand-edit XML even with the knowledge of richer API counterparts and additional helper tools.

So I think, or perhaps hope, that developers are aware that providing complex configuration via hand-edited XML, or programming in XML-based data structures, is not really ideal.  It's a bit like the Hans Christian Andersen story, The Emperor's New Clothes.

 

Most of us know it's not really a good idea, but we don't do anything about it.

Until next time…

—————————————————-
[1] IoC/DI is nothing new; anyone familiar with COM Shell Namespace Extension plug-in development (which has pretty much not changed since Windows 95) will see similarities. Microsoft Management Console snap-ins are another example. I'm quite sure avid readers will find other examples that predate my computing lifetime.  IoC incidentally merely means the API calls into your code; a button click event is a good example.

[2] Because types and methods must generally be exposed publicly for StructureMap-style IoC/DI to work, and because most configuration is via an XML config file, a reasonably skilled malicious user can alter your config to call a different type or member, changing the behaviour of your system in an unforeseen way. e.g. you might have accidentally exposed a type or method publicly, allowing the malicious user to use that type instead. This type may offer similar functionality but could lead to a drastic outcome; perhaps it was for testing purposes only.

[3] wikipedia – http://en.wikipedia.org/wiki/The_Emperor%27s_New_Clothes