2012-04-02

A few things about MS QualityTools internals in VS2010

As I was previously searching for a fix for the IMessageSink problem, I was also tempted to check why the runner did not respond properly to commands like "Run tests in current context" or "Run tests in current class". It was very strange, because looking at the code of the runner, everything seemed to be in perfect order. The tests were discovered, then run, and the results were displayed - although in a somewhat distorted way: the Owner column held the class name, the ClassName column was empty, the FailureCount was not recorded at all, and so on.

It was obvious that something was not right. The case of Owner was trivial - even the original author warned that he couldn't get the ClassName filled properly, so he artificially injected the name of the type into the Owner, just to be able to see, group, and sort the tests.

I immediately thought that maybe Visual Studio relies on that very column, the ClassName, to navigate between the test and the code, and that maybe this is part of the reason why "Run in current context" does not work.

Overview

The code package responsible for handling the tests is Microsoft.VisualStudio.QualityTools. It dictates the architecture of the test plugins, and so also of the xUnit test runner:

  • a class derived from Package, which defines the plugin for VS
  • a number of classes derived from TestElement, which act both as custom test definitions and as test instances
  • a number of classes derived from Tip, which are mostly responsible for test discovery
  • a number of classes derived from BaseTuip, which are actually windows/panels, used for example to display custom test results
  • and so on...

The QualityTools package also provides many ready-to-run features for the MSUnit library, and this is why MSUnit-based tests are seamlessly integrated and run beautifully from within Visual Studio immediately after installation.

The original xUnit runner provided all of the above and acted as a bridge between the QualityTools architecture and the actual test discoverer and test runner provided by xUnit itself. It also defined services and a panel/window class, but they were not really used (correct me if I'm wrong). The runner uses a tricky exploit: although it defines a custom TestElement and handles the run on its own, it does practically nothing to display the custom results it obtains. Instead, upon completion of the test, it translates the series of recorded xUnit TestResult objects into QualityTools UnitTestResult objects of the proper kind, and returns those instead. The UnitTestResult is already paired with an existing MSTest result viewer. This way, the results are displayed as if they had been generated by MSTest unit tests.

It sounds easy, but keep in mind that UnitTestResult is an internal class. There's no easy or pretty way to do it. The actual implementation employs some "dirty" tricks with Reflection and Linq. They are interesting, but outside of this article's scope.

The TIP: Discovery of Test Elements

As I mentioned, the Tip object (which the Package should register) is responsible for test discovery. This is done by overriding the base Load method. This method is given an assembly name and some project information, and is expected to return a collection of test definitions, the TestElements.
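
Stripped of the QualityTools types, the body of such a Load boils down to "take the assembly path, return the tests found in it". Here is a minimal, plain-Reflection sketch of that idea; scanning for an attribute named "FactAttribute" by name is my simplification - the real runner defers to xUnit's own discoverer and then wraps the findings in TestElements:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

static class DiscoverySketch
{
    // Enumerates "Namespace.Class.Method" names of public instance methods
    // carrying an attribute called FactAttribute - a stand-in for the real
    // xUnit discovery that Tip.Load delegates to.
    public static IEnumerable<string> FindTestMethods(string assemblyPath)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        return from type in assembly.GetTypes()
               from method in type.GetMethods(BindingFlags.Instance | BindingFlags.Public)
               where method.GetCustomAttributes(true)
                            .Any(a => a.GetType().Name == "FactAttribute")
               select type.FullName + "." + method.Name;
    }
}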

One may notice that the Tip constructor is given an instance of an ITmi object. I suppose the name is shorthand for something like "test management interface". In fact, this is the outer brain and the core element of the QualityTools package. The TIP that a test plugin should implement is merely a plugin for that TMI object. The ITmi interface defines a dozen very handy methods, should anyone ever have to inspect the test lists manually.
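
For illustration, here is a tiny sketch of that kind of manual inspection. GetTests(), Id and Enabled appear in the decompiled search loop quoted later in this post; the Name property and the namespace in the using directive are my assumptions:

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.Common;

static class TmiInspectionSketch
{
    // Dumps every test currently known to the TMI - the ITmi instance is the
    // one handed to the Tip constructor.
    public static void DumpLoadedTests(ITmi tmi)
    {
        foreach (ITestElement test in tmi.GetTests())
        {
            Debug.WriteLine(string.Format("{0} [{1}] enabled={2}",
                test.Name, test.Id, test.Enabled));
        }
    }
}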

One notable example: whenever a test project is rebuilt, the Tmi.LoadTests method is called, which in turn calls LoadTestsFromTipsHelper, which invokes the Load methods of all registered Tip instances.

I noticed that the TestElement base class defines a virtual FillDataRow method. I instantly thought that this is the point where the ClassName should be filled, and tried to do that - it was impossible. The rows to be filled simply did not contain such a column!

If I recall correctly, when the TMI is first initialized, it creates a dummy TestElement instance, inspects it for displayable properties, and then creates the initial columns for them. This is a closed mechanism; it cannot be extended. Later, when new TestElements are loaded, they are displayed in those very same tables, and only those base columns can be filled.

FYI: the core method of these mechanisms is AddRowAndColumnsToTable, and the initial columns are initialized as follows:

Type of the table to display           Source of the column definitions
Tmi.StorageElementType.Test            <- (IVisiblePropertyProvider)dummyTestElement
Tmi.StorageElementType.Category        <- (IVisiblePropertyProvider)testListCategory
Tmi.StorageElementType.RunConfig       <- (IVisiblePropertyProvider)runConfiguration
Tmi.StorageElementType.ResultCategory  <- (IVisiblePropertyProvider)testListCategory

Interestingly, at that point there are no columns like 'Namespace', 'ClassName', 'StackTrace', etc. Please note that all of this occurs when the test project is built. No one really wanted anything to be displayed yet!

Thus, another very interesting internal workflow is the initialization of the test list window. ControllerProxy.InitializeTestRun invokes a very important method, Tmi.CreateInitialNotRunResults. This method enumerates all TestElements and inspects their corresponding Tips in order to determine what displayable columns it should prepare. For each Tip, an empty, dummy Common.TestResult (the base class of all test results) is created and passed to the Tip's MergeResults.

This is the second most important method after Load. It is responsible for gathering partial results into one final TestResult, and is meant to return a proprietary result subclass, relevant to the actual test type in question. Thus, in contrast to the earlier initialization, the TMI now inspects not the dummy result object, but the object returned from MergeResults: it iterates over its VisibleProperties and creates new columns if necessary.

It may be noteworthy that the VisibleProperties go through a small filtering step. The method VisualPropertyObtainer.IsSupportedByProduct tests for any interfering licensing attributes that may have been applied to the test plugin.

This process of building the columns of the test view is important, because it is what initiated my analysis: the ClassName column was empty. Remember that the original code of the xUnit runner used a trick and generated the original, internal UnitTestResults. However, it did not implement the MergeResults method in its Tip and did not return a correct object (it used the base implementation!), so the test view did not know that it needed to inject data columns for such an object!

Seeing this, the solution was immediate: I played with VisibleProperties, FillDataRow and MergeResults, and also threw in my own TestResult subclass instead of hacking the internal one - and there it is! The 'ClassName' column got filled properly. Moreover, it seemed possible to completely control what is displayed and where.

However, 'Run from Context' was still not working. The VS QualityTools does not use the information from the test list rows to navigate from the code to the tests.

The WTF

Aside from test discovery and test list building, there are many other interesting workflows inside, for example Controller.ControllerPluginManager.LoadPlugin, which inspects the TestElement.ControllerPlugin property. I had no time to check it, but it looks very interesting, as the 'Controller' sits a little above the internal TMI object in the hierarchy :)

The TIP contains some extensible parts, but sadly most of the building blocks are in fact internal/sealed/closed and unmodifiable. One of them is the ... yep, the code-to-test mapping.

And what's more, it is essentially broken - at least from my point of view as a plugin author.

When the user presses the 'Run from Context' button or menu item, a command of the same name is run, bound to QualityToolsPackage.OnRunTestsFromContext[1], which in turn relays much of the initial work to CodeModelHelper.FindActiveCodeElement[2]. The latter method investigates the current text selection and returns the CodeElement that most closely relates to it. If the user has made no actual text selection, the cursor position is used. This method works properly; OnRunTestsFromContext receives the code element object, checks the mapping, sometimes rebuilds the projects on the fly, and finally passes the code element to another hard-working method, QualityToolsPackage.GetTestIdsFromCodeElement. This method is the core reason for all the problems with 'Run from context'.


[1] QualityToolsPackage is in Microsoft.VisualStudio.QualityTools.TestCaseManagement.dll

[2] CodeModelHelper is in Microsoft.VisualStudio.QualityTools.CMI.dll

The signature of QualityToolsPackage.GetTestIdsFromCodeElement has a few important parts:

  • parameter: context, an enum of type QualityToolsPackage.RunTestsContext
  • parameter: element, the code element that defines the 'position'
  • out parameter: runAllTests, a bool
  • return value: a List of the test IDs matched to the context

The enum is defined as {Default, Disabled, Class, Namespace, All} and provides an abstraction over all the 'Run from ...' commands, allowing the code to be reused for all of them, at least:

  • "Run from context" -> Default
  • "Run from namespace" -> Namespace
  • "Run from class" -> Class

The element parameter is inspected for its 'ElementType', which is another enum, namely CodeModelHelpers.CodeElementType, defined as {Assembly, Namespace, Class, Member}. This represents all of the 'scopes' that the test runner can distinguish at the cursor position.

The runAllTests out-parameter is a flag, defaulted to true, that is cleared only if at least one valid TestElement is found during the search. This is why the test runners really do run all the tests in the solution instead of the ones the user wanted to run contextually. It is a brutal fallback, apparently implemented with user-friendliness in mind - the implementor probably didn't want to irritate the user with messages like "sorry, I did not know which Context you meant".

GetTestIdsFromCodeElement switches over the ElementType and performs the following steps:

  • Assembly:
    • simply returns nothing; runAllTests stays true, so the engine runs all tests from the assembly.
  • Namespace:
    • reads the namespace from the code element, and immediately falls back to the test search loop
  • Class:
    • generates a GUID based on a seed calculated from the string Namespace + "." + ClassName
    • asks the TMI to find a TestElement with a TestID exactly equal to the just-generated GUID
    • if a test is found, returns it as the result
    • if not, records the Namespace and ClassName and falls back to the test search loop
  • Member:
    • generates a GUID based on a seed calculated from the string element.FullName
    • asks the TMI to find a TestElement with a TestID exactly equal to the just-generated GUID
    • if a test is found, returns it as the result
    • if not, records the Namespace and ClassName and falls back to the test search loop

The steps above are simplified a bit for readability. The 'Context' parameter must be taken into consideration, too:

  • The hashing is performed only when Context == Default; in the other cases only the search loop is executed
  • The ClassName is recorded only if Context != Namespace; thus, if the user wanted 'Run from Namespace', all the fine-grained searches are skipped

Assuming that none of the quick solutions succeeded, a search loop is performed. Actually, there are two: one for the class scope and one for the namespace scope, but they are almost identical. The algorithm is as follows:

var found = new List<TestId>();
// className and namespaceName hold the values recorded in the steps above
if (!string.IsNullOrEmpty(className))
{
  foreach (ITestElement testElement in tmi.GetTests())
  {
    // note: a hard cast to the internal UnitTestElement class
    UnitTestElement unitTestElement = testElement as UnitTestElement;
    if (unitTestElement != null
      && unitTestElement.ClassName.Equals(className, StringComparison.Ordinal)
      && unitTestElement.Namespace.Equals(namespaceName, StringComparison.Ordinal))
    {
      runAllTests = false;
      if (testElement.Enabled)
        found.Add(testElement.Id);
    }
  }
}

While this is absolutely correct from the implementation point of view, please note the cast to UnitTestElement. This class is internal and belongs to QualityTools and MSUnit. The ClassName and Namespace properties are not defined in the base TestElement class, because that class sits at a higher abstraction level and may be used for tests that have no such notions. Therefore, an interface, IUnitTestInformation, was introduced, and this interface defines the code location properties: FullName, Namespace, ClassName and MethodName. However, the implementor accessed the properties not via the interface but by a direct cast to the internal class, and that essentially cancels any further attempt of ours to enhance our TestElement implementations.
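
Just to make the complaint concrete, here is how the same loop could have looked had it gone through IUnitTestInformation instead of the internal class (and had that interface been public) - a hypothetical rewrite of mine, not code that exists anywhere in QualityTools:

using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.Common;

static class InterfaceBasedSearchSketch
{
    public static List<TestId> FindTests(ITmi tmi, string className, string namespaceName, ref bool runAllTests)
    {
        var found = new List<TestId>();
        foreach (ITestElement testElement in tmi.GetTests())
        {
            // any TIP's TestElement could implement this interface...
            var info = testElement as IUnitTestInformation;
            if (info != null
                && info.ClassName.Equals(className, StringComparison.Ordinal)
                && info.Namespace.Equals(namespaceName, StringComparison.Ordinal))
            {
                runAllTests = false;
                if (testElement.Enabled)
                    found.Add(testElement.Id);
            }
        }
        return found;
    }
}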

Please note that your custom TIP actually can perfectly mimic the hashing algorithm, and then it will properly "run-from-context" a single test method or test property, but it will still fail when asked to run from a class context or a namespace context, as those are handled solely by the search loops.
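
If you want to try mimicking it, the idea (as far as I could tell from the decompiled code) is to hash the seed string and take the first 16 bytes of the hash as the GUID. A minimal sketch of that scheme follows; the exact hash function and string encoding used by QualityTools are my assumptions here, so verify them against the real implementation before relying on the match:

using System;
using System.Security.Cryptography;
using System.Text;

static class TestIdSketch
{
    // Derives a deterministic GUID from a seed such as "Namespace.ClassName"
    // or element.FullName, so it can be compared against a TestElement's TestID.
    public static Guid GuidFromName(string seed)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(Encoding.Unicode.GetBytes(seed));
            byte[] guidBytes = new byte[16];
            Array.Copy(hash, guidBytes, 16);
            return new Guid(guidBytes);
        }
    }
}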

As a side note: the IUnitTestInformation interface is internal, too. I know it's April Fools' today, but I'm not joking. They literally started with a very extensible architecture, only to shut the most useful bits away in the final lines, at least from a unit testing point of view. While conspiracy lovers can surely see here an ill-intentioned marketing attempt to promote MSUnit over plugins, I call it a bug.

The solution

Part of the solution was already in place. I've already mentioned that the original author of the plugin found a shortcut: instead of implementing the TestResults and the results viewer windows, he just properly translated the results into the internal objects.

So, why not do the same now?

It turns out that while the test package registration really requires you to register a new test type (or else you will not be able to register your new TIP instance), it does not actually care at all whether the registered TIP returns tests of that type or of any other type! Let me say that again: the custom TIP may return whatever TestElements it likes, with no respect to the registered test type. Please note that the TestElement object has a TestType property, and the TMI.GetTip method uses it to obtain the TIP for a test. That means that if our custom TIP generates some TestElements with a TestType pointing to another TIP, then the tests will be handled and processed by that other TIP. This is a mechanism very similar to the Adapter property.
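
To visualize the routing, here is a self-contained paraphrase of the mechanism described above: the TMI keeps its TIPs keyed by test type and looks the TIP up from each element's TestType, which is why elements loaded by one TIP can be handed over to another. This is my illustration of the concept, not the decompiled Tmi.GetTip:

using System;
using System.Collections.Generic;

class TipRoutingSketch
{
    private readonly Dictionary<Guid, object> tipsByTestType = new Dictionary<Guid, object>();

    public void RegisterTip(Guid testTypeId, object tip)
    {
        tipsByTestType[testTypeId] = tip;
    }

    // Routing by the element's declared test type, regardless of which TIP loaded it.
    public object GetTipFor(Guid elementTestTypeId)
    {
        return tipsByTestType[elementTestTypeId];
    }
}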

That means that if TIP implementors want a quick and seamless 'Run from Context' integration, they should abandon implementing their own TestElement subclasses altogether. Let your Tip.Load return the original, internal UnitTestElements and everything automagically starts working properly. Yeah, more dirty Reflection work.

The constructor of UnitTestElement is very simple (much simpler than those of the UnitTestResult class), but requires another internal class, TestMethod (not to be confused with the TestMethod from the xUnit library - although they are almost identical!). Aside from that, fortunately, there are almost no caveats related to manually constructing such objects.

Except for three: the constructor does not set the CodeBase property, nor the Storage property, nor the ProjectData property!

Of course, all of them must be set, or else TestElement.IsValid will turn false and the TMI will ignore the test. Both Storage and ProjectData are defined in the base TestElement class and are easily accessible, but CodeBase is defined by the internal UnitTestElement and can be set only via Reflection. The last thing to note is that setting these properties manually causes the IsModified flag to be lit on the TestElement, so it should be manually cleared afterwards. The original code from QualityTools does it in just the same way :) Just look at the end of

Microsoft.VisualStudio.TestTools.TestTypes.Unit.VSTypeEnumerator.AddTest

By the way, this method is a beautiful reference on how to spoof, erm, I mean set up a UnitTestElement instance. In it you will find all the details about the meaning of the various method and class attributes that can be defined on an MSUnit test, and how they are mapped to the UnitTestElement's configuration. The method handles .NET, .NET CF, and ASP.NET, so it really is worth a look.
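
For completeness, here is the kind of Reflection "tunnel" that the three properties mentioned above call for. This is a generic sketch, not the actual InternalAccess.cs from the repository; the "CodeBase" and "IsModified" member names in the usage comments are what the decompiled code suggested to me, so double-check them against VSTypeEnumerator.AddTest:

using System;
using System.Reflection;

static class ReflectionTunnelSketch
{
    // Sets a property or field (public or not) on an instance whose type we
    // cannot name at compile time because it is internal.
    public static void SetMember(object target, string memberName, object value)
    {
        const BindingFlags flags =
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;

        PropertyInfo prop = target.GetType().GetProperty(memberName, flags);
        if (prop != null && prop.CanWrite)
        {
            prop.SetValue(target, value, null);
            return;
        }

        FieldInfo field = target.GetType().GetField(memberName, flags);
        if (field == null)
            throw new MissingMemberException(target.GetType().FullName, memberName);
        field.SetValue(target, value);
    }
}

// Hypothetical usage once the UnitTestElement instance exists:
//   var element = (TestElement)spoofedUnitTestElement;   // the base class is public
//   element.Storage = assemblyPath;                      // accessible on TestElement
//   element.ProjectData = projectData;                   // accessible on TestElement
//   ReflectionTunnelSketch.SetMember(element, "CodeBase", assemblyPath);
//   ReflectionTunnelSketch.SetMember(element, "IsModified", false);  // clear the flag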

Another way to support 'Run from Context' properly would be to implement one's own 'Run from Context' command handler. I have not tried it, as I consider it completely insane given the amount of additional work. Also, I suppose that the original handlers would have to be unregistered first, and that may be a little tough. If anyone needs that, here's the registration of the original handlers:

// from the class QualityToolsPackage
CommandHelper.AddCommand(this.m_menuService, new EventHandler(this.OnRunTestsFromContext), new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.RunTestsFromContext1);
CommandHelper.AddCommand(this.m_menuService, new EventHandler(this.OnRunTestsFromContext), new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.DebugTestsFromContext);
CommandHelper.AddCommand(this.m_menuService, new EventHandler(this.OnRunTestsFromContext), new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.RunTestsInClass);
CommandHelper.AddCommand(this.m_menuService, new EventHandler(this.OnRunTestsFromContext), new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.DebugTestsInClass);
CommandHelper.AddCommand(this.m_menuService, new EventHandler(this.OnRunTestsFromContext), new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.RunTestsInNamespace);
CommandHelper.AddCommand(this.m_menuService, new EventHandler(this.OnRunTestsFromContext), new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.DebugTestsInNamespace);
CommandHelper.AddCommand(this.m_menuService, new EventHandler(this.OnRunTestsFromContext), new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.DebugAllTests);
CommandHelper.AddCommand(this.m_menuService, new EventHandler(this.OnRunTestsFromContext), new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.RunAllTests);
// and a bit later:
MenuCommand command1 = (MenuCommand) new OleMenuCommand(new EventHandler(this.OnRunTestsFromContext), (EventHandler) delegate {}, new EventHandler(this.QueryStatusRunTestsFromContext), VSEqt.Commands.RunTestsFromContext2);
command1.Enabled = true;
this.m_menuService.AddCommand(command1);

The cherry on the cake: a better Runner

Of course, I've incorporated the solution into the xUnit test runner, and it works great! Except for the Reflection code, the whole runner has been simplified significantly, and quite large sections of code were removed. Notably, the window, the service interface and the TUIP were removed completely. They seemed unused before, and now they are completely unnecessary, as the pair of internal UnitTestElement/UnitTestResult classes simply uses the original UI from QualityTools/MSUnit. I've refactored the Reflection code into a separate class, InternalAccess.cs, for easy copying, in case anyone wants to incorporate the UnitTestElement/UnitTestResult generation into their own plugin (http://nunitforvs.codeplex.com/workitem/32394 maybe?).

Summarizing all that I've said - let's review what's left in the runner!

  • XUnitTestPackage, that registers a test type and the TIP
  • XUnitTestTip, paired to the test type, but ignoring it completely
  • XUnitTestAdapter
  • XUnitTestRunner
  • and an XUnitDummyTest
  • InternalAccess, for the Reflection stuff

The XUnitDummyTest is the previous XUnitTest, which has now lost all of its implementation. It is just a hollow stub, kept only because the TIP registration needs it.

The XUnitTestPackage lost over 60% of its implementation and is now a hollow class, existing solely for the purpose of carrying the [RegisterTestTypeNoEditor] attribute, just to register the TIP.

The XUnitTestTip now just asks xUnit for a test list and builds a list of UnitTestElements.

The XUnitTestAdapter is almost unchanged; it still relies on the XUnitTestRunner to execute the tests, and later constructs the UnitTestResults from the actual results.

The XUnitTestRunner has hardly changed at all, either. The only adjustment comes from the removal of the custom TestElement: the full class name must now be known along with the name of the method to run, and both must be extracted via Reflection from the UnitTestElement that is now in use.

For now, except for the Reflection tunnels, the code is actually minute.

The current state of the plugin is available at:

https://github.com/quetzalcoatl/xvsr10

Still, please treat it with caution: while it runs beautifully on my machines, and the code is tiny so there aren't many klocs left for bugs to hide in, this is still a rather fresh 'product'.
