3.10.09

Eurotunnel...

The hiatus is over; in death is renewal and the realisation that we are indeed mortal.
Anyway, back to the subject of this blog: software.

At present we face a number of software engineering challenges; amongst these is the centralisation of field application logs (FAL). SaaS appears to be a natural solution to this problem. The service interface would ideally accept a log message and persist it for later consumption by development and support staff.
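
As a rough illustration (a sketch of my own, not a design; every name below is an assumption), the service contract might start out as simply as:

using System;

// Sketch of a centralised FAL service contract; ILogService and LogEntry are illustrative.
public class LogEntry
{
    public DateTime TimestampUtc { get; set; }
    public string Application { get; set; }
    public string Severity { get; set; }
    public string Message { get; set; }
}

public interface ILogService
{
    // Accepts a log message from a field application and persists it
    // for later consumption by development and support staff.
    void Persist(LogEntry entry);
}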

19.8.09

Some thoughts on home-grown code gen

The context: I have some XML that needs to be converted to C# class files. The XML is attribute-centric.
Possible implementations: transform using XSLT; tokenize the XML and use the tokens to populate a template; use the CodeDom API; use a code-gen tool.
Preferred option: the CodeDom/tokenization strategy.
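
To make the preferred option concrete, here is a minimal CodeDom sketch (my own illustration, not production code) that emits a C# class for an attribute-centric XML element, one public field per attribute:

using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using System.Xml.Linq;
using Microsoft.CSharp;

// Sketch: emit a C# class per XML element, one public string field per attribute.
public static class XmlToCSharp
{
    public static string Generate(XElement element)
    {
        var type = new CodeTypeDeclaration(element.Name.LocalName) { IsClass = true };

        foreach (var attribute in element.Attributes())
        {
            type.Members.Add(new CodeMemberField(typeof(string), attribute.Name.LocalName)
            {
                Attributes = MemberAttributes.Public
            });
        }

        var ns = new CodeNamespace("Generated");
        ns.Types.Add(type);
        var unit = new CodeCompileUnit();
        unit.Namespaces.Add(ns);

        using (var writer = new StringWriter())
        {
            new CSharpCodeProvider().GenerateCodeFromCompileUnit(unit, writer, new CodeGeneratorOptions());
            return writer.ToString();
        }
    }
}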

Applying Scrum in a non-software development context

My experience of agile project management is confined to the field of software development, but as luck would have it, I have an opportunity to apply agile in an industry sector where brawn comes before brains. Well, that's not entirely correct, but I guess this domain thrives on a command-and-control type structure. I will definitely need to adapt the methodology to take into account the cultural differences; after all, we are dealing with complex adaptive systems...ahem

Oh so this is SAP

Recently started working with SAP as a data model for our .NET UI. All very exciting at the moment. Objects are modelled using a tool and each instance of an object is stored in a construct called a table. Associations and aggregations are also modelled using the tool. As someone once said, "it's déjà vu all over again". Based on these observations, I posit that SAP development exemplifies DDD. The ubiquitous language is developed as the model is formulated; dependencies amongst objects are also modelled accordingly. What I am yet to encounter is the notion of value objects. Entities are an intrinsic part of object definition. Blogged from my Android...

That's your spec sir...my first foray into the world of haptics

What we have here is a meal-planner structure which can be digitised to form a module in our Home Information Management System (HIMS). In this era of the Smart Meter and the Media Center, there is definitely a place for HIMS.

26.7.09

220 Downloads later…

A while ago I put together a little library that determines the internal rate of return for a series of cash flows. This is a problem from the accounting domain, and the library was prompted by the inadequacy of the features in Excel. I did not want to haul the whole of Excel into the application I was building; I tend to stay clear of Interop if I can help it.

Well, the short of it is that the lib averages 1.5 downloads per day. Not much for 15 minutes' work, but it has led me to think that perhaps there is a larger market for reusable math/scientific libraries. How about a DNA sequencing lib?
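
For context, the heart of such a library is small. A minimal sketch of the idea (not the published lib itself) is IRR via Newton-Raphson on the NPV function:

using System;

public static class InternalRateOfReturn
{
    // Finds the rate at which the net present value of the cash flows is zero.
    public static double Calculate(double[] cashFlows, double guess)
    {
        var rate = guess;
        for (var iteration = 0; iteration < 100; iteration++)
        {
            double npv = 0, derivative = 0;
            for (var t = 0; t < cashFlows.Length; t++)
            {
                npv += cashFlows[t] / Math.Pow(1 + rate, t);
                derivative -= t * cashFlows[t] / Math.Pow(1 + rate, t + 1);
            }
            var next = rate - npv / derivative;
            if (Math.Abs(next - rate) < 1e-9)
                return next;
            rate = next;
        }
        throw new InvalidOperationException("IRR did not converge.");
    }
}

So Calculate(new double[] { -100, 60, 60 }, 0.1) comes out at roughly 0.13, i.e. an IRR of about 13%.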

Now clean it up…

In our attempt to clean up software engineering and make it a more rigorous profession, I assert that testing is one tool that will ensure the discipline produces consistent results and repeatable quality. Testing should be at the forefront of a developer's mind, but in most cases it is an afterthought. I suppose this is a reflection on the human condition: we are forever optimists, hoping that our artefacts will stand the test of time.

Well, I don't live on hope, and for me life is a set of precise constructs. Gravity always wins and code breaks. So let's get test-infected.

Writing good tests assumes:

  • A knowledge of the domain or access to a subject matter expert – can be the product owner
  • An adequate test framework
  • A testable architecture

As someone once said, "anyone can write a program; it takes a disciplined developer to write code with tests".

10.7.09

Balancing the risks of tool switching – a heuristic

When/why should a team adopt/change its toolset?

  • If the current toolset is inadequate and results in a loss of productivity
  • If the new toolset is a ground-breaking innovation and not merely a derivation and solves the business problem better than the incumbent
  • If there are compliance/regulatory constraints that mandate the change – for example Sarbanes-Oxley.
  • If the new tools will give you significant competitive advantage in the market place, otherwise the costs of learning/deploying the tools will erode marginal benefits

When should a team not adopt/change its toolset?

  • Because it will look good or make the developers appear smarter when included on their CVs – remember, the customer has to deal with the mess once you have left the site

The day we overloaded the scrum with issues that were not part of the “triumvirate”

The context: A distributed team with very minimal inter-personal contact. Most conversations are by electronic means.

The scrum started out with each engineer being asked the usual trio of questions. The expected responses were given, and then a tangential conversation about the best approach to take for a new project took root. Big mistake! Before long we had a virtual bun-fight that risked creating a permanent schism within the team.

The lesson to learn from this saga is that face-to-face sessions are best when discussing issues bordering on religion. Once a jihad has taken root within the team, delivery of quality software will be impossible because the team is working against itself. As a facilitator it is tempting to give additional bandwidth to the groaners and the loudest shouters, but this has to be weighed against the greater need of the team to work as a cohesive unit.

1.7.09

We have survived the death march…what’s next

After the intense and somewhat heroic efforts by the team, a retrospective is definitely apropos. The key lessons learnt are:

  1. Scrum definitely works when used in a chaordic context. The seemingly ritualised trio of questions – what did you do yesterday, what do you plan to do today, what are the impediments to progress – focuses the development team on what needs to get done.
  2. Working software is the ultimate measure of progress on any software project. If it's buggy, you will have trouble gaining the trust required for acceptance and sign-off.
  3. Short feedback loops via daily stakeholder meetings help focus the development team on what needs to get done
  4. Having a developer who also happens to be a subject matter expert will propel the team further.
  5. Don’t change the domain model if it’s not broken. A distinction has to be made between purely technical changes and changing the semantics of the model.
  6. Regression issues will undermine the team and erode user confidence
  7. Make your tech choices beforehand and adhere to them
  8. Beware of feature envy - http://c2.com/cgi/wiki/Wiki?FeatureEnvy
  9. Keep the testers close and the customers closer
  10. Retrospective testing of code is much harder than TDD.

11.6.09

A severe case of coder's block

With an impossible deadline on my back and a tangle of Excel spreadsheets as my spec, I punch the keyboard, but the juice fails to flow in the IDE. The first test says nothing about the problem I am solving. The coffee is insipid; the iPod sounds like a broken digital record…perhaps the copy-and-paste coders are on to something…but I let that thought pass. Why bother with reuse if they refuse to use my API? Do I detect NIH?

Making a case for dogmatic Extreme Programming

Refactoring is now a common term in the developer's vocabulary. However, what most developers call refactoring is really hacking. Making a change to the code in the hope that it will work is somewhat analogous to smoking a cigarette whilst filling up your petrol tank. Both will inevitably end in tears. And this is where the dogma kicks in: it is not really refactoring unless you have a safety net of automated tests to verify that the state of the universe has not changed since you last updated that pesky connection string. A bad test is better than no test at all. I guess that leads on to my next topic…false positives and false negatives. But for now, get testing and buy a diesel.

9.6.09

Ok Mr. Customer…your software is late; what are you gonna do now?

How do projects get late? Well, as Brooks says in the seminal Mythical Man-Month, "projects get late one day at a time". The day the developers spent listening to that lousy presentation or fixing their development environments instead of delivering features, the project was getting late. The day the developers spent chasing requirements because the product owner could not be bothered, the project was getting late. Kind of reminds me of the boiled-frog metaphor.

Scrum and daily releases mitigate this risk, but they can only succeed in an enabling context.

No guts no glory…

Many enterprises pay lip service to agile. They wax lyrical about agility, but the minute one practises the principles that espouse agility, they run for cover in the command-and-control cave. This cavern contains the usual paraphernalia of document-heavy processes, bottlenecked decision throughput and the waterfall methodology. Funny really, but not surprising, as some corporate types are quick to latch on to the latest buzzwords without understanding the true essence of the discipline.

As they say, "talk is cheap". The agile acid test has got to be: "if it talks like agile, walks like agile and smells like agile, then it is agile". A form of agile duck-typing if ever there was such a thing.

23.5.09

The Last Mile

How do you walk your software product through the wilderness of dumb users, dodgy machine builds and missing dependencies?

You want to reduce your support footprint and the dysfunction of dealing with false positives (i.e. the "it's your software, not my machine" syndrome). If you have control of the deployment landscape, then a standardised machine build and knowledgeable first-line support come in handy. Otherwise, online user forums and comprehensive user documentation will help share knowledge of workarounds, gotchas and technical voodoo. The last thing you want is for the development team to be constantly building patches to support all machines under the sun. Doing this robs the team of the precious time required to develop new features and enrich the product's functionality.

Time and Tide and Software releases…

My team releases software every week regardless of the political landscape or the inefficiencies of other departments. Ours is a disciplined and rigorous approach; we aim to be proactive and to keep the momentum forward, never backward. After all, this is agile, and we embrace the iterative and incremental nature of our game. To the gentlemen from the hydro-methodology this does not make sense: surely you should only release when the business says so and when the business is ready for your release. If we did that, then we would be constrained by the same inefficiencies that plague other teams. And that ain't agile. So I suppose time and tide and software releases wait for no man.

2.5.09

Refactoring to a fluent configuration interface

The starting point was a configuration class that had the following definition:
public const decimal AdultFactor = 6;
public const decimal ChildrenFactor = 3;
public const decimal GasKwHMinEstThreshold = 4000;
:
These parameters are always used together in the context of a calculation for gas consumption. The client code uses these config values in the following fashion:

calculatedUsageKw = AdjustUsageForRadiators(calculatedUsageKw, GasConfiguration.AdultFactor,
                                            GasConfiguration.ChildrenFactor,
                                            GasConfiguration.RadiatorFactor,
                                            GasConfiguration.PersonsAtPropertyFactor);
I wanted to give the setting of the gas consumption parameters a fluent feel, in the following style:
public static IFluentConfiguration Create()
{
    var fluentConfig = new GasConfigurationFluent();

    fluentConfig.SetAdultFactor(6).SetChildrenFactor(3)
        .SetGasKwHMinEstThreshold(4000).SetGasPointsMinThreshold(12)
        .SetGasPointsToKwHConvFactor(400).SetGasWaterHeatingAmount(10)
        .SetHobGasCooker(4).SetMainRoomFireOnGas(16)
        .SetPersonsAtPropertyFactor(2).SetRadiatorFactor(5);

    return fluentConfig;
}

The client code accessing the fluent configuration interface was refactored to:

private decimal? AdjustUsageForRadiators(decimal? calculatedUsageKw, IFluentConfiguration fluentConfig)

The GasConfigurationFluent has setters that take the following general approach:
public IFluentConfiguration SetAdultFactor(decimal? adultFactor)
{
    AdultFactor = adultFactor;
    return this;
}
And yes, my setters have a return value, which is in effect the current context, i.e. my fluent configuration. After applying this refactoring, a few things became apparent:
1. The method chaining enforces the fact that these parameters exist in the same context and provides a somewhat cohesive view of my configuration interface. The setting of the configuration values is much more concise.
2. The cohesive nature of this configuration hints at the fact that a DSL (Domain Specific Language) could possibly be built around the calculation of gas consumption.
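
For completeness, the contract behind this might look something like the sketch below; the original interface isn't shown in this post, so the member list is inferred from the chained setters above:

public interface IFluentConfiguration
{
    decimal? AdultFactor { get; }
    decimal? ChildrenFactor { get; }
    decimal? RadiatorFactor { get; }
    decimal? PersonsAtPropertyFactor { get; }

    IFluentConfiguration SetAdultFactor(decimal? adultFactor);
    IFluentConfiguration SetChildrenFactor(decimal? childrenFactor);
    IFluentConfiguration SetRadiatorFactor(decimal? radiatorFactor);
    IFluentConfiguration SetPersonsAtPropertyFactor(decimal? personsAtPropertyFactor);
    // ...the remaining Set* methods follow the same pattern
}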

1.5.09

Wide laps

If the benefits of having a wide sturdy lap were ever in question, then look no further. On the left is my company laptop firmly glued to my thigh. The other limb is supporting my XPS M1330; looking on in the background is the Inspiron 1525.

30.4.09

Yesterday’s scrum

[tag cloud of yesterday's scrum, created at TagCrowd.com]

23.4.09

Dinner @ the inn

Yesterday was St. George's Day, and in celebration of the patron saint of England, I went to the local inn where they had laid out the very best in English cuisine. This ranged from the curiously named toad in the hole to the delicious shepherd's pie.


19.4.09

When Paradigms Collide – Sorting Spatial Data using the List Metaphor

Given a spatial dataset that comprises physical locations that must be traversed efficiently, what’s the simplest solution that would accomplish this task?
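
One candidate answer, sketched purely as an illustration (the Location type and the distance metric are my assumptions): hold the locations in a plain list and sort them with a comparer, e.g. nearest-first from the current position.

using System;
using System.Collections.Generic;

public class Location
{
    public double X { get; set; }
    public double Y { get; set; }
}

public static class RoutePlanner
{
    // Nearest-first ordering from a given origin: crude, but it is the list
    // metaphor at its simplest.
    public static void SortByDistanceFrom(List<Location> locations, Location origin)
    {
        locations.Sort((a, b) => Distance(origin, a).CompareTo(Distance(origin, b)));
    }

    private static double Distance(Location from, Location to)
    {
        var dx = to.X - from.X;
        var dy = to.Y - from.Y;
        return Math.Sqrt(dx * dx + dy * dy);
    }
}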

Trouble booting from CDROM on Sunblade 100.

Possible causes:

  • Corrupted media
  • Faulty CDROM drive
  • Non-bootable media
  • SunOS 5.9 limitations booting Ubuntu/Solaris 10

A lean view of scrum

The sources of waste are:

  • Third-party delays in providing inputs required for the upcoming sprints
  • Stories with insufficient detail to ensure that the team proceeds without interruption
  • Scrum is yet to gain traction; the old way of doing things still haunts the project
  • Delayed feedback
  • Developer distractions – meetings; problems with machines; network connectivity; complex build process

NIH…

When a developer inherits a codebase, there is always a case of NIH (not invented here) syndrome. I am guilty of it, but with experience and professional maturity I have become more objective in my evaluation of third-party codebases. Recently, I had the chance to work on a codebase that ticked all the boxes of what a good software application should contain. The codebase exhibited the following characteristics:

  • Unit testing and Mock objects
  • Dependency Injection
  • IoC Containers
  • MVC
  • N-tier architecture (and a slight hint of domain-driven design)
  • Fluent Interfaces
  • NANT builds
  • Code-generation
  • Template meta-programming(generics)

These are fairly advanced concepts in software engineering and represent the software engineering zeitgeist. Unfortunately, these practices will be lost on the majority of enterprise developers (unless you are working in a dedicated software shop that's at the bleeding edge). For the typical enterprise developer, this translates into software that is harder to maintain and extend, and for which the sprint velocity is going to be dismal. The inheritors of the codebase have the additional task of understanding both the domain and the engineering approach.

The question then becomes: how do you balance the need to create supple software systems whilst staying true to the software engineering discipline? Can you sacrifice the need for rigour in favour of comprehension and program understanding, or vice versa?

From XP Developer to ScrumMaster

As luck (or misfortune) would have it, I have been tasked with ensuring that the team delivers the right software on time and to the product owner's satisfaction. My previous forays into extreme programming (XP) mean that there is zero cost of adoption for this new way of working. It could be argued that Scrum and XP are polymorphic: they both inherit from the same agile shrub.

It is tempting to approach the ScrumMaster role from a developer's perspective, given that I am a developer to the core. But this misses the essence of being a ScrumMaster. I need to balance the demands of the product owner with those of the development team whilst being aware of the latent political interests that subtly influence the project. Some of these interests have a less than positive impact on the project and are impediments that need to be reported if the project is not to grind to a halt.

Agile is new to the organisation, and the ideas of product owners, sprint velocity, burn-downs, sprints, product backlogs and sprint backlogs are alien in certain quarters. It is not uncommon to be asked to provide an estimate on the spot without consulting the team. My advice: don't do it, and don't over-commit the team.

24.3.09

HPC Windows Project back on track

After a turbulent winter, I have now resumed work on the Beowulf cluster project. All my weekends and evenings for the next quarter are already spent. I wonder how I will sell this proposition to the girlfriend. Perhaps a chivalrous pitch might do the trick..."Darling, it's a climate change calculation engine..."

The next few weeks will see me configure the head node and install the Sun Grid Engine tools. Then the slave nodes will be configured, and the obligatory LINPACK benchmarking tools will be executed to determine the cluster's performance.

We have a release candidate

After weeks of tireless effort, we have a release candidate for the product I am currently working on. It's WPF all the way through: we have successfully ported a WinForms application to WPF. The WPF Toolkit has proved invaluable in making this migration painless. We can use controls that we have come to know and love in the WinForms UI – DataGrids, DatePickers, DateTimePickers, you name it, the WPF Toolkit has them. We have had to make changes to the way these controls work, and this is all possible because we have the source code available.

7.3.09

MVVM – a view gone too far?

After spending a week spiking this for a tablet PC application, it quickly dawned on me that the ViewModel – the data specialised for the view – is really the adapter pattern. The model is adapted to suit the expectations of the WPF UI. Adapting the model for the view means that you can essentially ignore converters.

//TODO:Add code
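
In lieu of the promised code, a minimal sketch of the idea; the Customer type and its members are illustrative, not from a real project:

using System;

// The ViewModel adapts the domain model to the shape the WPF view binds to,
// so no value converter is needed in the XAML. (A real one would also
// implement INotifyPropertyChanged.)
public class Customer
{
    public DateTime JoinedOn { get; set; }
}

public class CustomerViewModel
{
    private readonly Customer _model; // the adaptee

    public CustomerViewModel(Customer model)
    {
        _model = model;
    }

    // The view binds to a display-ready string rather than converting DateTime itself.
    public string JoinedOn
    {
        get { return _model.JoinedOn.ToString("dd MMM yyyy"); }
    }
}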

“But it’s auto-generated, so why does it need to be unit tested?”

I recently overheard a conversation in which one developer was explaining the legacy code he had inherited to a new starter on the team. The developer described the code generators and the accompanying unit tests for the generated artifacts. Naturally, the new developer asked why they needed unit tests if they were automagically making this stuff. This is a valid question, and one which begs a decent answer. I will try to answer it based on my experience as a test-infected developer.

My response is as follows:

  1. Auto-generation of software artifacts does not guarantee correctness. This is a garbage-in-garbage-out view of code generation. Engineers need to be sure that the changes that will inevitably occur in the code generation templates do not introduce defects. By running unit tests against the generated code you are testing that the generators work as they should and that specified inputs produce predictable outputs. The unit tests enforce the implied contract which stipulates that my code generator should always produce artifacts that do X.
  2. Unit tests provide a reliable and repeatable way of regression testing the outputs of the generated artifacts. A change to the code templates might introduce subtle bugs. How else would you be able to detect these in the absence of automated unit testing? You are covering your own hide by having a battery of tests (see the sketch after this list).
  3. Unit tests allow you to develop with courage. You get immediate feedback when defects are introduced into the codebase (this assumes you are continuously running your tests, which you should).
  4. The quality of the unit tests also dictates the testing experience. Several heuristics exist for developing good tests.
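
As a concrete, if hypothetical, illustration of point 2: a regression test over a generator might look like this (CodeGenerator.GenerateClass is an invented name):

using NUnit.Framework;

[TestFixture]
public class CodeGeneratorRegressionTests
{
    // Pins down the generator's contract: a known input must keep producing
    // the expected artifact even after the templates change.
    [Test]
    public void KnownInputProducesExpectedArtifact()
    {
        var source = CodeGenerator.GenerateClass("Customer");
        StringAssert.Contains("public class Customer", source);
    }
}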

6.3.09

Working with legacy code

I have recently been tasked with migrating an MVC WinForms application to WPF. Looking at the existing codebase, I cannot reuse any of the controllers because they are very aware of the WinForms view. This is fine, because the controller handles gestures from the view and should thus know the view it controls. So we are faced with migrating the behaviour in our WinForms-centric controllers to WPF controllers.

The model code is well developed, as is the accompanying repository. However, the need for master/detail views in WPF means that we are going to have to alter the read-only properties in the domain objects. But do we really want to change the legacy code?

21.2.09

Who should manage the layout of my modular UI (WPF context)?

The easy answer is: "the LayoutManager". Does layout responsibility rest with the view or the controller (ref MVC)? Whose business is it to know about the positioning of modular control xxx and whether it should be visible when an element in modular control zzz is clicked?

How do I make the best use of the MVC triad when I need to precisely manage my modular UI layout?

Do I need a view-specific layout controller that is pixel- and coordinate-aware – perhaps a variation, an MVCC? Should the additional C reside in the XAML code-behind, or can I weave in the layout rules at runtime? When the scope for cross-window navigation is severely restricted, what pattern can I turn to?

My current problem domain brings these questions to the fore.

GUI layout tends to be orthogonal to the core concerns of application development, yet it is critical to the success and acceptability of the application. To the user, the UI is the application.

I will be prototyping various approaches to this vexatious issue in an attempt to develop a maintainable approach. At present a finite state approach in which the main actors are the view and some layout policy seems likely.
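
An early sketch of that finite-state idea (all names invented for illustration): the view reports state transitions and a layout policy maps each state to the set of modular controls that should be visible.

using System.Collections.Generic;
using System.Windows;

public enum UiState { Idle, CapturingOutcome, SelectingServices }

public class LayoutPolicy
{
    // Each UI state maps to the modular controls visible in that state.
    private readonly IDictionary<UiState, UIElement[]> _visibleInState;

    public LayoutPolicy(IDictionary<UiState, UIElement[]> visibleInState)
    {
        _visibleInState = visibleInState;
    }

    // Called by the view whenever the user drives a state transition.
    public void Apply(UiState state, IEnumerable<UIElement> allModularControls)
    {
        foreach (var control in allModularControls)
            control.Visibility = Visibility.Collapsed;
        foreach (var control in _visibleInState[state])
            control.Visibility = Visibility.Visible;
    }
}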

When everything is a DependencyProperty, life’s a beach…

When faced with a requirement to automatically generate parts of a modular UI based on events raised by interaction with other controls, the question is normally: how do I position the generated controls in such a way that there is no conflict between my design-time view of the UI and its runtime view?

Combining the grid layout with the use of DependencyProperty resolves this dilemma:

public VanillaLayoutTemplate()
{
    InitializeComponent();
    PositionGroupControlsInFirstColumn();
}

private void PositionGroupControlsInFirstColumn()
{
    CaptureOutComeGroup.SetValue(Grid.ColumnProperty, 0);
    ProductServicesGroup.SetValue(Grid.ColumnProperty, 0);
}

CaptureOutComeGroup and ProductServicesGroup are hidden group controls positioned in the first column. When the page is rendered, the two controls are overlapped and hidden; visibility is then managed as the user provides various inputs.
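
A handler along these lines (illustrative, not from the original code) then swaps the overlapped groups in response to user input:

// Lives alongside the template code above; shows one group, hides the other.
private void OnOutcomeOptionSelected(object sender, RoutedEventArgs e)
{
    CaptureOutComeGroup.Visibility = Visibility.Visible;
    ProductServicesGroup.Visibility = Visibility.Collapsed;
}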

WPF Toolkit and the DatePicker Control

The journey started out with a hunt for a date control that would form part of the user interface. The core WPF API does not come equipped with this capability, so I decided that rather than lose precious time trying to build one, I would Google for it.

A download from CodePlex later, I had the binaries on the development machine and I was ready to rock. Lo and behold…there was none of the precious documentation that us developers have become fat on, but as I soon discovered, integrating the control is pretty simple.

I added the new assembly reference to VS2008 and this was loaded as shown below:

[screenshot: the new assembly reference in VS2008]

The XAML for the integrated control is:

<GroupBox Grid.Row="5" Header="Call back notes" FontSize="14" Margin="1,1,1,308" Visibility="Hidden" Name="CallBackNotesGroup">
  <StackPanel Grid.Row="5" Height="218" VerticalAlignment="Top">
    <TextBox Margin="2,2,2,2" Name="textBox1" TextWrapping="Wrap" MaxWidth="300" MaxHeight="120" BorderBrush="Black" Height="100" />
    <Controls:DatePicker></Controls:DatePicker>
    <Button Grid.Row="3" Margin="2,2,2,2" Name="button1" FontSize="14" Width="163" HorizontalAlignment="Left">Arrange call back</Button>
  </StackPanel>
</GroupBox>


The rendered UI looks something like this:

[screenshot: the rendered "Call back notes" group with the DatePicker]

The control exhibits automatic polite validation behaviour which is activated when an invalid date string is entered in the text box. That’s one layer of validation that I don’t need to worry about!

15.2.09

Generating the lattice from first principles

In an attempt to model the binomial lattice, I will have to generate the lattice data structure and decorate it with various properties (probabilities, prices, node IDs etc.), so I thought I would approach the problem by first generating the node IDs for each time-step. So here we go:

public void GenerateLatticeNodes()
{
    var numberOfTimeSteps = 20;
    var nodes = new Hashtable();
    var nodeId = 1;
    for (var i = 0; i < numberOfTimeSteps; i++)
    {
        var numberOfNodes = i + 1;
        var nodesAtTimeStep = new int[numberOfNodes];
        for (var j = 0; j < numberOfNodes; j++)
            nodesAtTimeStep[j] = nodeId++;
        nodes.Add(i, nodesAtTimeStep);
    }
    //What no test?
}


For the moment this is just a collection of identifiers to which the object richness (the notion of a node having an asset price, a probability of up/down movement etc.) will be added.

The node numbering scheme adopted is such that the initial node is 1 and subsequent nodes are numbered incrementally in an up/down fashion.
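
And to answer the comment in the snippet above: a test becomes straightforward once the method is reshaped to return what it builds (a sketch; the refactored LatticeBuilder signature is my assumption):

using System.Collections;
using NUnit.Framework;

[TestFixture]
public class LatticeNodeGenerationTests
{
    // Assumes GenerateLatticeNodes is refactored to take the number of
    // time-steps and return the Hashtable it populates.
    [Test]
    public void EachTimeStepHasOneMoreNodeThanTheLast()
    {
        Hashtable nodes = LatticeBuilder.GenerateLatticeNodes(20);

        for (var i = 0; i < 20; i++)
            Assert.AreEqual(i + 1, ((int[])nodes[i]).Length);
    }
}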

9.2.09

Probability diffusion in the binomial lattice

Given that we can express the diffusion of asset prices in the binomial tree as a summation of the form

E[S_n] = \sum_{k=0}^{n} \binom{n}{k} p^{n-k} (1-p)^{k} \, S_0 u^{n-k} d^{k}

The expected stock price at a given time-step can be expressed in terms of the probability, p, of an upward movement, u, of the underlying asset. Conversely, the probability of a downward movement in the asset price is denoted 1-p, where d is the amount by which the stock moves down. Hence the expected asset price at the first time-step can be summarised as:

E[S_1] = p \, u S_0 + (1-p) \, d S_0

Why the vectors? Well, I suppose there is a directional, albeit unidirectional, aspect to the diffusion. We are interested in heading towards the future and ascertaining the expected value of the asset at the expiry date.

The obvious question then becomes, “how do we generalize the findings above for a binomial tree with n time-steps?”

\sum_{k=0}^{n} \binom{n}{k} p^{n-k} (1-p)^{k} = 1

The probability diffusion for each time-step appears to follow the summation above. A few tests should be enough to prove whether this is actually the case.
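
A first such test might look like this (a sketch; the binomial coefficient helper is mine):

using System;
using NUnit.Framework;

[TestFixture]
public class ProbabilityDiffusionTests
{
    // The node probabilities at any time-step should sum to unity.
    [Test]
    public void NodeProbabilitiesAtATimeStepSumToOne()
    {
        const double p = 0.6;
        const int n = 5;
        double sum = 0;
        for (var k = 0; k <= n; k++)
            sum += BinomialCoefficient(n, k) * Math.Pow(p, n - k) * Math.Pow(1 - p, k);
        Assert.That(sum, Is.EqualTo(1.0).Within(1e-12));
    }

    private static double BinomialCoefficient(int n, int k)
    {
        double result = 1;
        for (var i = 1; i <= k; i++)
            result = result * (n - k + i) / i;
        return result;
    }
}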

6.2.09

Modelling the growth of asset prices in a binomial lattice

Why would we want to do this? Well, one of the key decisions when pricing call options using a binomial lattice is whether it is optimal to exercise the option or let the option lapse. We need to know whether we are in the money or not. The value of the asset price in relation to the strike price is a good indicator: if the asset is cheaper in the spot market we let the option lapse; otherwise we exercise the option, buy the asset at the strike price and sell it on the spot market, making ourselves a tidy little profit :).

[diagram: binomial lattice of asset prices]

Starting at time t=0, and given up and down movement factors u and d derived from the asset's volatility, the asset prices for a given time-step follow the distribution:

S_i = S_0 \, u^{t-i} d^{i}, \quad i = 0, 1, \ldots, t

We then proceed to test the assertion above using NUnit (note the syntactic sugar, which makes the testing of collections much more direct):


[Test]
public void StockPriceGrowth()
{
    var t = 2;
    var u = 1.1;
    var d = 0.9;
    var initialAssetPrice = 20;
    var stockPrices = new ArrayList();
    var expectedAssetPrices = new[] {24.2, 19.8, 16.2};

    for (var i = 0; i <= t; i++)
    {
        var stockPrice = Math.Round(initialAssetPrice * (Math.Pow(u, t - i) * Math.Pow(d, i)), 2);
        stockPrices.Add(stockPrice);
    }
    Assert.That(stockPrices, Is.EquivalentTo(expectedAssetPrices));
}
And hooray, we get the green bar!

[screenshot: NUnit green bar]
The stock price generation behaviour belongs in a utility class responsible for generating the asset prices at a given time-step, so we refactor the test to the following:
[Test]
public void StockPriceGrowth()
{
    var t = 2;
    var u = 1.1;
    var d = 0.9;
    var initialAssetPrice = 20;
    var expectedAssetPrices = new[] {24.2, 19.8, 16.2};
    var stockPrices = BinomialCalculator.StockPricesForTimeStep(t, u, d, initialAssetPrice);
    Assert.That(stockPrices, Is.EquivalentTo(expectedAssetPrices));
}
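
The extracted method isn't shown in the post; a plausible body, lifted straight from the original test's loop, would be:

using System;
using System.Collections;

public static class BinomialCalculator
{
    // Generates the asset prices at time-step t given up/down factors u and d.
    public static ArrayList StockPricesForTimeStep(int t, double u, double d, double initialAssetPrice)
    {
        var stockPrices = new ArrayList();
        for (var i = 0; i <= t; i++)
            stockPrices.Add(Math.Round(initialAssetPrice * (Math.Pow(u, t - i) * Math.Pow(d, i)), 2));
        return stockPrices;
    }
}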

5.2.09

Generalising the binomial lattice for pricing European options

As luck would have it, I am now building a binomial lattice pricing model that employs the concepts of risk-neutral valuation, the risk-free rate and expected option values at a given time step. I am approaching the problem from a computational angle; I am investigating the computational efficiency and complexity of the resulting algorithm as well as the parallelizability of the code. For each node in the lattice I aim to calculate the polynomial representing the probability function. Summing these node polynomials for a given time step and applying backwardation should yield the net present value of the option. The scope of this investigation is limited to European call options, but with more work I should be able to extend this to price American options.
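
For comparison while that investigation proceeds, the classic backward-induction valuation can be sketched as follows (illustrative code, not the polynomial approach described above):

using System;

public static class LatticePricer
{
    public static double PriceEuropeanCall(double s0, double strike, double u, double d,
                                           double riskFreeRate, int timeSteps, double dt)
    {
        var p = (Math.Exp(riskFreeRate * dt) - d) / (u - d); // risk-neutral probability
        var discount = Math.Exp(-riskFreeRate * dt);

        // Option payoffs at expiry, one per terminal node.
        var values = new double[timeSteps + 1];
        for (var i = 0; i <= timeSteps; i++)
        {
            var assetPrice = s0 * Math.Pow(u, timeSteps - i) * Math.Pow(d, i);
            values[i] = Math.Max(assetPrice - strike, 0);
        }

        // Backwardation: discount the risk-neutral expectation at each earlier node.
        for (var step = timeSteps - 1; step >= 0; step--)
            for (var i = 0; i <= step; i++)
                values[i] = discount * (p * values[i] + (1 - p) * values[i + 1]);

        return values[0];
    }
}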

4.2.09

Southern Railway broken payment processing workflow

[screenshot: Southern Railway payment page error]

Just how bad does it get? It seems the failure of the train system really is systemic. How could this happen on a live site that takes payments? It does not fill me with a lot of confidence; I guess the passwords are probably stored in clear text in the database.

This little episode transpired whilst I was trying to purchase a season ticket (an annual train ticket). It happened right at the end of the workflow. The culprit: a missing include file; does this stuff ever get tested? Just because it works in the dev environment does not guarantee that it will work in the live one. You would have thought that the deployment process, plus configuration management and the army of QA testers, would have caught this, but it seems they never made it into the office due to the snowy conditions. Sigh.

Automated unit and functional tests would most probably have created visibility of this issue.

11.1.09

Integrating NHibernate with WPF

A few things to watch out for when you do this integration:

  1. Ensure that the hibernate.cfg.xml file is present.
  2. Set the properties of the hibernate configuration file so that it is always copied to the output folder (bin/Debug for debugging):

[screenshot: properties for hibernate.cfg.xml]

Failure to do so will result in the following error: “An exception occurred during configuration of persistence layer.”

[screenshot: the persistence layer configuration exception]
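
With the configuration file in place, the bootstrap itself is the stock NHibernate pattern; sketched below, with the helper shape being my own illustration:

using NHibernate;
using NHibernate.Cfg;

public static class NHibernateBootstrap
{
    // Configure() with no arguments reads hibernate.cfg.xml from the output folder.
    private static readonly ISessionFactory SessionFactory =
        new Configuration().Configure().BuildSessionFactory();

    public static ISession OpenSession()
    {
        return SessionFactory.OpenSession();
    }
}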

Converting an XBAP to a WPF application

After starting out on the XBAP route and quickly hitting the sandbox limit, it was time to move to an application paradigm that would support file access as well as database interaction. So I decided to convert the XBAP to a plain old WPF application.

I added a new Window1.xaml file and reconfigured the start URL in the App.xaml file to point to the newly added file.

After messing around with the property sheet for the solution, I proceeded to compile the solution and instead ended up with the following error message.

[screenshot: the compiler error message]

There was no option to change the setting until I poked around in the *.csproj file (the project definition file), which contained the relevant XML nodes <IsWebBootstrapper>, <HostInBrowser> and <TargetZone>. Delete these nodes and F5 the project.

<PropertyGroup>
  <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
  <SchemaVersion>2.0</SchemaVersion>
  <ProjectGuid>{DDA5DF2A-BCAF-4B2E-9A86-7E7F5F8292C7}</ProjectGuid>
  <OutputType>WinExe</OutputType>
  <AppDesignerFolder>Properties</AppDesignerFolder>
  <RootNamespace>CharityWorkSpace.Zainco.YCW.CRM</RootNamespace>
  <AssemblyName>CharityWorkSpace.YCW</AssemblyName>
  <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
  <FileAlignment>512</FileAlignment>
  <ProjectTypeGuids>{60dc8134-eba5-43b8-bcc9-bb4bc16c2548};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
  <WarningLevel>4</WarningLevel>
  <EnableSecurityDebugging>true</EnableSecurityDebugging>
  <StartAction>URL</StartAction>
  <HostInBrowser>true</HostInBrowser>
  <TargetZone>Internet</TargetZone>
  <GenerateManifests>false</GenerateManifests>
  <SignManifests>false</SignManifests>
  <ManifestKeyFile>Records Management_TemporaryKey.pfx</ManifestKeyFile>
  <ManifestCertificateThumbprint>3327D64CAF3D6C7423AD45AC8121C526DB8AB168</ManifestCertificateThumbprint>
  <IsWebBootstrapper>true</IsWebBootstrapper>

7.1.09

StaleStateException

NHibernate.StaleStateException: Unexpected row count: 0; expected: 1

at NHibernate.AdoNet.Expectations.BasicExpectation.VerifyOutcomeNonBatched(Int32 rowCount, IDbCommand statement)
at NHibernate.AdoNet.NonBatchingBatcher.AddToBatch(IExpectation expectation)
at NHibernate.Persister.Collection.AbstractCollectionPersister.Recreate(IPersistentCollection collection, Object id, ISessionImplementor session)
at NHibernate.Action.CollectionRecreateAction.Execute()
at NHibernate.Engine.ActionQueue.Execute(IExecutable executable)
at NHibernate.Engine.ActionQueue.ExecuteActions(IList list)
at NHibernate.Engine.ActionQueue.ExecuteActions()
at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session)
at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event)
at NHibernate.Impl.SessionImpl.Flush()
at NHibernate.Transaction.AdoTransaction.Commit()
at Zainco.YCW.Tests.Group.AddYCWGroup() in Group.cs: line 70

This usually means that your database is in an inconsistent state (now say that quickly):

UPDATE MyTable SET Description='xxxx' WHERE Id=123;

If an updatable record is not found, then the exception above is thrown. Ensure that you have applied the cascade attribute on the relevant associated objects.
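
For illustration, a cascade setting on a collection mapping looks something like this (entity and column names invented):

<bag name="Members" cascade="all-delete-orphan">
  <key column="GroupId" />
  <one-to-many class="Member" />
</bag>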

2.1.09

Joined-Subclass exception

NHibernate.MappingException: 'extends' attribute is not found.

at NHibernate.Cfg.XmlHbmBinding.ClassBinder.GetSuperclass(XmlNode subnode)
at NHibernate.Cfg.XmlHbmBinding.MappingRootBinder.AddJoinedSubclasses(XmlNode parentNode)
at NHibernate.Cfg.XmlHbmBinding.MappingRootBinder.Bind(XmlNode node)
at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)

NHibernate.MappingException: Could not compile the mapping document: Zainco.YCW.Components.Mappings.Member.hbm.xml

at NHibernate.Cfg.Configuration.LogAndThrow(Exception exception)
at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)
at NHibernate.Cfg.Configuration.ProcessMappingsQueue()
at NHibernate.Cfg.Configuration.AddDocumentThroughQueue(NamedXmlDocument document)
at NHibernate.Cfg.Configuration.AddXmlReader(XmlReader hbmReader, String name)
at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name)
at NHibernate.Cfg.Configuration.AddResource(String path, Assembly assembly)
at NHibernate.Cfg.Configuration.AddAssembly(Assembly assembly)
at NHibernate.Cfg.Configuration.AddAssembly(String assemblyName)
at NHibernate.Cfg.Configuration.DoConfigure(IHibernateConfiguration hc)
at NHibernate.Cfg.Configuration.Configure()
at Zainco.YCW.Components.Utils.NHibernateHelper.get_Session() in
NHibernateHelper.cs: line 13
at Zainco.YCW.Tests.MemberTest.AddMember() in MemberTest.cs: line 70

If you get this exception, then you have probably defined your joined-subclass outside the parent class, which is probably not a good idea if you are aiming to have highly cohesive and modular mapping files. The problem is easily resolved by either adding an extends attribute to the joined-subclass element:

<joined-subclass name="KeyLeaderNationalTeamTasks"  table="KeyLeaderNationalTeamTasks" extends="Task">

or nesting the joined-subclass in the class definition.
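
The nested form, for completeness (the class names follow the snippet above; the id and key columns are illustrative):

<class name="Task" table="Task">
  <id name="Id" column="Id">
    <generator class="native" />
  </id>
  <joined-subclass name="KeyLeaderNationalTeamTasks" table="KeyLeaderNationalTeamTasks">
    <key column="TaskId" />
  </joined-subclass>
</class>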