Handling Sales Orders from Page based Web Services – in NAV 2009 SP1 (and RTM)

First of all, there isn't going to be a new post on every single record type on how to handle them from Web Services – but Sales Orders are special, and the "(and RTM)" in the title refers to the fact that there are changes between RTM and SP1. Or maybe a better way to state it is that the way you could do it in RTM (which might lead to errors) is no longer possible – so you have to do it right.

Secondly, please read the post about the Web Service Changes in SP1 before reading this post – you can find that post here.

Working with the Sales Orders Page from Web Services in NAV 2009SP1

Just to recap a couple of facts about Page based Web Services: we are using the pages and the code on the pages to work with sales orders. This means that we need to mimic the way the RoleTailored Client works – and the RoleTailored Client doesn't create the order header and all the lines in one go before writing anything to the database. Instead, what really happens is that once you leave the primary key field (the Order No.), the Order Header is created in the table. The same goes for the lines: they are created first, then you type data into them, after which they are updated.

So what we need to do is to create the sales order in 4 steps:

1. Create the Order Header
2. Update the Order Header
3. Create an Order Line
4. Update an Order Line
(repeat steps 3-4)

Now this doesn't mean that you have to do 2 + (number of order lines) * 2 roundtrips to the server (fortunately) – but you always need 3 roundtrips:

1. Create the Order Header
2. Update the Order Header and Create all Order Lines
3. Update all Order Lines

meaning that you can create all order lines in one go (together with updating header info) and you can update them all in one go.

A code sample for doing this:

// Create Service Reference
var service = new SalesOrder_Service();
service.UseDefaultCredentials = true;

// Create the Order header
var newOrder = new SalesOrder();
service.Create(ref newOrder);

// Update Order header
newOrder.Sell_to_Customer_No = "10000";
// Create Order lines
newOrder.SalesLines = new Sales_Order_Line[2];
for (int idx = 0; idx < 2; idx++)
    newOrder.SalesLines[idx] = new Sales_Order_Line();
service.Update(ref newOrder);

// Update Order lines
var line1 = newOrder.SalesLines[0];
line1.Type = SalesOrderRef.Type.Item;
line1.No = "LS-75";
line1.Quantity = 3;
var line2 = newOrder.SalesLines[1];
line2.Type = SalesOrderRef.Type.Item;
line2.No = "LS-100";
line2.Quantity = 3;
service.Update(ref newOrder);

After invoking Create(ref newOrder) or Update(ref newOrder) we get the updated sales order back from NAV, and we know that all the <field>Specified properties are set to true and all strings that have a value are not null – so we can just update the fields we want to change, call Update(ref newOrder) again, and rely on the fact that SP1 only updates the fields that actually changed.

This behavior is pretty different from NAV 2009 RTM web services, which would write all fields to the database unless you set string fields to null or set <field>Specified to false (as described in my previous post).

Making the above code run in NAV 2009 RTM

What we really want to do here, is to mimic the behavior of NAV 2009 SP1 in RTM – without having to change the logic.

So I went ahead and wrote two functions: one for making a copy of a record (to be invoked right after your Create(ref xx) or Update(ref xx)), so that we have a copy of the object coming from NAV; and another for preparing our object for update (to be invoked right before calling Update(ref xx)), which compares our object with the old copy, sets <field>Specified to false for all unchanged non-string fields and sets all unchanged string fields to null.

The two functions are listed towards the end of this post.

Our code from above would then look like:

// Create Service Reference
var service = new SalesOrder_Service();
service.UseDefaultCredentials = true;

// Create the Order header
var newOrder = new SalesOrder();
service.Create(ref newOrder);
SalesOrder copy = (SalesOrder)GetCopy(newOrder);

// Update Order header
newOrder.Sell_to_Customer_No = "10000";
// Create Order lines
newOrder.SalesLines = new Sales_Order_Line[2];
for (int idx = 0; idx < 2; idx++)
    newOrder.SalesLines[idx] = new Sales_Order_Line();
PrepareForUpdate(newOrder, copy);
service.Update(ref newOrder);
copy = (SalesOrder)GetCopy(newOrder);

// Update Order lines
var line1 = newOrder.SalesLines[0];
line1.Type = SalesOrderRef.Type.Item;
line1.No = "LS-75";
line1.Quantity = 3;
var line2 = newOrder.SalesLines[1];
line2.Type = SalesOrderRef.Type.Item;
line2.No = "LS-100";
line2.Quantity = 3;
PrepareForUpdate(newOrder, copy);
service.Update(ref newOrder);

and this code actually runs on SP1 as well – and causes smaller payloads to be sent over the wire (not that I think that is an issue).

Deleting a line from an existing Sales Order

Now we have seen how to create a Sales Order with a number of lines – but what if you want to delete a line after having saved the Sales Order? On the service object you will find a method called Delete_SalesLines, which takes a key and deletes that Sales Line.

service.Delete_SalesLines(line.Key);

The only caveat is that if you want to do any more work on the Sales Order, you will have to re-read it – otherwise you will get an error telling you that somebody changed the record (and that somebody would be you).

So deleting all lines from a Sales Order could be done by:

foreach (Sales_Order_Line line in so.SalesLines)
    service.Delete_SalesLines(line.Key);

and then you would typically re-read the Sales Order with the following line:

so = service.Read(so.No);

That wasn’t so bad.

My personal opinion is that we should change the Delete_SalesLines to be:

service.Delete_SalesLines(ref so, line);

Which is why I created a function that does exactly that:

void Delete_SalesLines(SalesOrder_Service service, ref SalesOrder so, Sales_Order_Line line)
{
    Debug.Assert(so.SalesLines.Contains<Sales_Order_Line>(line));
    service.Delete_SalesLines(line.Key);
    so = service.Read(so.No);
}

Note that I just re-read the order, losing any changes you have made to the order or order lines. Another approach could be to remove the deleted line from the lines collection, but things become utterly complicated when we try to mimic in the consumer a behavior that IMO should be on the server side.
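
For completeness, a minimal sketch of that client-side approach (my own illustration – it only keeps the local array in sync, which is exactly why re-reading is the safer choice):

// Sketch only: drop the deleted line from the local SalesLines array
// instead of re-reading the whole order ('so' and 'line' as above).
so.SalesLines = so.SalesLines.Where(l => l.Key != line.Key).ToArray();
// Caveat: the order's own Key (and the remaining line keys) may have
// changed server-side, so a subsequent Update can still fail.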

A different approach would be to create a codeunit for deleting lines and expose this as an extension to the page (functions added to the page), but we wouldn't gain anything, since we would still have to re-read the order afterwards.

Adding a line to an existing Sales Order

Things are more complicated when we want to add a line to an existing Sales Order through the Sales Order Page.

Actually it isn’t complicated to add the line – but it is complicated to locate the newly added line after the fact to do modifications, because it is still true that you need to add the line first and then modify the line afterwards (and update the order).

Adding the line is:

// Append a new (empty) line to the Lines array and update (meaning create the line)
so.SalesLines = so.SalesLines.Concat(new[] { new Sales_Order_Line() }).ToArray();
service.Update(ref so);

This adds a new line to the array of lines and updates the order.

After invoking Update, the newly added line is the last one in the array (unless somebody messed around in the app-code and made this assumption false).

My personal opinion is that we should add another method to the service called

service.Add_SalesLines(ref so, ref line);

so that we would have the newly added line available to modify and the so available for service.Update(ref so), which is why I created a function that does exactly that:

Sales_Order_Line AddLine(SalesOrder_Service service, ref SalesOrder so)
{
    // Append a new (empty) line to the Lines array and update (meaning create the line)
    so.SalesLines = so.SalesLines.Concat(new[] { new Sales_Order_Line() }).ToArray();
    service.Update(ref so);
    return so.SalesLines[so.SalesLines.Length - 1];
}

Again – if this method existed server side, automatically added by NAV WS, it would be able to do the right thing even if people had mangled the application logic and changed the line numbering sequence or whatever.

A different approach would be to create a codeunit for adding lines and expose this as an extension to the page (functions added to the page). It wouldn't make the consumer's job much easier, since we would still have to have any updates to the SalesOrder written before calling the function AND we would have to re-read the sales order after calling the function.

Remember that if you are using this function from NAV 2009 RTM, you might want to consider using PrepareForUpdate before AddLine and GetCopy after AddLine, just as you would with Update. You could even add another parameter and have that done by the function itself.

Working with other Header/Line objects

Although the samples in this post use Sales Orders, the same pattern can be reused for other occurrences of the Header/Line pattern. Just remember that the Header needs to be created first; then you can update the header and create the lines – and last (but not least) you can update the lines. A sketch of the same flow for purchase orders follows below.
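
Here is that sketch – assuming you have exposed the Purchase Order page as a web service named PurchaseOrder (the proxy and field names below are what Visual Studio would typically generate, so treat them as illustrative):

// Assumes the Purchase Order page is exposed as a service named PurchaseOrder
var service = new PurchaseOrder_Service();
service.UseDefaultCredentials = true;

// 1. Create the header
var po = new PurchaseOrder();
service.Create(ref po);

// 2. Update the header and create the lines in the same roundtrip
po.Buy_from_Vendor_No = "10000";
po.PurchLines = new Purchase_Order_Line[1];
po.PurchLines[0] = new Purchase_Order_Line();
service.Update(ref po);

// 3. Update the lines
po.PurchLines[0].Type = PurchaseOrderRef.Type.Item;
po.PurchLines[0].No = "LS-75";
po.PurchLines[0].Quantity = 5;
service.Update(ref po);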

GetCopy and PrepareForUpdate

Here is a code listing of GetCopy and PrepareForUpdate – I have tested these functions on a number of different record types and they should work generically.

/// <summary>
/// Get a copy of a record for comparison use afterwards
/// </summary>
/// <param name="obj">the record to copy</param>
/// <returns>a copy of the record</returns>
object GetCopy(object obj)
{
    Type type = obj.GetType();
    object copy = Activator.CreateInstance(type);
    foreach (PropertyInfo pi in type.GetProperties())
    {
        if (pi.PropertyType.IsArray)
        {
            // Copy each object in an array of objects
            // The array property may be null (e.g. an order without lines yet)
            Array arr = (Array)pi.GetValue(obj, null);
            if (arr != null)
            {
                Array arrCopy = Array.CreateInstance(arr.GetType().GetElementType(), arr.Length);
                for (int arrIdx = 0; arrIdx < arr.Length; arrIdx++)
                    arrCopy.SetValue(GetCopy(arr.GetValue(arrIdx)), arrIdx);
                pi.SetValue(copy, arrCopy, null);
            }
        }
        else
        {
            // Copy each field
            pi.SetValue(copy, pi.GetValue(obj, null), null);
        }
    }
    return copy;
}

/// <summary>
/// Prepare record for update:
/// Set <field> to null if a string field hasn't been updated
/// Set <field>Specified to false if a non-string field hasn't been updated
/// </summary>
/// <param name="obj">record to prepare for update</param>
/// <param name="copy">copy of the record (a result of GetCopy)</param>
void PrepareForUpdate(object obj, object copy)
{
    Debug.Assert(obj.GetType() == copy.GetType());
    Type type = obj.GetType();
    PropertyInfo[] properties = type.GetProperties();
    for (int idx = 0; idx < properties.Length; idx++)
    {
        PropertyInfo pi = properties[idx];
        if (pi.Name != "Key" &&
            pi.GetCustomAttributes(typeof(System.Xml.Serialization.XmlIgnoreAttribute), false).Length == 0)
        {
            if (pi.PropertyType.IsArray)
            {
                // Compare an array of objects - recursively
                // Either array may be null (e.g. an order without lines) - skip in that case
                Array objArr = (Array)pi.GetValue(obj, null);
                Array copyArr = (Array)pi.GetValue(copy, null);
                if (objArr != null && copyArr != null)
                {
                    for (int objArrIdx = 0; objArrIdx < objArr.Length; objArrIdx++)
                    {
                        object arrObj = objArr.GetValue(objArrIdx);
                        PropertyInfo keyPi = arrObj.GetType().GetProperty("Key");
                        string objKey = (string)keyPi.GetValue(arrObj, null);
                        for (int copyArrIdx = 0; copyArrIdx < copyArr.Length; copyArrIdx++)
                        {
                            object arrCopy = copyArr.GetValue(copyArrIdx);
                            if (objKey == (string)keyPi.GetValue(arrCopy, null))
                                PrepareForUpdate(arrObj, arrCopy);
                        }
                    }
                }
            }
            else
            {
                object objValue = pi.GetValue(obj, null);
                if (objValue != null && objValue.Equals(pi.GetValue(copy, null)))
                {
                    // Values are the same - signal no change
                    if (pi.PropertyType == typeof(string))
                    {
                        // Strings don't have a <field>Specified property - set the field to null
                        pi.SetValue(obj, null, null);
                    }
                    else
                    {
                        // The <field>Specified is autogenerated by Visual Studio as the next property
                        idx++;
                        PropertyInfo specifiedPi = properties[idx];
                        // Exception if this assumption for some reason isn't true
                        Debug.Assert(specifiedPi.Name == pi.Name + "Specified");
                        specifiedPi.SetValue(obj, false, null);
                    }
                }
            }
        }
    }
}

That's it – not as straightforward as you could have wished for, but SP1 definitely makes things easier once it comes out.

And I will be pushing for a better programming model for this in v7.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Web Services changes in NAV 2009 SP1

NAV 2009 SP1 is being released later this year, so why write about it now?

The main reason is that NAV 2009 SP1 comes with a couple of changes you might want to take into consideration when writing towards NAV 2009 Web Services.

Besides some performance enhancements, the major changes are:

New standard encoding of the Company name

I think one of the questions that has been asked the most is: how does my company name look in the URL http://server:7047/DynamicsNAV/WS/<company>/ ?

We all know that “CRONUS International Ltd.” becomes “CRONUS_International_Ltd”. Lesser known is that “CRONUS USA, Inc.” becomes “CRONUS_USA_x002C__Inc” – and there are a lot of special cases in this encoding algorithm.

In NAV 2009 SP1 we change to the standard Uri.EscapeDataString function, meaning that

CRONUS International Ltd. becomes CRONUS%20International%20Ltd.

CRONUS USA, Inc. becomes CRONUS%20USA%2C%20Inc.

and

CRONUS+,æøå becomes CRONUS%2B%2C%C3%A6%C3%B8%C3%A5

The fact is that you can actually just type the company name with the special characters directly into the URL in the browser, and it will (in most cases) figure out how to select the right company – this even works in Visual Studio when making your Web Reference (it won't work if you have / or ? in the company name).

BTW, you can escape all the characters if you wish to:

http://localhost:7047/DynamicsNAV/WS/%43RONUS%20International%20Ltd./Services

is a perfectly good URL – if you prefer hex over characters.

This change also affects the return value of the Companies function in SystemService – it now returns the un-encoded company names (= clear text). You can no longer use the output from the Companies function directly to build your URL – you need to escape the name.

Note: there is no backwards compatibility – trying to access web services with a URL from NAV 2009 will fail; you need to change the company name encoding.
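
A minimal sketch of how you could build SP1 URLs from the clear-text names returned by the Companies function – assuming a web reference named SystemServiceRef for the SystemService endpoint (the reference name is illustrative):

var systemService = new SystemServiceRef.SystemService();
systemService.UseDefaultCredentials = true;

foreach (string company in systemService.Companies())
{
    // Companies() returns clear-text names in SP1 - escape before building URLs
    string url = "http://localhost:7047/DynamicsNAV/WS/"
        + Uri.EscapeDataString(company) + "/Page/Customer";
    Console.WriteLine(url);
}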

Schema changes in ReadMultiple and UpdateMultiple

Microsoft InfoPath couldn't modify multiple records using the NAV 2009 page based Web Services due to a schema incompatibility. In NAV 2009 SP1 the schema changes for these methods. If you are using proxy based Web Service access (Add Service Reference or Add Web Reference in Visual Studio) you should just update the reference. If you are using XML Web Services directly you might have to modify the code used to parse the XML.

I will of course modify the samples on my blog where I use XPath to query the XML.

Updating records in Page based web services only updates the fields that you actually changed

The basic principle of XML Web Services is that you send an XML document to a Web Service telling it what you want to change. Visual Studio makes it easy to create a reference to a Web Service and get strongly typed access to e.g. Customers and Sales Orders through pages.

But how do we tell Web Services which fields actually changed?

For this, Visual Studio autogenerates a <field>Specified boolean property for all non-string fields from NAV, and NAV will change ALL the fields where <field>Specified is true or where a string is not NULL – NULL in a string value doesn't mean clear the field, it means don't update the field.

If you want to clear a field, set the value to String.Empty ("").
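
Put together, the three string states look like this – a small sketch where the field names on the Customer proxy are illustrative:

// Assumes a Customer page exposed as a web service; custService is the
// generated proxy. Field names below are illustrative.
Customer_Service custService = new Customer_Service();
custService.UseDefaultCredentials = true;

Customer customer = custService.Read("10000");
customer.Address_2 = "";    // String.Empty - clears the field in NAV
customer.Post_Code = null;  // null - this field is not updated
custService.Update(ref customer);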

In some cases this has caused problems, primarily because when you read a customer record in order to change its name, it comes back from the Read function with all <field>Specified set to true and all string fields filled in. Changing the Name of a customer writes the Name – and since the Search Name is included in the data sent to Web Services, it gets updated as well (meaning that Name and Search Name could get out of sync).

In NAV 2009 SP1 that has changed. Visual Studio of course still uses <field>Specified and string field <> NULL to determine what goes over the wire, but on the NAV side we only persist what you actually changed. So in NAV 2009 SP1 you can do:

Customer customer = custService.Read("10000");
customer.Name = "The Cannon Group, Inc.";
custService.Update(ref customer);

and it will only update the name of the Customer. In NAV 2009 you would have to either set all the other fields in the customer to NULL or set <field>Specified to false in order to get the same behavior – OR you could do it like this:

Customer readCustomer = custService.Read("10000");
Customer updateCustomer = new Customer();
updateCustomer.Key = readCustomer.Key;
updateCustomer.Name = "The Cannon Group, Inc.";
custService.Update(ref updateCustomer);

This will also update only the name. (Just a small trick: instantiating a new Customer() will have all string fields set to NULL and <field>Specified set to false for all other fields – and now we can just set the fields we want to change. Remember to set <field>Specified to true for any non-string fields you do change.)

Note that this will of course work in SP1 as well, and the advantage is that you only send the new customer name over the wire to the Web Service.

Changes to how you Create and Update Sales Orders through a page based Web Service

Actually, the way you need to work with Sales Orders in NAV 2009 SP1 through a page based Web Service will also work in NAV 2009 – but the other way around is a problem. In NAV 2009 you could create a sales order with lines in just one call to Web Services, but in reality this didn't work reliably; you need to do it with a couple of roundtrips.

This is because application code (CurrPage.UPDATE) relies on one kind of transactional behavior (the new order is inserted and committed before any validation trigger starts), but Web Services enforce a different kind (every call to server is wrapped into one atomic transaction that is either committed or rolled back entirely – meaning that insert is not committed until all validation triggers passed).

I will create a special post on how to work with Sales Orders from Web Services – and show a way that works for NAV 2009 SP1 (the same method will work for NAV 2009 as well – so you should consider this early).

Web Services don't change the user's default company

In NAV 2009, a Web Service consumer application would change the user's default company, but since Web Services don't really use the notion of a default company for anything, this was confusing – and it made it impossible for a consumer application to call a web service method just to request the user's default company. In NAV 2009 SP1, invoking a Web Service method does not change the default company for the user.

Blob fields in Page based Web Services are ignored

In NAV 2009 you couldn't expose a page with a BLOB field on it (an image or whatever) as a Web Service.

In NAV 2009 SP1, this has changed. This doesn’t mean that NAV transfers the content of the BLOB to the web service consumer – the field is just ignored.

If you want access to the value of the BLOB you will need to write some code, which you can read about here:

http://blogs.msdn.com/freddyk/archive/2008/11/04/transferring-data-to-from-com-automation-objects-and-webservices.aspx

Codeunits ignore methods with unsupported parameters

Like pages with unsupported fields (BLOBs), codeunits can now be exposed as Web Services even though they contain methods that use parameter types not supported by Web Services (streams, automation, BLOBs and more); such methods are simply ignored.

In SP1 you can connect to NAV Web Services from both PHP and Java

I won't cover the details in this post, but it should be clear that NAV Web Services will be accessible from PHP and Java come SP1. As soon as I have a build which supports this, I will write a couple of posts on how to do it.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Installing NAV 2009 without A/D

The majority of NAV 2009 installations will be in a network environment: you will have a domain server and Active Directory with your users, and your biggest worries will be how to set up a 3T environment on 3 different boxes, getting the delegation set up correctly and all of that stuff.

But in a few cases we might run into situations, where there is only a LAN between a number of machines and no server.

In this post I will show what it takes to run the RoleTailored Client (or the Classic) on one XP computer and have another XP computer be the Database Tier and Service Tier – and of course run a client too.

WorkXP1 and WorkXP2

First of all – I have installed two XP computers with XP SP3 and nothing else – no Office, no Visual Studio, no nothing – just plain good old XP.

On both XPs I have used the Set up a home or small office network wizard to make the computers join a workgroup and be able to see each other.

[screenshot]

The computers are named WorkXP1 and WorkXP2 – and there is one user on each computer XP1 on WorkXP1 and XP2 on WorkXP2.

So what I want to do is to install the full NAV 2009 on WorkXP1 and only the clients on WorkXP2 – and make WorkXP2 run with WorkXP1 as the Service Tier and Database Tier for the RTC or the Classic. I will also verify that I can in fact run Web Services from WorkXP2.

I did not try to install the database tier separately from the Service Tier, as I do not think it is relevant in a scenario like this.

Installing the full NAV 2009 on WorkXP1

Note that I cannot just insert the DVD and press Install Demo, because I don't have Microsoft Outlook installed. So I select Choose an installation option, then Customize under Server (or Client or Database Components), and select the following options:

[screenshot]

and then just accept the default setup values and install.

Installing the Clients on WorkXP2

On WorkXP2 we install both clients – again Choose an installation option, select Customize under Client, and select to run the Classic from My Computer.

[screenshot]

After selecting Next you will need to click the RoleTailored Client link and specify the Service Tier computer (which is WorkXP1).

[screenshot]

Installing…

If you wonder how – I am running this experiment as two virtual machines on a Windows 2008 Server running Hyper-V.

[screenshot]

[screenshot]

Workgroup networking

As you might have guessed – starting the clients on WorkXP1 just works, and trying the same on WorkXP2 doesn’t.

We need to do 3 things in order to make this work…

1. Create the XP2 user on WorkXP1

The way workgroups work is to authenticate the XP2 user on WorkXP1 using his own username and password, so you need to create all the users on the “server” – and note, they need to have the same password.

[screenshot]

After this you of course need to create the user in NAV (using the Classic Client on WorkXP1) and give him the right permissions (In my world everybody is SUPER).

Having done this, we try to start the Classic Client on the WorkXP2 machine and get the following result:

[screenshot]

This means that the firewall is blocking access to the SQL Server.

2. Open the firewall for SQLSERVR (if you want to run Classic)

On WorkXP1 open the Control Panel and open Windows Firewall. Click Add Program, browse to locate SQLSERVR.EXE under C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn and add it to the list of exceptions:

[screenshot]

At the top of the list of exceptions you will, BTW, find DynamicsNAVServer – which means that the RoleTailored Client actually doesn't need this setting.

Trying to connect from the Classic Client now will give a different error:

[screenshot]

The reason for this is that XP by default runs Simple File Sharing, meaning that everybody logs in as Guest on a remote computer instead of logging in with their own username and password.

3. Disable Simple File Sharing on WorkXP1

On WorkXP1, in the Control Panel or any Explorer window, select Tools -> Folder Options, go to the View tab and remove the checkmark from Use simple file sharing.

[screenshot]

After having done this you should be able to connect using the Classic Client – and the RoleTailored Client should now work as well.

[screenshot]

 

It is probably not any different from what you would do with NAV 5.0 – but a number of people have asked me whether or not it is possible – and it is.

BTW – Web Services work in exactly the same way. If you start the Web Service listener on WorkXP1 you can navigate to

http://WorkXP1:7047/DynamicsNAV/WS/CRONUS%20International%20Ltd/Services (this is the SP1 URL)

You might have to open the firewall for port 7047 if it doesn’t pick up that DynamicsNAVServer is listening on both ports.
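
If you prefer to open the port from a command prompt, something like this should do it (XP's netsh firewall syntax; the name is just a label – treat the exact command as a sketch):

netsh firewall add portopening TCP 7047 "NAV Web Services"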

 

Enjoy

 

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Using LINQ with NAV Web Services

In .NET 3.5 Microsoft released LINQ (Language INtegrated Query) – which is a way of writing select statements strongly typed inside your C# code.

Wouldn’t it be nice if we could leverage this technology when connecting to NAV Web Services?

I thought YES – so I set out to find out what it took…

The goal was to be able to replace code like this:

CustomerRef.Customer_Service cservice = new CustomerRef.Customer_Service();
cservice.UseDefaultCredentials = true;

CustomerRef.Customer_Filter filter = new CustomerRef.Customer_Filter();
filter.Field = CustomerRef.Customer_Fields.Location_Code;
filter.Criteria = "=YELLOW|=BLUE";
CustomerRef.Customer_Filter[] filters = new CustomerRef.Customer_Filter[] { filter };

CustomerRef.Customer[] customers = cservice.ReadMultiple(filters, null, 0);

foreach (CustomerRef.Customer customer in customers)
{
    // do stuff
}

with code like this:

CustomerRef.Customer_Service cservice = new CustomerRef.Customer_Service();
cservice.UseDefaultCredentials = true;

var cquery = from c in new FreddyK.NAVPageQuery<CustomerRef.Customer>(cservice)
             where c.Location_Code == "YELLOW" || c.Location_Code == "BLUE"
             select c;

foreach (CustomerRef.Customer customer in cquery)
{
    // do stuff
}

I personally find this more readable – and it has a lot of other advantages:

  1. Strongly typed criteria – if fields change type you get a compile error after updating the Web Reference
  2. Don’t have to think about CultureInfo (NumberFormat and DateFormat) when doing your criteria
  3. Don’t have to duplicate quotes for your search string
  4. Don’t have to convert booleans into Yes/No
  5. Separation of Query and Values in the Query
  6. and probably a lot more…

I always heard that you only needed 3 good reasons for doing something – so this should be more than enough.

Understanding LINQ

Let's look at the above code. The following

CustomerRef.Customer_Service cservice = new CustomerRef.Customer_Service();
cservice.UseDefaultCredentials = true;

initializes the service (as usual) and

var cquery = from c in new FreddyK.NAVPageQuery<CustomerRef.Customer>(cservice)
             where c.Location_Code == "YELLOW" || c.Location_Code == "BLUE"
             select c;

creates a query (which contains an expression tree of the where clause) and

foreach (CustomerRef.Customer customer in cquery)
{
    // do stuff
}

invokes the query (calls Web Services ReadMultiple) and returns the customers.

FreddyK.NAVPageQuery is a class, which is capable of acting as a LINQ provider for all pages exposed as Web Services.

The NAVPageQuery class is a generic class where you specify the type returned by the service – and you pass the service object as a parameter to the constructor. Needless to say, these two must match – else you will get an exception.

For a class to be usable in a LINQ query it needs to implement IQueryable<type> (which is a combination of IQueryable, IEnumerable<type> and IEnumerable) – and typically you will then have another class, which implements IQueryProvider. In my sample, I have implemented both interfaces in the NAVPageQuery.

Instead of listing the interfaces, I will list what happens when the query is built (when the line with var cquery = is executed):

  1. The NAVPageQuery gets instantiated
  2. LINQ calls the method IQueryProvider IQueryable.Provider to get the IQueryProvider class
  3. LINQ calls the method System.Linq.Expressions.Expression IQueryable.Expression to get the initial expression
  4. Now LINQ builds the expression tree and calls IQueryable<S> IQueryProvider.CreateQuery<S>(Expression expression), from which it expects an IQueryable<type> object.

Now we are done building the query, and our variable cquery is of type IQueryable<Customer>.

When we then do a foreach on cquery a little later, the following happens:

  1. foreach calls into the IEnumerable interface (IEnumerator<T> IEnumerable<T>.GetEnumerator()) and asks for an enumerator
  2. In this method, I call the Execute method (TResult IQueryProvider.Execute<TResult>(Expression expression)) on the Provider (which is 'this')
  3. In the Execute method I run through the expression tree, build a number of Filter objects, collect them into an array of filters and call ReadMultiple (see the skeleton sketch below)
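
To make the shape of this concrete, here is a stripped-down skeleton (my own illustration, not the actual NAVPageQuery source) of a class implementing both IQueryable<T> and IQueryProvider in one type – the expression-to-filter translation in Execute is the real work and is left out:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class PageQuerySketch<T> : IQueryable<T>, IQueryProvider
{
    private readonly Expression expression;

    public PageQuerySketch()
    {
        // The initial expression simply represents "this" query source
        this.expression = Expression.Constant(this);
    }

    private PageQuerySketch(Expression expression)
    {
        this.expression = expression;
    }

    // IQueryable - LINQ reads these while building the query
    public Type ElementType { get { return typeof(T); } }
    public Expression Expression { get { return this.expression; } }
    public IQueryProvider Provider { get { return this; } }

    // IQueryProvider - LINQ hands us the composed expression tree
    public IQueryable<S> CreateQuery<S>(Expression expression)
    {
        return new PageQuerySketch<S>(expression);
    }

    public IQueryable CreateQuery(Expression expression)
    {
        return CreateQuery<T>(expression);
    }

    public TResult Execute<TResult>(Expression expression)
    {
        // This is where the real provider walks the expression tree,
        // builds the filter array and calls ReadMultiple on the service
        throw new NotImplementedException("translate expression -> filters -> ReadMultiple");
    }

    public object Execute(Expression expression)
    {
        return Execute<object>(expression);
    }

    // IEnumerable - foreach lands here and triggers execution
    public IEnumerator<T> GetEnumerator()
    {
        return this.Provider.Execute<IEnumerable<T>>(this.expression).GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return this.GetEnumerator();
    }
}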

You could argue that it would be a good idea to run through the expression tree while building the query – but the primary reason for NOT doing this is that you could imagine queries like:

decimal amount = 10000;

var cquery = from c in new FreddyK.NAVPageQuery<CustomerRef.Customer>(cservice)
             where c.Balance_LCY > amount
             select c;

In this case you want to separate building the query and creating the filters, since you could:

foreach (CustomerRef.Customer customer in cquery)
{
    // do stuff
}

amount = 100000;
foreach (CustomerRef.Customer customer in cquery)
{
    // do other stuff
}

This demonstrates the need for the ability to decouple the query from the values used by the query – and this is the reason for not creating the filter specs while building the query, but keeping the expression tree instead.

Another sample:

cquery = from c in new FreddyK.NAVPageQuery<CustomerRef.Customer>(cservice)
         where c.Last_Date_Modified >= DateTime.Today.AddDays(-5)
         select c;

At the time of execution, this query will give you the customers modified within the last 5 days (imagine building that one with filter strings by hand – not easy).

Important things to notice

It is important to realize that NAVPageQuery doesn't give you any functionality you didn't have before. With the original Filter class you cannot OR things together (unless it is the same field) – and you cannot do that using LINQ (towards NAV Web Services) either.

If you try to type in a where clause like

where c.Location_Code == "YELLOW" || c.Country_Region_Code == "US"

It will fail – intentionally!

Of course I could have done client side filtering – but this would only fool the developer, since you would take the performance hit when calling ReadMultiple and you wouldn’t really notice – I wanted the developer to be aware when he makes clever or not so clever choices.

So – the where clause needs to be simple enough so that I can convert it into a filter specification.

The error will be thrown when you try to use the query and it will typically be an exception stating “Query too complex” (even though it might not be complex at all).

Another thing to notice is that ANDs and ORs cannot be combined (as this isn't possible in the filter spec):

where c.Location_Code == "YELLOW" && (c.Balance_LCY > amount || c.Location_Code == "BLUE")

is not allowed, but

where c.Balance_LCY > amount && (c.Location_Code == "YELLOW" || c.Location_Code == "BLUE") && c.No != ""

works just fine.

What about updating, deleting and creating?

Nothing new there – you still have your service object, and the Customer objects returned from the LINQ query are the same objects you would get from calling ReadMultiple directly.

This class is only designed to make queries towards web services easier and more error safe.

That’s all good – but how is it made?

For once, I am not going to go through how NAVPageQuery is created in detail – I have commented the source code pretty thoroughly, and if you need to understand how it is built, you can look at the source code combined with the information on how things are executed (from above); it should be possible to decipher the source :-)

If you download the solution .zip file here, you will find a class library with one class (NAVPageQuery) and a console application called TestQuery. The console app can be deleted if you don't want it; it contains a number of samples – some of which are listed above.

When downloading the test app, you need to update the web references (so that they match your exposed Pages) – I know that I added Last_Date_Modified to the Customer Page in order to have access to that field through web services; there might be more.

Questions and comments are welcome, I prefer to answer questions on mibuso instead of here – since it typically makes the answer available to more people.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Edit In Excel R2 – Part 1 (out of 2)

This post assumes that you have read the 4 step walkthrough of how to build the Edit In Excel demo from November 2008. You can find the parts here: Part 1, Part 2, Part 3, Part 4 and the Bug Fix.

In this post I will talk about what is needed in order to be able to save an Excel spreadsheet on a local disc, edit it offline and then submit your changes later.

My first assumption was that this would just be a few small changes, but I was to learn otherwise…

Goal

The success scenario is the one where a NAV user decides to take the customer table offline in Excel to do modifications. The Excel spreadsheet is saved to a local hard drive and edited while not connected to the network.

When the user reconnects to the network, he can submit his changes, which will be sent to NAV through Web Services and validated according to business rules.

The location of NAVTemplate.vsto

Let’s just try to save the spreadsheet in our Documents folder and see what happens if we open the spreadsheet. Not surprisingly we get an error telling us that it was unable to locate NAVTemplate.vsto

[screenshot]

An immediate workaround is to save the spreadsheet next to the .vsto – then things seem to work better – but the spreadsheet is no longer attached to NAV: we cannot save changes, and Reload crashes with an exception.

The .vsto file is our deployment manifest, which points to our managed code extension to the spreadsheet – and of course this is needed in order for the spreadsheet to work.

You can read more about the architecture of Document-Level Customizations here:

http://msdn.microsoft.com/en-us/library/zcfbd2sk.aspx

Looking at the ServerDocument class, which we use in the NAVEditInExcel solution, it has a property called DeploymentManifestUrl, which is the location of the .vsto file. Adding the following line to the NAVEditInExcel project

serverDoc.DeploymentManifestUrl = new Uri(System.IO.Path.Combine(System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "NAVTemplate.vsto"));

will cause the document to be created with an absolute reference to NAVTemplate.vsto, which solves the problem of the spreadsheet not being able to locate the .vsto. In fact, if this is a network location, it should even be possible to send the Excel spreadsheet to somebody else, who would be able to modify the document and post the changes.

When doing this I decided to make another change in the NAVEditInExcel project. As you know, it creates a temporary template in the same location as the DLL and the VSTO – and then deletes it again. This is really not the preferred location for temporary files – we should create temporary files in the Temp folder, so we change the line setting the template name to:

string template = System.IO.Path.Combine(System.IO.Path.GetTempPath(), page + ".xltx");

since we don’t need to have the .xltx next to the .vsto anymore.

What do we need to save?

Members of the Sheet1 class marked with [Cached] are saved together with the document – we know that from the way we transfer the page name and the current view to the spreadsheet. Thinking about it, the only things we need to save together with the document are the dataTable (which at any time contains the changes made by the user) and the objects collection (the objects returned from the Web Service).

The DataTable class implements IXmlSerializable, and the objects collection is (as we know) an array of objects returned from a Web Service, and since these objects were sent over the wire from a Web Service, they of course also implement IXmlSerializable.

The fields collection cannot be saved, since the NAVFieldInfo class uses the PropertyInfo class, which cannot be serialized. The service connection of course cannot be serialized either – nor can the datalist class. As a matter of fact, the datalist class shouldn't be a member at all; it should be moved to the AddDataToExcel method as a local variable.

The problem now is that if we just mark the dataTable and objects members with [Cached], we need to initialize them in the NAVEditInExcel project – and there is no way we can instantiate the objects collection with the right type at that time.

A little more on cached data objects in Office documents

http://msdn.microsoft.com/en-us/library/ms178808(VS.80).aspx

and then how to programmatically add members to the cached objects

http://msdn.microsoft.com/en-us/library/48b7eyf3(VS.80).aspx

Using this knowledge gives us the following two lines, which we want to add after we have instantiated the dataTable and the objects array:

// Add dataTable and objects to the Caching collection
this.StartCaching("dataTable");
this.StartCaching("objects");

Now we just need to determine whether the spreadsheet was loaded from a file (and not started from NAV) and then act differently.

Refactoring some code

We need to do a little refactoring in order to make things work. In the existing solution the PopulateFieldsCollection method creates the fields collection, but it also creates an empty dataTable class. Since we now store the dataTable class and not the fields collection, the first thing we need to do in the new spreadsheet is to create the fields collection (a lot of things depend on this). This is the new PopulateFieldsCollection (and AddField):

/// <summary>
/// Populate Fields Collection with NAVFieldInfo for all properties in the record
/// Should work with any NAV 2009 Page exposed as WebService
/// </summary>
/// <param name="objType">Type of Object (typeof(Customer), typeof(Vendor), ...)</param>
/// <param name="fieldsType">Type of the Enum holding the property names</param>
private void PopulateFieldsCollection(Type objType, Type fieldsType)
{
    this.fields = new List<NAVFieldInfo>();

    // Key property is not part of the Enum
    // Add it manually as the first field
    AddField("Key", objType);

    // Run through the enum and add all fields
    foreach (string field in Enum.GetNames(fieldsType))
    {
        AddField(field, objType);
    }
}

/// <summary>
/// Create a NAVFieldInfo for a field
/// </summary>
/// <param name="field">Field name</param>
/// <param name="objType">Type of Object in which the field is (typeof(Customer), typeof(Vendor), ...)</param>
private void AddField(string field, Type objType)
{
    field = NAVFilterHelper.VSName(field);
    PropertyInfo pi = objType.GetProperty(field, System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance);
    if (pi != null)
    {
        NAVFieldInfo nfi = NAVFieldInfo.CreateNAVFieldInfo(field, pi, objType.Namespace);
        if (nfi != null)
        {
            // If we encounter unknown Field Types, they are just ignored
            this.fields.Add(nfi);
        }
    }
}

The Load function in the old spreadsheet does a number of things: it populates the fields collection, loads the data, populates the dataTable and adds the dataTable to the spreadsheet.

We will remove it and create one function called LoadDataTable, which creates a dataTable (based on the fields collection), loads the data from Web Services and populates the dataTable:

/// <summary>
/// Load Records from NAV via Web Services
/// </summary>
private void LoadDataTable()
{
    // Create Data Table object based on fields collection
    this.dataTable = new DataTable(this.page);
    foreach (NAVFieldInfo nfi in this.fields)
    {
        this.dataTable.Columns.Add(new DataColumn(nfi.field, nfi.fieldType));
    }

    SetFilters(this.view);
    this.objects = this.service.ReadMultiple();

    // Populate dataTable with data
    foreach (object obj in this.objects)
    {
        DataRow dataRow = this.dataTable.NewRow();
        foreach (NAVFieldInfo nfi in this.fields)
        {
            dataRow[nfi.field] = nfi.GetValue(obj);
        }
        this.dataTable.Rows.Add(dataRow);
    }
    this.dataTable.AcceptChanges();
}

As you can see, this is pieces of the PopulateFieldsCollection, Load and PopulateDataTable functions – as a matter of fact, you can delete the PopulateDataTable function as well. BTW, the AcceptChanges call was moved from AddDataToExcel; it needs to be together with the code populating the dataTable.

Right now my startup code in my spreadsheet has changed to

if (this.service != null)
{
    this.service.UseDefaultCredentials = true;

    // Create Fields collection
    this.PopulateFieldsCollection(this.service.GetObjectType(), this.service.GetFieldsType());

    Application.ScreenUpdating = false;

    bool loadData = this.ListObjects.Count == 0;
    if (!loadData)
    {
        MessageBox.Show("Spreadsheet loaded from disc");
    }
    if (loadData)
    {
        // Load Data into dataTable from Web Services
        this.LoadDataTable();
        // Add dataTable to Excel Spreadsheet
        this.AddDataTableToExcel();

        // Add dataTable and objects to the Caching collection
        this.StartCaching("dataTable");
        this.StartCaching("objects");
    }

    Application.ScreenUpdating = true;
}

from just invoking Load() before.

So we populate the fields collection and then check whether or not there is a ListObject in the document (remember, the Controls collection was empty). If there is, we must do something (for now we just display a message box).

If we are called from NAV (loadData becomes true) we will load the data and call AddDataTableToExcel (renamed from AddDataToExcel) and that should work.

If we try to compile now, we will see that the Reload() method uses Load() as well. We need to change Reload to

/// <summary>
/// Reload data from NAV (delete old dataTable, and load new data)
/// </summary>
internal void Reload()
{
    Application.ScreenUpdating = false;

    // Remove List Object
    if (this.dataTable != null)
        this.Controls.RemoveAt(0);
    else
        this.ListObjects[1].Delete();

    // Load Data into dataTable from Web Services
    this.LoadDataTable();
    // Add dataTable to Excel Spreadsheet
    this.AddDataTableToExcel();

    // If this reload was in fact a reattach of the spreadsheet, start caching dataTable and objects again
    if (!this.IsCached("dataTable"))
    {
        // Add dataTable and objects to the Caching collection
        this.StartCaching("dataTable");
        this.StartCaching("objects");
    }

    Application.ScreenUpdating = true;
}

Note that we remove the old ListObject in two different ways, based on whether or not dataTable is set. dataTable is null if the spreadsheet has been detached from NAV – I will touch more upon that later. This is also the reason why we restart caching the dataTable and objects if this was in fact a reattach.

The solution should work as before now – the only major difference is that if you save the spreadsheet to disc and try to load it again, it should not throw an exception telling you that you cannot overlap a table with another table; instead it should give you something like:

[screenshot]

This is of course not the final solution, but it shows us that we are on the right track.

Restoring the “State” of the spreadsheet

To make a long story short, the lines we need in order to restore the managed list object are:

// Remove Non-VSTO List Object
this.ListObjects[1].Delete();
// Create a new VSTO ListObject – data bound
this.AddDataTableToExcel();

Meaning that we remove the unmanaged ListObject and add the managed ListObject to Excel – seems pretty easy. But what if the document is saved on disc and you add a field to the Customer Page (+ update your web reference and recompile your NAVTemplate)? Then the managed extension assembly doesn't match the saved spreadsheet anymore, and the above logic wouldn't work.

In many cases we could just say that we don't care – but the ability to save spreadsheets that are connected to Web Services and reload data adds another dimension to the entire Excel story. You can have spreadsheets that contain a lot of other things than your dataTable, and you might not be pleased with losing the NAV Web Service connection if this happens.

I decided to build in a way of detecting this and giving the user a couple of options:

// If the spreadsheet was detached already - just ignore
if (this.IsCached("dataTable"))
{
    // We have loaded a saved spreadsheet with data
    // Check that the VSTO assembly (fields) matches the spreadsheet
    bool fieldsOK = this.dataTable.Columns.Count == this.fields.Count;
    if (fieldsOK)
    {
        for (int i = 0; i < this.fields.Count; i++)
            if (this.dataTable.Columns[i].Caption != this.fields[i].field)
                fieldsOK = false;
    }
    if (!fieldsOK)
    {
        // Schema mismatch - cannot link back to NAV
        switch (MessageBox.Show("Customer Card definition has changed since this spreadsheet was saved. Do you want to re-establish the link, reload data and lose your changes?", "Error", MessageBoxButtons.YesNoCancel))
        {
            case DialogResult.Cancel:
                // Quit
                Application.Quit();
                break;
            case DialogResult.Yes:
                // Remove Non-VSTO List Object
                this.ListObjects[1].Delete();
                // Signal reload data and reestablish link
                loadData = true;
                this.StopCaching("dataTable");
                this.StopCaching("objects");
                break;
            case DialogResult.No:
                // Detach spreadsheet from NAV
                this.dataTable = null;
                this.objects = null;
                this.StopCaching("dataTable");
                this.StopCaching("objects");
                break;
        }
    }
    else
    {
        // Remove Non-VSTO List Object
        this.ListObjects[1].Delete();
        // Create a new VSTO ListObject - data bound
        this.AddDataTableToExcel();
    }
}

3 options – Cancel quits the spreadsheet open command and no harm has been done; Yes removes the old table and sets loadData to true (meaning that the spreadsheet will reload data as if it was opened from NAV); and No detaches the spreadsheet from NAV (setting the dataTable to null and stopping the caching of dataTable and objects). Note that if you ever press Reload, it will re-attach the spreadsheet and load data from NAV again (the reason for checking whether the dataTable was null in Reload).

Yes or No will of course not touch the original spreadsheet, and you can always save under a new name.

BTW – we need one simple check in Save() as well

/// <summary>
/// Save Changes to NAV via Web Service
/// </summary>
internal void Save()
{
    if (this.dataTable == null)
    {
        MessageBox.Show("Spreadsheet was detached from NAV, cannot perform Save!");
    }
    else if (DoSave())
    {
        Reload();
    }
}

The only extra thing you need is to add the following line right before the return statement in DoSave():

this.dataTable.AcceptChanges();

Without this line you cannot edit the data, save changes to NAV, edit more data and then save locally.

Done

We are done – we now have a template you can use for modifying NAV data. You can save the spreadsheet locally and modify it while you are offline and later, when you come back online you can push your changes to NAV and this is when validation happens.

As usual you can download the project here http://www.freddy.dk/NAVTemplateR2.zip – there are no changes to what’s needed in NAV, so you should use the objects from the original Edit In Excel walkthrough inside C/AL.

BTW, if you decide to use the source, you probably need to right-click the Web References and invoke Update Web Reference to make sure that the Web Reference matches your Customer, Item and Vendor Card Pages.

Part 2?

So why is this part 1 (out of 2), you might ask yourself right now?

The reason is that I want to create a better conflict resolution mechanism in the spreadsheet. With the solution we have built now, any change to a record by anybody will cause you to lose your changes in Excel. Wouldn't it be nice if we were able to determine that we only changed the location code in Excel – so that just because somebody changed the credit limit on a couple of customers, we could still re-apply our changes without having to type them in again?

This is what R2 step 2 is all about – a better conflict resolution mechanism.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Edit In Excel – bug fix and R2

If you haven't read the 4 step walkthrough of how to Edit In Excel from Microsoft Dynamics NAV, you should do so here – this post is a follow-up to the original posts.

I have received a number of suggestions for what you could do with Edit In Excel, and a single bug report. In this post I will fix the bug and explain what R2 is all about.

The Bug

The bug description is that if you take my binaries and use them on a machine without regional settings = US, you will get an exception (thanks Waldo).

Whether this is my bug or whether Excel behaves strangely I will leave to the reader, but the fact is that if I have created a VSTO template for Excel on a machine with regional settings = US, then you cannot create a spreadsheet based on that template from code if your computer is not set up the same way.

The easiest fix I found for this problem (and now somebody might say that I am a hacker) is to temporarily set the current culture to en-US while opening the spreadsheet – then everything seems to work.

So, change this line:

excelApp.Workbooks.Add(template);

to

// Set the current culture to en-US when adding the template to avoid an exception if running on a non-US computer
System.Globalization.CultureInfo orgCulture = System.Threading.Thread.CurrentThread.CurrentCulture;
System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("en-US");

// Create a new spreadsheet based on the new template
excelApp.Workbooks.Add(template);

// Restore culture in current thread
System.Threading.Thread.CurrentThread.CurrentCulture = orgCulture;
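
If you want to be extra defensive, the same fix can be wrapped in try/finally so the culture is restored even if Workbooks.Add throws – a variant sketch of the code above:

// Same calls as above, but the culture is restored even on an exception
System.Globalization.CultureInfo orgCulture = System.Threading.Thread.CurrentThread.CurrentCulture;
try
{
    System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("en-US");
    excelApp.Workbooks.Add(template);
}
finally
{
    System.Threading.Thread.CurrentThread.CurrentCulture = orgCulture;
}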

I have updated the original binaries with this fix and they can be downloaded from http://www.freddy.dk/NAVTemplate_Final.zip

R2

One suggestion I have received a number of times is whether it is possible to save the spreadsheet with customers locally – to have a local spreadsheet connected to your NAV that you can load and work with later.

So – I set out to fix this – shouldn’t be a biggy (I thought), but it turned out that there are a number of things you need to take into consideration. It isn’t a total rewrite, but it is a pretty significant change, so I decided that it was time for Edit In Excel R2.

Stay tuned – Edit In Excel R2 coming up in a few days.

 

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Integration to Virtual Earth – Part 3 (out of 4)

If you haven’t read Part 1 and Part 2, you should do so before continuing here. In Part 1 we saw how to add geographical information to your customer table and how to connect to the Virtual Earth geocode web service, and in part 2 we created a simple web site showing a map and adding pushpins.

Now we want to add an action to the customer card, showing us the location of the customer and other customers in the surrounding area.

Parameters to the website

The web site we created in Part 2 takes 3 optional parameters: latitude, longitude and zoom.

If you try to type in

http://localhost/mysite/default.htm?latitude=0&longitude=0&zoom=6

you will get something like this:

[screenshot]

Which is location 0,0 and a zoom factor of 6.

Now the only thing we need to do in NAV is to create an action, which launches this web site with the location set to the coordinates of the customer and set the zoom factor to e.g. 10. The web site will then connect back to web services and place push pins where the customers are and that part already works.

Adding the action

In the customer card open the site actions and add an action called View Area Map:

[screenshot]

Switch to code view and add the following line to the action:

HYPERLINK('http://localhost/mysite/default.htm?latitude=' + FORMAT(Latitude,0,2) + '&longitude=' + FORMAT(Longitude,0,2) + '&zoom=10');

FORMAT(xxx,0,2) ensures that the latitude and longitude are in the correct format, and when you launch the action on The Cannon Group you will get:

[screenshot]

A nice map of the surroundings and if we hover over the customers we can see more information about the customer.

I did not upload the binaries for this to any site – since it really is just adding this one line of code to a new action.

Part 4 of the Virtual Earth Integration will have to wait a little. It uses SP1 features and we are not allowed to blog about SP1 features until the Statement of Direction is published and that should be the case end of March 2009 – so stay tuned for the round-up of Virtual Earth Integration beginning of April 2009.

 

Enjoy

 

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV