Handling Sales Orders from Page based Web Services – in NAV 2009SP1 (and RTM)

First of all, there isn't going to be a new post on every single record type and how to handle it from Web Services – but Sales Orders are special, and the "(and RTM)" in the title refers to the fact that there are changes between RTM and SP1. Maybe a better way to state it is that the way you could do it in RTM (which might lead to errors) is no longer possible – so you have to do it right.

Secondly, please read the post about the Web Service Changes in SP1 before reading this post – you can find that post here.

Working with the Sales Orders Page from Web Services in NAV 2009SP1

Just to recap a couple of facts about Page based Web Services: we use the pages and the code on the pages to work with sales orders. This means that we need to mimic the way the RoleTailored Client works – and the RoleTailored Client doesn't create the order header and all the lines in one go before writing anything to the database. Instead, what really happens is that once you leave the primary key field (the Order No.), it creates the Order Header in the table. The same goes for the lines: they are created first, then you type data into them, after which they are updated.

So what we need to do is to create the sales order in 4 steps:

1. Create the Order Header
2. Update the Order Header
3. Create an Order Line
4. Update an Order Line
(repeat steps 3-4)

Now this doesn't mean that you have to do 2 + (no. of order lines)*2 roundtrips to the server (fortunately) – you always need just 3 roundtrips:

1. Create the Order Header
2. Update the Order Header and Create all Order Lines
3. Update all Order Lines

meaning that you can create all order lines in one go (together with updating header info) and you can update them all in one go.

A code sample for doing this:

// Create Service Reference
var service = new SalesOrder_Service();
service.UseDefaultCredentials = true;

// Create the Order header
var newOrder = new SalesOrder();
service.Create(ref newOrder);

// Update Order header
newOrder.Sell_to_Customer_No = "10000";
// Create Order lines
newOrder.SalesLines = new Sales_Order_Line[2];
for (int idx = 0; idx < 2; idx++)
    newOrder.SalesLines[idx] = new Sales_Order_Line();
service.Update(ref newOrder);

// Update Order lines
var line1 = newOrder.SalesLines[0];
line1.Type = SalesOrderRef.Type.Item;
line1.No = "LS-75";
line1.Quantity = 3;
var line2 = newOrder.SalesLines[1];
line2.Type = SalesOrderRef.Type.Item;
line2.No = "LS-100";
line2.Quantity = 3;
service.Update(ref newOrder);

After invoking Create(ref newOrder) or Update(ref newOrder) we get the updated sales order back from NAV, and we know that all the <field>Specified properties are set to true and all strings that have a value are not null – so we can just update the fields we want to change and call Update(ref newOrder) again, utilizing the fact that SP1 only updates the fields that actually have changed.

This behavior is quite different from NAV 2009 RTM web services, where all fields will be written to the database unless you set string fields to NULL or set <field>Specified to false (as described in my previous post).

Making the above code run in NAV 2009RTM

What we really want to do here, is to mimic the behavior of NAV 2009 SP1 in RTM – without having to change the logic.

So I went ahead and wrote two functions. One makes a copy of a record (to be invoked right after your Create(ref xx) or Update(ref xx)) – so that we have a copy of the object coming from NAV. The other prepares our object for Update (to be invoked right before calling Update(ref xx)) – it compares our object with the old copy, sets <field>Specified to false for all unchanged fields and sets all unchanged string fields to null.

The two functions are listed towards the end of this post.

Our code from above would then look like:

// Create Service Reference
var service = new SalesOrder_Service();
service.UseDefaultCredentials = true;

// Create the Order header
var newOrder = new SalesOrder();
service.Create(ref newOrder);
SalesOrder copy = (SalesOrder)GetCopy(newOrder);

// Update Order header
newOrder.Sell_to_Customer_No = "10000";
// Create Order lines
newOrder.SalesLines = new Sales_Order_Line[2];
for (int idx = 0; idx < 2; idx++)
    newOrder.SalesLines[idx] = new Sales_Order_Line();
PrepareForUpdate(newOrder, copy);
service.Update(ref newOrder);
copy = (SalesOrder)GetCopy(newOrder);

// Update Order lines
var line1 = newOrder.SalesLines[0];
line1.Type = SalesOrderRef.Type.Item;
line1.No = "LS-75";
line1.Quantity = 3;
var line2 = newOrder.SalesLines[1];
line2.Type = SalesOrderRef.Type.Item;
line2.No = "LS-100";
line2.Quantity = 3;
PrepareForUpdate(newOrder, copy);
service.Update(ref newOrder);

and this code would actually run on SP1 as well – and cause smaller packages to be sent over the wire (not that I think that is an issue).

Deleting a line from an existing Sales Order

Now we have seen how to create a Sales Order with a number of lines – but what if you want to delete a line after having saved the Sales Order? On the Service object you will find a method called Delete_SalesLines, which takes a key and deletes that Sales Line.

service.Delete_SalesLines(line.Key);

The only caveat is that if you want to do any more work on the Sales Order, you will have to re-read it – otherwise you will get an error stating that somebody changed the record (and that somebody would be you).

So deleting all lines from a Sales Order could be done by:

foreach (Sales_Order_Line line in so.SalesLines)
    service.Delete_SalesLines(line.Key);

and then you would typically re-read the Sales Order with the following line:

so = service.Read(so.No);

That wasn’t so bad.

My personal opinion is that we should change the Delete_SalesLines to be:

service.Delete_SalesLines(ref so, line);

Which is why I created a function that does exactly that:

void Delete_SalesLines(SalesOrder_Service service, ref SalesOrder so, Sales_Order_Line line)
{
    Debug.Assert(so.SalesLines.Contains<Sales_Order_Line>(line));
    service.Delete_SalesLines(line.Key);
    so = service.Read(so.No);
}

Note that I just re-read the order, losing any changes you have made to the order or order lines. Another approach would be to remove the deleted line from the lines collection, but things become utterly complicated when we try to mimic in the consumer a behavior that IMO should be on the server side.

A different approach would be to create a codeunit for deleting lines and expose it as an extension to the page (functions added to the page), but we wouldn't gain anything, since we would still have to re-read the order afterwards.

Adding a line to an existing Sales Order

Things are more complicated when we want to add a line to an existing Sales Order through the Sales Order Page.

Actually it isn't complicated to add the line – but it is complicated to locate the newly added line afterwards in order to modify it, because it is still true that you need to add the line first and then modify it afterwards (and update the order).

Adding the line is:

// Create a new Lines array with the existing lines plus one new line, and update (meaning create the line)
so.SalesLines = so.SalesLines.Concat(new [] { new Sales_Order_Line() }).ToArray();
service.Update(ref so);

This adds a new line to the array of lines and updates the order.

After invoking update the newly added line is the last in the array (unless somebody messed around in the app-code and made this assumption false).

My personal opinion is that we should add another method to the service called

service.Add_SalesLines(ref so, ref line);

so that we would have the newly added line available to modify and the so available for service.Update(ref so), which is why I created a function that does exactly that:

Sales_Order_Line AddLine(SalesOrder_Service service, ref SalesOrder so)
{
    // Create a new Lines array with the existing lines plus one new line, and update (meaning create the line)
    so.SalesLines = so.SalesLines.Concat(new [] { new Sales_Order_Line() }).ToArray();
    service.Update(ref so);
    return so.SalesLines[so.SalesLines.Length - 1];
}

Again – if this method existed server side, automatically added by NAV WS, it would be able to do the right thing even if somebody had changed the application logic and altered the line numbering sequence or whatever.

A different approach would be to create a codeunit for adding lines and expose it as an extension to the page (functions added to the page). It wouldn't make the consumer's job much easier, since we would still have to have any updates to the Sales Order written before calling the function AND we would have to re-read the sales order after calling the function.

Remember that if you are using this function from NAV 2009 RTM, you might want to consider using PrepareForUpdate before AddLine and GetCopy after AddLine, just as you would do with Update. You could even add another parameter and have the function do that itself.

Working with other Header/Line objects

Although the samples in this post are using the Sales Orders, the same pattern can be reused for other occurrences of the Header/Line pattern. Just remember that the Header needs to be created first, then you can update the header and create the lines – and last (but not least) you can update the lines.
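As an illustration, here is a hedged sketch applying the same three roundtrips to a hypothetical PurchaseOrder page exposed as a web service – all proxy, type and field names below are assumptions (they depend on the page you expose and the name you give the reference), not something this post's download provides:

```csharp
// Sketch only – PurchaseOrder_Service, PurchaseOrder, Purchase_Order_Line
// and the field names are assumed names for a page based web service reference.
var purchService = new PurchaseOrder_Service();
purchService.UseDefaultCredentials = true;

// 1. Create the header
var po = new PurchaseOrder();
purchService.Create(ref po);

// 2. Update the header and create the lines
po.Buy_from_Vendor_No = "10000";
po.PurchLines = new Purchase_Order_Line[1];
po.PurchLines[0] = new Purchase_Order_Line();
purchService.Update(ref po);

// 3. Update the lines
po.PurchLines[0].Type = PurchaseOrderRef.Type.Item;
po.PurchLines[0].No = "LS-75";
po.PurchLines[0].Quantity = 2;
purchService.Update(ref po);
```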

GetCopy and PrepareForUpdate

Here is a code listing of GetCopy and PrepareForUpdate – I have tested these functions on a number of different record types, and they should work generically.

/// <summary>
/// Get a copy of a record for comparison use afterwards
/// </summary>
/// <param name="obj">the record to copy</param>
/// <returns>a copy of the record</returns>
object GetCopy(object obj)
{
    Type type = obj.GetType();
    object copy = Activator.CreateInstance(type);
    foreach (PropertyInfo pi in type.GetProperties())
    {
        if (pi.PropertyType.IsArray)
        {
            // Copy each object in an array of objects (guard against a null array)
            Array arr = (Array)pi.GetValue(obj, null);
            if (arr != null)
            {
                Array arrCopy = Array.CreateInstance(arr.GetType().GetElementType(), arr.Length);
                for (int arrIdx = 0; arrIdx < arr.Length; arrIdx++)
                    arrCopy.SetValue(GetCopy(arr.GetValue(arrIdx)), arrIdx);
                pi.SetValue(copy, arrCopy, null);
            }
        }
        else
        {
            // Copy each field
            pi.SetValue(copy, pi.GetValue(obj, null), null);
        }
    }
    return copy;
}

/// <summary>
/// Prepare record for update
/// Set <field> to null if a string field hasn't been updated
/// Set <field>Specified to false if a non-string field hasn't been updated
/// </summary>
/// <param name="obj">record to prepare for update</param>
/// <param name="copy">copy of the record (a result of GetCopy)</param>
void PrepareForUpdate(object obj, object copy)
{
    Debug.Assert(obj.GetType() == copy.GetType());
    Type type = obj.GetType();
    PropertyInfo[] properties = type.GetProperties();
    for (int idx = 0; idx < properties.Length; idx++)
    {
        PropertyInfo pi = properties[idx];
        if (pi.Name != "Key" &&
            pi.GetCustomAttributes(typeof(System.Xml.Serialization.XmlIgnoreAttribute), false).Length == 0)
        {
            if (pi.PropertyType.IsArray)
            {
                // Compare an array of objects – recursively
                Array objArr = (Array)pi.GetValue(obj, null);
                Array copyArr = (Array)pi.GetValue(copy, null);
                for (int objArrIdx = 0; objArrIdx < objArr.Length; objArrIdx++)
                {
                    object arrObj = objArr.GetValue(objArrIdx);
                    PropertyInfo keyPi = arrObj.GetType().GetProperty("Key");
                    string objKey = (string)keyPi.GetValue(arrObj, null);
                    for (int copyArrIdx = 0; copyArrIdx < copyArr.Length; copyArrIdx++)
                    {
                        object arrCopy = copyArr.GetValue(copyArrIdx);
                        if (objKey == (string)keyPi.GetValue(arrCopy, null))
                            PrepareForUpdate(arrObj, arrCopy);
                    }
                }
            }
            else
            {
                object objValue = pi.GetValue(obj, null);
                if (objValue != null && objValue.Equals(pi.GetValue(copy, null)))
                {
                    // Values are the same – signal no change
                    if (pi.PropertyType == typeof(string))
                    {
                        // Strings don't have a <field>Specified property – set the field to null
                        pi.SetValue(obj, null, null);
                    }
                    else
                    {
                        // The <field>Specified property is autogenerated by Visual Studio as the next property
                        idx++;
                        PropertyInfo specifiedPi = properties[idx];
                        // Exception if this assumption for some reason isn't true
                        Debug.Assert(specifiedPi.Name == pi.Name + "Specified");
                        specifiedPi.SetValue(obj, false, null);
                    }
                }
            }
        }
    }
}

That's it – not as straightforward as you might have wished for, but SP1 definitely makes things easier once it comes out.

And I will be pushing for a better programming model for this in v7.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Web Services changes in NAV 2009 SP1

NAV 2009 SP1 is being released later this year, so why write about it now?

The main reason is that NAV 2009 SP1 comes with a couple of changes you might want to take into consideration when writing against NAV 2009 Web Services.

Apart from some performance enhancements, the major changes are:

New standard encoding of the Company name

I think one of the questions asked most often is: how does my company name look in the URL http://server:7047/DynamicsNAV/WS/<company>/ ?

We all know that "CRONUS International Ltd." becomes "CRONUS_International_Ltd". Less well known is that "CRONUS USA, Inc." becomes "CRONUS_USA_x002C__Inc" – and there are a lot of special cases in this encoding algorithm.

In NAV 2009 SP1 we change to the standard Uri.EscapeDataString function, meaning that

CRONUS International Ltd. becomes CRONUS%20International%20Ltd.

CRONUS USA, Inc. becomes CRONUS%20USA%2C%20Inc.

and

CRONUS+,æøå becomes CRONUS%2B%2C%C3%A6%C3%B8%C3%A5

In fact, you can just type the company name with its special characters into the URL in the browser, and it will (in most cases) figure out how to select the right company – this even works in Visual Studio when creating your Web Reference (though it won't work if you have / or ? in the company name).

By the way, you can escape all the characters if you wish to:

http://localhost:7047/DynamicsNAV/WS/%43RONUS%20International%20Ltd./Services

is a perfectly good URL – if you prefer hex over characters.

This change also affects the return values of the Companies function in SystemService – it now returns the un-encoded company names (i.e. clear text). You can no longer use the output from the Companies function directly to build your URL – you need to escape the name.
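So in SP1, consuming the Companies output could look like this – a small sketch assuming a Web Reference to the SystemService has been added under the name SystemServiceRef (the reference name and server address are my assumptions):

```csharp
// Sketch: escape the clear-text company names before using them in a URL.
// SystemServiceRef and the base URL are assumptions for this example.
var systemService = new SystemServiceRef.SystemService();
systemService.UseDefaultCredentials = true;
foreach (string company in systemService.Companies())
{
    // Uri.EscapeDataString is the encoding NAV 2009 SP1 expects
    string url = "http://server:7047/DynamicsNAV/WS/"
               + Uri.EscapeDataString(company) + "/Services";
    Console.WriteLine(url);
}
```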

Note: There is no backwards compatibility – trying to access web services with a URL from NAV 2009 will fail; you need to change the company name encoding.

Schema changes in ReadMultiple and UpdateMultiple

Microsoft InfoPath couldn't modify multiple records using the NAV 2009 page based Web Services due to a schema incompatibility. In NAV 2009 SP1 the schema changes for these methods. If you are using proxy based Web Service access (Add Service Reference or Add Web Reference in Visual Studio) you should just update the reference. If you are using XML Web Services directly you might have to modify the code used to parse the XML.

I will of course modify the samples on my blog where I use XPath to query the XML.

Updating records in Page based web services only updates the fields that you actually changed

The basics of XML Web Services is that you send an XML document to a web service telling it what you want to change. Visual Studio makes it easy to create a reference to a Web Service and get strongly typed access to e.g. Customers and Sales Orders through pages.

But how do we tell Web Services which fields actually changed?

For this, Visual Studio autogenerates a <field>Specified boolean property for all non-string fields from NAV, and we will change ALL the fields where <field>Specified is true or where a string is not NULL – NULL in a string value doesn't mean clear the field, it means don't update the field.

If you want to clear a field, set the value to String.Empty ("").
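To make the distinction concrete, here is a small sketch – field names like Address and Address_2 are my assumptions about what the exposed Customer page contains:

```csharp
// Sketch: "" clears a field, null leaves it untouched (NAV 2009 semantics).
Customer customer = custService.Read("10000");
customer.Address = "";     // empty string: clear the Address field in NAV
customer.Address_2 = null; // null: don't update the Address 2 field at all
custService.Update(ref customer);
```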

In some cases this has caused problems – primarily because when you read a customer record in order to change its name, it comes back from the Read function with all <field>Specified set to TRUE and all string fields populated. Changing the Name of a customer then writes the Name – and since the Search Name is included in the data sent to Web Services, it gets written with its old value as well (meaning that Name and Search Name could get out of sync).

In NAV 2009 SP1 that has changed. Visual Studio of course still uses <field>Specified and string field <> NULL to determine what comes over the wire, but on the NAV side we only persist what you actually changed, so in NAV 2009 SP1 you can do:

Customer customer = custService.Read("10000");
customer.Name = "The Cannon Group, Inc.";
custService.Update(ref customer);

and it will only update the name of the Customer. In NAV 2009 you would have to either set all the other fields of the customer to NULL or set <field>Specified to false to get the same behavior – OR you could do it like this:

Customer readCustomer = custService.Read("10000");
Customer updateCustomer = new Customer();
updateCustomer.Key = readCustomer.Key;
updateCustomer.Name = "The Cannon Group, Inc.";
custService.Update(ref updateCustomer);

This also updates only the name. (Just a small trick: instantiating a new Customer() leaves all string fields set to NULL and <field>Specified set to false for the other fields – so we can just set the fields we want to change. Remember to set <field>Specified to true for any non-string fields you do set.)

Note that this will of course work in SP1 as well and the advantage here is that you actually only send the new customer name over the wire to the Web Service.

Changes to how you Create and Update Sales Orders through a page based Web Service

Actually, the way you need to work with Sales Orders through a page based Web Service in NAV 2009 SP1 will also work in NAV 2009 – but the other way around is a problem. In NAV 2009 you could create a sales order with lines in just one call to Web Services, but in reality this didn't work reliably; you need to do it in a couple of roundtrips.

This is because application code (CurrPage.UPDATE) relies on one kind of transactional behavior (the new order is inserted and committed before any validation trigger starts), whereas Web Services enforce a different kind (every call to the server is wrapped in one atomic transaction that is either committed or rolled back entirely – meaning that the insert is not committed until all validation triggers have passed).

I will create a special post on how to work with Sales Orders from Web Services – and try to show a way, which works for NAV 2009 SP1 (the same method will work for NAV 2009 as well – so you should consider this early).

Web Services don't change the user's default company

In NAV 2009, a Web Service consumer application would change the user's default company. Since Web Services don't really use the notion of a default company for anything, this seemed confusing – and it made it impossible for a web service consumer application to query the user's default company without changing it. In NAV 2009 SP1, invoking a Web Service method does not change the default company for the user.

Blob fields in Page based Web Services are ignored

In NAV 2009 you couldn't expose a page as a Web Service if it had a BLOB field on it (an image or whatever).

In NAV 2009 SP1, this has changed. It doesn't mean that NAV transfers the content of the BLOB to the web service consumer – the field is simply ignored.

If you want access to the value of the Blob you will need to write some code, which you can read something about here :

http://blogs.msdn.com/freddyk/archive/2008/11/04/transferring-data-to-from-com-automation-objects-and-webservices.aspx

Codeunits ignore methods with unsupported parameters

Like pages with unsupported fields (BLOB), codeunits can now be exposed as Web Services even though they contain methods with parameter types that are not supported by Web Services – such as streams, automations, BLOBs and more. Those methods are simply ignored.

In SP1 you can connect to NAV Web Services from both PHP and Java

I won't cover the details in this post, but it should be clear that NAV Web Services will be accessible from PHP and Java come SP1. As soon as I have a build which supports this, I will write a couple of posts on how to do it.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Installing NAV 2009 without A/D

The majority of NAV 2009 installations will be in a network environment: you will have a domain server and Active Directory with your users, and your biggest worries will be how to set up a 3T environment on three different boxes, getting delegation set up correctly – all that stuff.

But in a few cases we might run into situations where there is only a LAN between a number of machines and no server.

In this post I will show what it takes to have one XP computer run the RoleTailored Client (or the Classic Client) while another XP computer acts as the Database Tier and Service Tier – and of course runs a client too.

WorkXP1 and WorkXP2

First of all, I have installed two XP computers with XP SP3 and nothing else – no Office, no Visual Studio, nothing – just plain good old XP.

On both XP machines I have used the "Set up a home or small office network" wizard to make the computers join a workgroup and be able to see each other.

[screenshot]

The computers are named WorkXP1 and WorkXP2 – and there is one user on each computer: XP1 on WorkXP1 and XP2 on WorkXP2.

So what I want to do is install the full NAV 2009 on WorkXP1 and only the clients on WorkXP2 – and make WorkXP2 run with WorkXP1 as the Service Tier and Database Tier for the RTC and the Classic Client. I will also verify that I can in fact use Web Services from WorkXP2.

I did not try to install the database tier separate from the Service Tier, as I do not think it is relevant in a scenario like this.

Installing the full NAV2009 on WorkXP1

Note that I cannot just insert the DVD and press Install Demo, because I don't have Microsoft Outlook installed. Instead I select "Choose an installation option", then Customize under Server (or Client or Database Components), and select the following options:

[screenshot]

and then just accept the default setup values and install.

Installing the Clients on WorkXP2

On WorkXP2 we install both clients – again, "Choose an installation option", select Customize under Client, and select to run the Classic Client from My Computer.

[screenshot]

After selecting Next you will need to click the RoleTailored Client link and specify the Service Tier computer (which is WorkXP1).

[screenshot]

Installing…

If you wonder how – I am running this experiment as two virtual machines on a Windows Server 2008 box running Hyper-V.

[screenshot]

[screenshot]

Workgroup networking

As you might have guessed – starting the clients on WorkXP1 just works, and trying the same on WorkXP2 doesn’t.

We need to do 3 things in order to make this work…

1. Create the XP2 user on WorkXP1

The way workgroups work is to authenticate the XP2 user on WorkXP1 using his own username and password, so you need to create all the users on the "server" – and note that they need to have the same password.

[screenshot]

After this you of course need to create the user in NAV (using the Classic Client on WorkXP1) and give him the right permissions (in my world everybody is SUPER).

Having done this, we try to start the Classic Client on the WorkXP2 machine and get the following result:

[screenshot]

This means that the firewall is blocking access to the SQL Server.

2. Open the firewall for SQLSERVR (if you want to run Classic)

On WorkXP1 open the Control Panel and open Windows Firewall. Click Add Program, browse to SQLSERVR.EXE under C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn, and add it to the list of exceptions:

[screenshot]

At the top of the list of exceptions you will, BTW, find DynamicsNAVServer – which means that the RoleTailored Client doesn't actually need this setting.

Trying to connect from the Classic Client now will give a different error:

[screenshot]

The reason for this is that XP by default runs Simple File Sharing, meaning that everybody logs in as Guest on the other computer instead of logging in with their own username and password.

3. Disable Simple File Sharing on WorkXP1

On WorkXP1, in the Control Panel or any Explorer window, select Tools -> Folder Options, go to the View tab and remove the checkmark from "Use simple file sharing".

[screenshot]

After having done this, you should be able to connect using the Classic Client – and the RoleTailored Client should now work as well.

[screenshot]

 

It is probably no different from what you would do with NAV 5.0 – but a number of people have asked me whether or not it is possible – and it is.

BTW – Web Services work in exactly the same way. If you start the Web Service listener on WorkXP1, you can navigate to

http://WorkXP1:7047/DynamicsNAV/WS/CRONUS%20International%20Ltd/Services (this is the SP1 URL)

You might have to open the firewall for port 7047 if it doesn’t pick up that DynamicsNAVServer is listening on both ports.

 

Enjoy

 

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Using LINQ with NAV Web Services

In .NET 3.5 Microsoft released LINQ (Language Integrated Query) – a way of writing strongly typed select statements directly in your C# code.

Wouldn’t it be nice if we could leverage this technology when connecting to NAV Web Services?

I thought YES – so I set out to find out what it took…

The goal was to be able to replace code like this:

CustomerRef.Customer_Service cservice = new CustomerRef.Customer_Service();
cservice.UseDefaultCredentials = true;

CustomerRef.Customer_Filter filter = new CustomerRef.Customer_Filter();
filter.Field = CustomerRef.Customer_Fields.Location_Code;
filter.Criteria = "=YELLOW|=BLUE";
CustomerRef.Customer_Filter[] filters = new CustomerRef.Customer_Filter[] { filter };

CustomerRef.Customer[] customers = cservice.ReadMultiple(filters, null, 0);

foreach (CustomerRef.Customer customer in customers)
{
// do stuff
}

with code like this:

CustomerRef.Customer_Service cservice = new CustomerRef.Customer_Service();
cservice.UseDefaultCredentials = true;

var cquery = from c in new FreddyK.NAVPageQuery<CustomerRef.Customer>(cservice)
             where c.Location_Code == "YELLOW" || c.Location_Code == "BLUE"
             select c;

foreach (CustomerRef.Customer customer in cquery)
{
// do stuff
}

Which I personally find more readable and has a lot of other advantages:

  1. Strongly typed criteria – if fields change type you get a compile error after updating the Web Reference
  2. Don’t have to think about CultureInfo (NumberFormat and DateFormat) when doing your criteria
  3. Don’t have to duplicate quotes for your search string
  4. Don’t have to convert booleans into Yes/No
  5. Separation of Query and Values in the Query
  6. and probably a lot more…

I always heard that you only needed 3 good reasons for doing something – so this should be more than enough.

Understanding LINQ

Let's look at the code above. The following

CustomerRef.Customer_Service cservice = new CustomerRef.Customer_Service();
cservice.UseDefaultCredentials = true;

initializes the service (as usual), and

var cquery = from c in new FreddyK.NAVPageQuery<CustomerRef.Customer>(cservice)
             where c.Location_Code == "YELLOW" || c.Location_Code == "BLUE"
             select c;

creates a query (which contains an expression tree of the where clause) and

foreach (CustomerRef.Customer customer in cquery)
{
// do stuff
}

invokes the query (calls Web Services ReadMultiple) and returns the customers.

FreddyK.NAVPageQuery is a class, which is capable of acting as a LINQ provider for all pages exposed as Web Services.

The NAVPageQuery class is a generic class where you specify the type returned by the service – and you pass the service object as a parameter to the constructor. Needless to say, these two must match; otherwise you will get an exception.

For a class to be usable in a LINQ query it needs to implement IQueryable<type> (which is a combination of IQueryable, IEnumerable<type> and IEnumerable) – and typically you will then have another class which implements IQueryProvider. In my sample I have implemented both interfaces in NAVPageQuery.

Instead of listing the interfaces, I will list what happens when the query is built (when the line with var cquery = is executed):

  1. The NAVPageQuery gets instantiated
  2. LINQ calls the method IQueryProvider IQueryable.Provider to get the IQueryProvider class
  3. LINQ calls the method System.Linq.Expressions.Expression IQueryable.Expression to get the initial expression
  4. Now LINQ builds the expression tree and calls IQueryable<S> IQueryProvider.CreateQuery<S>(Expression expression), from which it expects an IQueryable<type> object.

Now we are done building the query, and our variable cquery is of type IQueryable<Customer>.

When we then do a foreach on cquery a little later, the following happens:

  1. foreach calls IEnumerator<T> IEnumerable<T>.GetEnumerator() and asks for an enumerator
  2. In this method, I call the Execute method (TResult IQueryProvider.Execute<TResult>(Expression expression)) on the Provider (which is ‘this’)
  3. In the Execute method I run through the expression tree, build a number of Filter objects, collect them into an array of filters and call ReadMultiple.
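To give an idea of what step 3 involves, here is a heavily simplified sketch of turning a single equality comparison into a filter – this is not the actual NAVPageQuery implementation (which also handles other operators, ORs on the same field, culture formatting and so on); the helper name ToFilter is mine:

```csharp
// Sketch: translate an expression like 'c.Location_Code == "YELLOW"'
// into a Customer_Filter. CustomerRef.* are the proxy types from the Web Reference.
CustomerRef.Customer_Filter ToFilter(BinaryExpression comparison)
{
    // The left side must be a property access on the range variable, e.g. c.Location_Code
    var member = (MemberExpression)comparison.Left;
    // The right side must be a constant value, e.g. "YELLOW"
    var constant = (ConstantExpression)comparison.Right;
    return new CustomerRef.Customer_Filter
    {
        // The Customer_Fields enum values carry the same names as the properties
        Field = (CustomerRef.Customer_Fields)Enum.Parse(
                    typeof(CustomerRef.Customer_Fields), member.Member.Name),
        Criteria = "=" + constant.Value
    };
}
```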

You could argue that it would be a good idea to run through the expression tree while building the query – but the primary reason for NOT doing so is that you could imagine queries like:

decimal amount = 10000;

var cquery = from c in new FreddyK.NAVPageQuery<CustomerRef.Customer>(cservice)
where c.Balance_LCY > amount
select c;

In this case you want to separate building the query and creating the filters, since you could:

foreach (CustomerRef.Customer customer in cquery)
{
// do stuff
}

amount = 100000;
foreach (CustomerRef.Customer customer in cquery)
{
// do other stuff
}

This demonstrates the need to decouple the query from the values used by the query – and this is the reason for not creating the filter specs while building the query, but keeping the expression tree instead.

Another sample:

cquery = from c in new FreddyK.NAVPageQuery<CustomerRef.Customer>(cservice)
where c.Last_Date_Modified >= DateTime.Today.AddDays(-5)
select c;

At the time of execution, this query will give you the customers modified within the last 5 days (imagine building that one with filter strings – not exactly easy).

Important things to notice

It is important to realize that NAVPageQuery doesn't give you any functionality you didn't have before. With the original Filter class you cannot OR things together (unless it is the same field) – and you cannot do that using LINQ (against NAV Web Services) either.

If you try to type in a where clause like

where c.Location_Code == "YELLOW" || c.Country_Region_Code == "US"

It will fail – intentionally!

Of course I could have done client-side filtering – but that would only fool the developer, since you would take the performance hit when calling ReadMultiple without really noticing. I wanted the developer to be aware when he makes clever (or not so clever) choices.

So – the where clause needs to be simple enough so that I can convert it into a filter specification.

The error is thrown when you try to use the query, and it will typically be an exception stating "Query too complex" (even though it might not be complex at all).

Another thing to notice is that ANDs and ORs cannot be combined (as this isn't possible in the filter spec):

where c.Location_Code == "YELLOW" && (c.Balance_LCY > amount || c.Location_Code == "BLUE")

is not allowed, but

where c.Balance_LCY > amount && (c.Location_Code == "YELLOW" || c.Location_Code == "BLUE") && c.No != ""

works just fine.

What about updating, deleting and creating?

Nothing new there – you still have your service object, and the Customer objects returned from the LINQ query are the same objects you would get from calling ReadMultiple directly.

This class is only designed to make queries against web services easier and less error-prone.

That’s all good – but how is it made?

For once, I am not going to go through how NAVPageQuery is created in detail – the source code is commented pretty thoroughly, and if you need to understand how it is built, you can study the source combined with the information above on how things are executed. It should be possible to decrypt it:-)

If you download the solution .zip file here, you will find a class library with one class (NAVPageQuery) and a console application called TestQuery. The console app contains a number of samples – some of which are listed above – and can be deleted if you don’t need it.

When downloading the test app, you need to update the web references (so that they match your exposed pages). I know that I added Last_Date_Modified to the Customer Page in order to have access to that field through web services – there might be more changes like that.

Questions and comments are welcome, I prefer to answer questions on mibuso instead of here – since it typically makes the answer available to more people.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Edit In Excel R2 – Part 1 (out of 2)

This post assumes that you have read the 4 step walkthrough of how to build the Edit In Excel demo from November 2008. You can find the parts here: Part 1, Part 2, Part 3, Part 4 and the Bug Fix.

In this post I will talk about what is needed in order to be able to save an Excel spreadsheet on a local disc, edit it offline and then submit your changes later.

My first assumption was that this would only take a few small changes, but I was to learn more…

Goal

The success scenario is the one where a NAV user decides to take the customer table offline in Excel to do modifications. The Excel spreadsheet is saved to a local hard drive and edited while not connected to the network.

When the user reconnects to the network, he can submit his changes, which will be sent to NAV through Web Services and validated according to business rules.

The location of NAVTemplate.vsto

Let’s just try to save the spreadsheet in our Documents folder and see what happens when we open it again. Not surprisingly, we get an error telling us that it was unable to locate NAVTemplate.vsto:

image

An immediate solution is to save the spreadsheet next to the .vsto, and then things seem to work better – but the spreadsheet is no longer attached to NAV: we cannot save changes, and Reload crashes with an exception.

The .vsto file is our deployment manifest, which points to our managed code extension to the spreadsheet – and of course this is needed in order for the spreadsheet to work.

You can read more about the architecture of Document-Level Customizations here:

http://msdn.microsoft.com/en-us/library/zcfbd2sk.aspx

Looking at the ServerDocument class, which we use in the NAVEditInExcel solution, it has a property called DeploymentManifestUrl, which is the location of the .vsto file. Adding the following line to the NAVEditInExcel project

serverDoc.DeploymentManifestUrl = new Uri(System.IO.Path.Combine(System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "NAVTemplate.vsto"));

will cause the document to be created with an absolute reference to the NAVTemplate.vsto, and this will solve the problem with the spreadsheet not being able to locate the .vsto. In fact if this is a network location, it should even be possible to send the Excel spreadsheet to somebody else who should be able to modify the document and post the changes.

When doing this I decided to make another change in the NAVEditInExcel project. As you know it creates a temporary template in the same location as the DLL and the VSTO – and then it deletes it again. This is really not the preferred location for temporary files – they belong in the Temp folder – so we change the line setting the template name to:

string template = System.IO.Path.Combine(System.IO.Path.GetTempPath(), page + ".xltx");

since we don’t need to have the .xltx next to the .vsto anymore.

What do we need to save?

Members of the Sheet1 class marked with [Cached] will be saved together with the document – we know that from the way we transfer the page name and the current view to the spreadsheet. Thinking about it, the only things we need to save together with the document are the dataTable (which at any time contains the changes made by the user) and the objects collection (the objects returned from the Web Service).

The DataTable class implements IXmlSerializable, and the objects collection is (as we know) an array of objects returned from the Web Service proxy – and since these objects were sent over the wire from a Web Service, they of course also implement IXmlSerializable.

The fields collection cannot be saved, since the NAVFieldInfo class uses the PropertyInfo class, which cannot be serialized. The service connection of course cannot be serialized either – nor can the datalist class. As a matter of fact, the datalist class shouldn’t be a member at all; it should be moved to the AddDataToExcel method as a local variable.

The problem now is that if we just mark the dataTable and objects members with [Cached], we need to initialize them in the NAVEditInExcel project – and there is no way we can instantiate the objects collection with the right type at that time.

A little more on cached data objects in Office documents

http://msdn.microsoft.com/en-us/library/ms178808(VS.80).aspx

and then how to programmatically add members to the cached objects

http://msdn.microsoft.com/en-us/library/48b7eyf3(VS.80).aspx

Using this knowledge gives us the following two lines, which we want to add after we have instantiated the dataTable and the objects array:

// Add dataTable and objects to the caching collection
this.StartCaching("dataTable");
this.StartCaching("objects");

Now we just need to determine that the spreadsheet was loaded from a file (and not started from NAV) and then act differently.

Refactoring some code

We need to do a little refactoring in order to make things work. In the existing solution the PopulateFieldsCollection method creates the fields collection, but it also creates an empty dataTable. Since we now store the dataTable and not the fields collection, the first thing the new spreadsheet must do is create the fields collection (a lot of things depend on it). This is the new PopulateFieldsCollection (and AddField):

/// <summary>
/// Populate the fields collection with a NAVFieldInfo for every property in the record
/// Should work with any NAV 2009 Page exposed as a Web Service
/// </summary>
/// <param name="objType">Type of object (typeof(Customer), typeof(Vendor), ...)</param>
/// <param name="fieldsType">Type of the enum holding the property names</param>
private void PopulateFieldsCollection(Type objType, Type fieldsType)
{
    this.fields = new List<NAVFieldInfo>();

    // The Key property is not part of the enum
    // Add it manually as the first field
    AddField("Key", objType);

    // Run through the enum and add all fields
    foreach (string field in Enum.GetNames(fieldsType))
    {
        AddField(field, objType);
    }
}

/// <summary>
/// Create a NAVFieldInfo for a field
/// </summary>
/// <param name="field">Field name</param>
/// <param name="objType">Type of object in which the field resides (typeof(Customer), typeof(Vendor), ...)</param>
private void AddField(string field, Type objType)
{
    field = NAVFilterHelper.VSName(field);
    PropertyInfo pi = objType.GetProperty(field, System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance);
    if (pi != null)
    {
        NAVFieldInfo nfi = NAVFieldInfo.CreateNAVFieldInfo(field, pi, objType.Namespace);
        if (nfi != null)
        {
            // If we encounter unknown field types, they are just ignored
            this.fields.Add(nfi);
        }
    }
}

The Load function in the old spreadsheet did a number of things: it populated the fields collection, loaded the data, populated the dataTable and added the dataTable to the spreadsheet.

We will remove it and create one function called LoadDataTable, which creates a dataTable (based on the fields collection), loads the data from Web Services and populates the dataTable.

/// <summary>
/// Load records from NAV via Web Services
/// </summary>
private void LoadDataTable()
{
    // Create the DataTable object based on the fields collection
    this.dataTable = new DataTable(this.page);
    foreach (NAVFieldInfo nfi in this.fields)
    {
        this.dataTable.Columns.Add(new DataColumn(nfi.field, nfi.fieldType));
    }

    SetFilters(this.view);
    this.objects = this.service.ReadMultiple();
    // Populate the dataTable with data
    foreach (object obj in this.objects)
    {
        DataRow dataRow = this.dataTable.NewRow();
        foreach (NAVFieldInfo nfi in this.fields)
        {
            dataRow[nfi.field] = nfi.GetValue(obj);
        }
        this.dataTable.Rows.Add(dataRow);
    }
    this.dataTable.AcceptChanges();
}

As you can see, this is pieces of the old PopulateFieldsCollection, Load and PopulateDataTable functions – as a matter of fact, you can delete the PopulateDataTable function as well. BTW, the AcceptChanges call was moved from AddDataToExcel since it needs to be together with the code populating the dataTable.

Right now my startup code in my spreadsheet has changed to

if (this.service != null)
{
    this.service.UseDefaultCredentials = true;

    // Create the fields collection
    this.PopulateFieldsCollection(this.service.GetObjectType(), this.service.GetFieldsType());

    Application.ScreenUpdating = false;

    bool loadData = this.ListObjects.Count == 0;
    if (!loadData)
    {
        MessageBox.Show("Spreadsheet loaded from disc");
    }
    if (loadData)
    {
        // Load data into the dataTable from Web Services
        this.LoadDataTable();
        // Add the dataTable to the Excel spreadsheet
        this.AddDataTableToExcel();

        // Add dataTable and objects to the caching collection
        this.StartCaching("dataTable");
        this.StartCaching("objects");
    }

    Application.ScreenUpdating = true;
}

from just invoking Load() before.

So we populate the fields collection and then check whether there is a ListObject in the document (remember, the Controls collection was empty). If there is, we must do something – for now we just display a message box.

If we are called from NAV (loadData is true) we load the data and call AddDataTableToExcel (renamed from AddDataToExcel) – and that should work.

If we try to compile now, we will see that the Reload() method uses Load() as well. We need to change Reload to

/// <summary>
/// Reload data from NAV (delete the old dataTable, and load new data)
/// </summary>
internal void Reload()
{
    Application.ScreenUpdating = false;

    // Remove the list object
    if (this.dataTable != null)
        this.Controls.RemoveAt(0);
    else
        this.ListObjects[1].Delete();

    // Load data into the dataTable from Web Services
    this.LoadDataTable();
    // Add the dataTable to the Excel spreadsheet
    this.AddDataTableToExcel();

    // If this reload was in fact a reattach of the spreadsheet, start caching dataTable and objects again
    if (!this.IsCached("dataTable"))
    {
        // Add dataTable and objects to the caching collection
        this.StartCaching("dataTable");
        this.StartCaching("objects");
    }

    Application.ScreenUpdating = true;
}

Note that we remove the old ListObject in two different ways, based on whether the dataTable is set. dataTable is null if the spreadsheet has been detached from NAV – I will touch more upon that later. This is also the reason why we restart caching dataTable and objects if this was in fact a reattach.

The solution should now work as before – the only major difference is that if you save the spreadsheet to disc and try to load it again, it should no longer throw an exception telling you that you cannot overlap a table with another table; instead it should give you something like:

image

This is of course not the final solution, but it shows us that we are on the right track.

Restoring the “State” of the spreadsheet

To make a long story short, the lines we need in order to restore the managed list object are:

// Remove Non-VSTO List Object
this.ListObjects[1].Delete();
// Create a new VSTO ListObject – data bound
this.AddDataTableToExcel();

Meaning that we remove the unmanaged ListObject and add the managed ListObject to Excel – seems pretty easy. But what if the document was saved on disc and you then add a field to the Customer Page (and update your web reference and recompile NAVTemplate)? Then the managed extension assembly no longer matches the saved spreadsheet, and the above logic wouldn’t work.

In many cases we could just say that we don’t care – but the ability to save spreadsheets that are connected to Web Services and reload data adds another dimension to the entire Excel story. A spreadsheet can contain a lot of things besides your dataTable, and you might not be pleased to lose the NAV Web Service connection if this happens.

I decided to build in a way of detecting this and to give the user a couple of options:

// If the spreadsheet was detached already – just ignore
if (this.IsCached("dataTable"))
{
    // We have loaded a saved spreadsheet with data
    // Check that the VSTO assembly (fields) matches the spreadsheet
    bool fieldsOK = this.dataTable.Columns.Count == this.fields.Count;
    if (fieldsOK)
    {
        for (int i = 0; i < this.fields.Count; i++)
            if (this.dataTable.Columns[i].Caption != this.fields[i].field)
                fieldsOK = false;
    }
    if (!fieldsOK)
    {
        // Schema mismatch – cannot link back to NAV
        switch (MessageBox.Show("The Customer Card definition has changed since this spreadsheet was saved. Do you want to re-establish the link, reload data and lose your changes?", "Error", MessageBoxButtons.YesNoCancel))
        {
            case DialogResult.Cancel:
                // Quit
                Application.Quit();
                break;
            case DialogResult.Yes:
                // Remove the non-VSTO list object
                this.ListObjects[1].Delete();
                // Signal reload data and re-establish the link
                loadData = true;
                this.StopCaching("dataTable");
                this.StopCaching("objects");
                break;
            case DialogResult.No:
                // Detach the spreadsheet from NAV
                this.dataTable = null;
                this.objects = null;
                this.StopCaching("dataTable");
                this.StopCaching("objects");
                break;
        }
    }
    else
    {
        // Remove the non-VSTO list object
        this.ListObjects[1].Delete();
        // Create a new VSTO ListObject – data bound
        this.AddDataTableToExcel();
    }
}

3 options: Cancel quits the spreadsheet open command and no harm is done; Yes removes the old table and sets loadData to true (meaning that the spreadsheet will reload data as if it was opened from NAV); and No detaches the spreadsheet from NAV (setting the dataTable to null and stopping the caching of dataTable and objects). Note that if you ever press Reload, it will re-attach the spreadsheet and load data from NAV again (the reason for checking whether the dataTable was null in Reload).

Yes or No will of course not touch the original spreadsheet and you can always save in a new name.

BTW – we need one simple check in Save() as well:

/// <summary>
/// Save changes to NAV via Web Services
/// </summary>
internal void Save()
{
    if (this.dataTable == null)
    {
        MessageBox.Show("Spreadsheet was detached from NAV, cannot perform Save!");
    }
    else if (DoSave())
    {
        Reload();
    }
}

The only extra thing you need is to add the following line right before the return statement in DoSave():

this.dataTable.AcceptChanges();

Without this line you cannot edit the data, save the changes to NAV, edit more data and then save locally again.
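The reason is change tracking: DoSave sends the rows that are still marked as modified, and AcceptChanges resets that state so a later save doesn’t re-send old changes. A toy model of the behavior, sketched in JavaScript (the real implementation is ADO.NET’s DataTable/DataRow row states):

```javascript
// Minimal model of DataTable-style change tracking: a row remembers its
// original values, and acceptChanges makes the current values the new baseline.
function makeTrackedRow(values) {
    var original = Object.assign({}, values);
    return {
        values: values,
        isModified: function () {
            return Object.keys(values).some(function (k) { return values[k] !== original[k]; });
        },
        acceptChanges: function () { original = Object.assign({}, values); }
    };
}

var row = makeTrackedRow({ name: 'Cannon Group', location: 'YELLOW' });
row.values.location = 'BLUE';
// row.isModified() is true here: this change would be sent to NAV on Save
row.acceptChanges();
// row.isModified() is false now: a later local save won't re-send it
```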

Done

We are done – we now have a template you can use for modifying NAV data. You can save the spreadsheet locally and modify it while you are offline and later, when you come back online you can push your changes to NAV and this is when validation happens.

As usual you can download the project here http://www.freddy.dk/NAVTemplateR2.zip – there are no changes to what’s needed in NAV, so you should use the objects from the original Edit In Excel walkthrough inside C/AL.

BTW. If you decide to use the source, you probably need to right-click the Web References and invoke Update Web Reference to make sure that the Web Reference matches your Customer, Item and Vendor Card Pages.

Part 2?

So why is this part 1 out of 2, you might ask yourself right now?

The reason is that I want to create better conflict resolution in the spreadsheet. With the solution we have built now, any change to a record by anybody else will cause you to lose your changes in Excel. Wouldn’t it be nice if we were able to determine that we only changed the location code in Excel – so that just because somebody changed the credit limit on a couple of customers, we could still re-apply our changes without having to type them in again?

This is what R2 step 2 is all about – a better conflict resolution mechanism.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Edit In Excel – bug fix and R2

If you haven’t read the 4 step walkthrough of how to Edit In Excel from Microsoft Dynamics NAV, you should do so here – this post is a follow-up to the original posts.

I have received a number of suggestions on what you could do with Edit In Excel – and a single bug report. In this post I will fix the bug, and I will explain what R2 is all about.

The Bug

The bug is that if you take my binaries and use them on a machine where the regional settings are not US, you will get an exception (thanks Waldo).

Whether this is my bug or Excel behaving strangely I will leave to the reader, but fact is that if I have created a VSTO template for Excel on a machine with regional settings = US, then you cannot create a spreadsheet based on that template from code if your computer is not set up the same way.

The easiest fix I found for this problem (and now somebody might call me a hacker) is to temporarily set the current culture to en-US while opening the spreadsheet – then everything seems to work.

So, change this line:

excelApp.Workbooks.Add(template);

to

// Set the current culture to en-US when adding the template to avoid an exception if running on a non-US computer
System.Globalization.CultureInfo orgCulture = System.Threading.Thread.CurrentThread.CurrentCulture;
System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("en-US");

// Create a new spreadsheet based on the new template
excelApp.Workbooks.Add(template);

// Restore the culture in the current thread
System.Threading.Thread.CurrentThread.CurrentCulture = orgCulture;

I have updated the original binaries with this fix and they can be downloaded from http://www.freddy.dk/NAVTemplate_Final.zip

R2

One suggestion I have received a number of times is whether it is possible to save a spreadsheet with customers and have a local spreadsheet connected to your NAV that you can load and work with later.

So I set out to fix this – shouldn’t be a biggie (I thought), but it turned out that there are a number of things you need to take into consideration. It isn’t a total rewrite, but it is a pretty significant change, so I decided that it was time for Edit In Excel R2.

Stay tuned – Edit In Excel R2 coming up in a few days.

 

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Integration to Virtual Earth – Part 3 (out of 4)

If you haven’t read Part 1 and Part 2, you should do so before continuing here. In Part 1 we saw how to add geographical information to your customer table and how to connect to the Virtual Earth geocode web service, and in part 2 we created a simple web site showing a map and adding pushpins.

Now we want to add an action to the customer card, showing us the location of the customer and other customers in the surrounding area.

Parameters to the website

The web site we created in part 2 takes 3 optional parameters: latitude, longitude and zoom.

If you try to type in

http://localhost/mysite/default.htm?latitude=0&longitude=0&zoom=6

you will get something like this:

image

Which is location 0,0 and a zoom factor of 6.

Now the only thing we need to do in NAV is to create an action which launches this web site with the location set to the coordinates of the customer and the zoom factor set to e.g. 10. The web site will then connect back to web services and place pushpins where the customers are – and that part already works.

Adding the action

In the customer card open the site actions and add an action called View Area Map:

image

Switch to code view and add the following line to the action:

HYPERLINK('http://localhost/mysite/default.htm?latitude=' + FORMAT(Latitude,0,2) + '&longitude=' + FORMAT(Longitude,0,2) + '&zoom=10');

FORMAT(xxx,0,2) ensures that the latitude and longitude are in the correct format and when you launch the action on the Cannon Group you will get:

image

A nice map of the surroundings and if we hover over the customers we can see more information about the customer.
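For illustration, the URL that the action builds can be sketched in JavaScript like this (the C/AL FORMAT calls serve the same purpose: getting the decimals into a URL-friendly format):

```javascript
// Build the map URL the NAV action opens (localhost path as used in this post).
function buildMapUrl(latitude, longitude, zoom) {
    return 'http://localhost/mysite/default.htm' +
           '?latitude=' + latitude +
           '&longitude=' + longitude +
           '&zoom=' + zoom;
}

// e.g. for the Space Needle area with zoom level 10:
var url = buildMapUrl(47.62, -122.35, 10);
```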

I did not upload the binaries for this to any site – since it really is just adding this one line of code to a new action.

Part 4 of the Virtual Earth integration will have to wait a little. It uses SP1 features, and we are not allowed to blog about SP1 features until the Statement of Direction is published – that should happen at the end of March 2009. So stay tuned for the round-up of Virtual Earth integration at the beginning of April 2009.

 

Enjoy

 

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Integration to Virtual Earth – Part 2 (out of 4)

If you haven’t read Part 1, you should do so before continuing here. In Part 1 we saw how to add geographical information to your customer table and how to connect to the Virtual Earth geocode web service. Now we want to put this information to play.

First of all, we all know Virtual Earth (if not – try it at http://maps.live.com). Less commonly known is that the map control used on maps.live.com is actually available for public use in other web pages.

The Virtual Earth Interactive SDK

If you navigate to http://dev.live.com/virtualearth/sdk/ you will get a menu on the left side looking like this:

image

and a map looking like this:

image

(depending on where in the world you are)

Try looking at the different tabs and you can see the source code for getting a map like this – and you can select other menu items.

Try clicking the menu item for “Show a specific map”. The map now shows the Space Needle in Seattle, and if you click the Source code tab you will see the code for creating that map:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title></title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<script type="text/javascript" src="http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2"></script>
<script type="text/javascript">
var map = null;
function GetMap()
{
    map = new VEMap('myMap');
    map.LoadMap(new VELatLong(47.6, -122.33), 10, 'h', false);
}
</script>
</head>
<body onload="GetMap();">
<div id='myMap' style="position:relative; width:400px; height:400px;"></div>
</body>
</html>

So – HTML and Javascript again. A few things are important to know. This line:

<script type="text/javascript" src="http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2"></script>

imports the map control library from the Virtual Earth web site and allows you to use all the controls and functions exposed by this library (like a C# reference and using statement).


var map = null;
function GetMap()
{
    map = new VEMap('myMap');
    map.LoadMap(new VELatLong(47.6, -122.33), 10, 'h', false);
}

is the actual code for manipulating the map. 47.6, -122.33 should be the latitude and longitude of the Space Needle, 10 is the zoom level, and 'h' is the map style (h == hybrid). If you click the Reference tab in the SDK window you can find the documentation for LoadMap and other functions.

Turns out that the Space Needle is not exactly at that position – the right position would be 47.620564, -122.349577 – anyway, you get the picture.

The instantiation of the map also tells you where on the page the map control should be placed (‘myMap’), which references a <div> area in the body:

<div id='myMap' style="position:relative; width:400px; height:400px;"></div>

This <div> area specifies the location, flow definition and the size of the control.

The last thing we need to know is how our Javascript function (GetMap) is executed – and that happens as an event handler for the onload event on the body tag.

 

Note that Javascript is executed on the Client and there is no serverside code in this example.

Knowing that we can communicate with NAV Web Services from Javascript (as explained in this post), we should be able to create a site which loads a map and displays pushpins for all our customers – more about this in a second…

Deployment

Knowing that there are a million different ways to deploy web sites, I will not go into detail about how this should be done – I will only explain how it can be done (aka what I did on my laptop).

What I did was to install IIS (Internet Information Server) on my laptop. After this I fired up Visual Studio, selected File –> New –> Web Site and specified that I wanted to create an Empty Web Site called http://localhost/mysite

In the empty web site I right-clicked the project, selected Add New Item, chose HTML Page and called it default.htm:

image

The C# / Visual Basic choice really didn’t matter, as I wasn’t going to write any server-side code.

In this default.htm I replaced the content with the following HTML:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title></title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script type="text/javascript" src="http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2"></script>
<script type="text/javascript">
var map = null;
function GetMap()
{
    map = new VEMap('myMap');
    map.LoadMap(new VELatLong(47.620564, -122.349577), 16, 'h', false);
}
</script>
</head>
<body onload="GetMap();">
<div id='myMap' style="position:relative; width:100%; height:100%;"></div>
</body>
</html>

and when I open the site in my browser I get:

image

Voila, the Space Needle.

So far so good – and if you succeeded in showing this map in a browser from your own web site, we know that all the basics are in place.

What’s needed in NAV

Nothing comes for free – and just because we added the latitude and longitude to the customer table doesn’t mean that we can query on them.

We need a function in NAV, exposed as a Web Service, which we can call to ask for all customers within a rectangle of the Earth. Now sceptics might say that there is no such thing as a rectangle of the Earth – but programming towards the Virtual Earth API, there is. For returning the customers through Web Services I have created an XML port:

image

and the two lines of code behind it:

Customer - Export::OnAfterGetRecord()
ref.GETTABLE("<Customer>");
_Bookmark := FORMAT(ref.RECORDID,0,10);

for making sure that the bookmark is correctly formatted.

The function that we want to expose as a web service looks like this:

GetCustomersWithin(latitude1 : Decimal;latitude2 : Decimal;longitude1 : Decimal;longitude2 : Decimal;VAR result : XMLport CustomerLocation)
customers.SETRANGE(Latitude, latitude1, latitude2);
customers.SETRANGE(Longitude, longitude1, longitude2);
result.SETTABLEVIEW(customers);

So this is the reason why it is good to remember to have a key on the Latitude and Longitude fields in the customer table.
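The two SETRANGE calls simply select the customers whose coordinates fall inside the bounding box. The same selection, sketched client-side in JavaScript to show what the service returns (sample data is made up):

```javascript
// Client-side equivalent of the two SETRANGE calls: keep customers whose
// coordinates fall inside the rectangle [lat1..lat2] x [lon1..lon2].
function customersWithin(customers, lat1, lat2, lon1, lon2) {
    return customers.filter(function (c) {
        return c.latitude >= lat1 && c.latitude <= lat2 &&
               c.longitude >= lon1 && c.longitude <= lon2;
    });
}

var hits = customersWithin(
    [{ no: 'C1', latitude: 47.6, longitude: -122.3 },
     { no: 'C2', latitude: 40.7, longitude: -74.0 }],
    45, 50, -125, -120);
// hits contains only C1
```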

I just added this function to the NavMaps codeunit and made sure that the other function (from part 1) is private – meaning that this function is the only one exposed. You then need to expose the NavMaps codeunit as a web service in the Web Service table:

image

I called it Maps – and that name is used in the code below.

Combining things

In the SDK menu you can find code for how to add shapes (pushpins and other things). You can also find code for how to subscribe to events – and we are going to do this for two of the events from the map control (onendzoom and onendpan), as they change the current viewing rectangle of the control.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title></title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script type="text/javascript" src="http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2"></script>

    var map = null;
    var defaultURL = "http://localhost:7047/DynamicsNAV/WS/CRONUS_International_Ltd/";
    var XMLPortResultNS = 'urn:microsoft-dynamics-nav/xmlports/customerlocation';
    var XMLPortResultNode = 'Customer';
    var resultSet;

    // Event handler for the endzoom event
    function EndZoomHandler(e) {
        UpdatePushPins();
    }

    // Event handler for the endpan event
    function EndPanHandler(e) {
        UpdatePushPins();
    }

    // Initialize the map
    function GetMap() {
        map = new VEMap('myMap');

        // center and zoom can be passed as parameters
        var latitude = parseFloat(queryString("latitude", "0"));
        var longitude = parseFloat(queryString("longitude", "0"));
        var zoom = parseInt(queryString("zoom", "2"));

        // use the normal dashboard
        map.SetDashboardSize(VEDashboardSize.Normal);
        // load the map
        var position = new VELatLong(latitude, longitude);
        map.LoadMap(position, zoom, 'r', false);
        // hook up events
        map.AttachEvent("onendzoom", EndZoomHandler);
        map.AttachEvent("onendpan", EndPanHandler);
        // Place pushpins
        UpdatePushPins();
    }

    // Update all pushpins on the map
    function UpdatePushPins() {
        map.DeleteAllShapes();
        // Get the view rectangle
        var botlft = map.PixelToLatLong(new VEPixel(1, myMap.offsetHeight - 1));
        var toprgt = map.PixelToLatLong(new VEPixel(myMap.offsetWidth - 1, 1));

        // Get customers within the rectangle
        GetCustomersWithin(botlft.Latitude, toprgt.Latitude, botlft.Longitude, toprgt.Longitude);
        if (resultSet != null) {
            i = 0;
            while (i < resultSet.length) {
                var shape = new VEShape(VEShapeType.Pushpin, new VELatLong(resultSet[i].childNodes[2].text, resultSet[i].childNodes[3].text));
                shape.SetTitle(resultSet[i].childNodes[0].text + " " + resultSet[i].childNodes[1].text);
                // The description shows extra fields from the XML port
                // (it can also include a link to open the Customer Page)
                shape.SetDescription("Contact " + resultSet[i].childNodes[5].text + "<br/>" +
                                     "Phone " + resultSet[i].childNodes[6].text + "<br/>" +
                                     "Sales " + resultSet[i].childNodes[7].text + "<br/>" +
                                     "Profit " + resultSet[i].childNodes[8].text);
                map.AddShape(shape);
                i++;
            }
        }
    }
    // Helper function for reading query string parameters passed to the site
    function queryString(parameter, defaultvalue) {
        var loc = location.search.substring(1, location.search.length);
        var param_value = false;
        var params = loc.split("&");
        for (i = 0; i < params.length; i++) {
            param_name = params[i].substring(0, params[i].indexOf('='));
            if (param_name == parameter) {
                param_value = params[i].substring(params[i].indexOf('=') + 1)
            }
        }
        if (param_value) {
            return param_value;
        }
        else {
            return defaultvalue;
        }
    }

    // Get the base URL
    function GetBaseURL() {
        return defaultURL;
    }

    // Get customers within the specified rectangle by connecting to the NAV Web Service
    function GetCustomersWithin(latitude1, latitude2, longitude1, longitude2) {
        resultSet = null;
        try {
            // Instantiate the XMLHTTP object
            xmlhttp = new ActiveXObject("Msxml2.XMLHTTP.6.0");
            xmlhttp.open("POST", GetBaseURL() + "Codeunit/Maps", false, null, null);
            xmlhttp.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
            xmlhttp.setRequestHeader("SOAPAction", "GetCustomersWithin");

            // Set up an event handler for when the ready state changes
            xmlhttp.onreadystatechange = function() {
                // Inline function for handling the response
                if ((xmlhttp.readyState == 4) && (xmlhttp.Status == 200)) {
                    var xmldoc = xmlhttp.ResponseXML;
                    xmldoc.setProperty('SelectionLanguage', 'XPath');
                    xmldoc.setProperty('SelectionNamespaces', 'xmlns:tns="' + XMLPortResultNS + '"');
                    resultSet = xmldoc.selectNodes('//tns:' + XMLPortResultNode);
                }
            }
            // Send the SOAP request for GetCustomersWithin
            // (the envelope follows the standard NAV 2009 codeunit service pattern)
            xmlhttp.Send('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
                         '<soap:Body>' +
                         '<GetCustomersWithin xmlns="urn:microsoft-dynamics-schemas/codeunit/Maps">' +
                         '<latitude1>' + latitude1 + '</latitude1>' +
                         '<latitude2>' + latitude2 + '</latitude2>' +
                         '<longitude1>' + longitude1 + '</longitude1>' +
                         '<longitude2>' + longitude2 + '</longitude2>' +
                         '</GetCustomersWithin>' +
                         '</soap:Body></soap:Envelope>');
        }
        catch (e) {
            alert(e.message);
        }
    }


</head>
<body onload="GetMap();" style="margin:0; width:100%; height:100%; overflow:hidden">
<div id='myMap' style="position:relative; width:100%; height:100%;"></div>
</body>
</html>

I will let the code speak for itself and direct your attention to some of my other Javascript posts and/or the information generally available about Javascript on the Internet.

image

The defaultURL variable at the start should be set to the URL of your web services listener, and if you want more fields shown when you hover over a customer on the map, you will need to add these fields to the CustomerLocation XML port and add them to the SetDescription call in the code.

I have only tried this with the demo data – and of course you could imagine that with a very high number of customers the pushpin placement routine would become too slow. If this is the case, we can either limit the rectangle (if it spans too much – then ignore) or we can check the size of the returned result and only place pushpins for the first xx rows.
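The second option could be as simple as this sketch (maxPins is an arbitrary illustrative limit):

```javascript
// Only place pushpins for the first maxPins rows of the result set.
function capResultSet(resultSet, maxPins) {
    if (resultSet == null) {
        return [];
    }
    return resultSet.length > maxPins ? resultSet.slice(0, maxPins) : resultSet;
}

var capped = capResultSet([1, 2, 3, 4, 5], 3); // keeps the first 3 entries
```

The capped array could then be fed to the pushpin loop in UpdatePushPins unchanged.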

Epilog

When I said nothing comes for free, it wasn’t entirely true – this blog post is free, and the usage of the information in this post is free, including code snippets etc. – so some things are free:-)

I realize that this was a LOT of non-NAV code – but I do think it opens up some very compelling scenarios for integration when we combine the strength of different products using Web Services.

As always, the NAV objects and the default.htm from above are available for download here.

In step 3 I will show how we can add an action to the Customer Card for showing the area map for a given customer – and you probably already know how to do this, if you have followed this post carefully.

 

Enjoy

 

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

More SSD testing

If you haven’t read my post about Running Microsoft Dynamics NAV on SSDs, you should do so first.

After posting the initial results, I was contacted by other SSD vendors who wanted to see whether we could do some additional testing on other hardware. In the interest of the end user, I accepted and once more allocated a slot in the performance lab in Copenhagen.

The new drives to test were:

  • STEC 3½” Zeus IOPS SSD 146GB
  • STEC 2½” MACH8 IOPS SSD 100GB
  • Intel 2½” SSDSA2SH032G1GN 32GB (actually 32GB wasn’t enough for the testing so we took two of those and striped them)

All of these drives look like standard HDDs with a SATA interface. Installation is plug and play, with no driver installation required.

Disclaimer

Remember that the tests we have run here are scenario tests, designed to measure performance deltas in Microsoft Dynamics NAV – to make sure that a given build of NAV doesn’t suddenly get much slower than the previous version and ship with poor performance.

Also, again: I haven’t optimized the SQL Server at all for these tests, so you might not see the same performance gain if you switch your drives to SSDs – or you might see a bigger gain (if you know how to optimize for these things).

My testing ONLY replaces a standard HDD (Seagate Barracuda 500GB, 7200 RPM SATA) with an SSD – and runs the same scenarios.

The scenarios are run for 180 minutes each, and the perf. measurements start after a 10 minute warm-up. The tests I refer to here are all done simulating 50 users against a service tier.

Final build of NAV 2009

Between the prior tests and the new tests we released the final version of NAV, so the tests in this post are based on the RTM version. We also got some new lab equipment – so in order to be fair to the FusionIO tests, we re-ran all of those as well.

In the new list of tests you will see 5 results: HDD, FusionIO, STEC5 (3½”), STEC3 (2½”), Intel

I will use the same tests as in the original post.

Here we go

[image]

As we can see when comparing with the pre-release results, this test is faster on the HDD than the prior test was on the FusionIO SSD.

This of course also means that the performance gain from using SSDs in this test is smaller – but a 21% improvement just from changing the drive to an SSD still isn’t bad at all.

[image]

Again, a 25% performance improvement from changing to SSDs – and the difference between the various types of SSDs is insignificant in comparison.

[image]

Again, a 25-30% performance improvement from changing to SSDs, and not a huge difference between the technologies.

Note that as the numbers get lower, measurement uncertainty starts to play a role in the results.

[image]

This test is around 10 times faster than when we ran it on the pre-release, and the results are now so fast that the uncertainty makes some of them appear sky-high. Analyzing the results actually reveals that there isn’t much of a difference.

[image]

Same picture again – a significant performance improvement from changing to SSDs, and not a huge difference between the technologies.

Wrap-up

The test results were not as clear as last time – primarily because the RTM version solved some of the perf. problems, and because of the new hardware in the perf. lab – but the tests still show a 20-30% performance increase.

I still think SSDs are here to stay, and I do think people can take advantage of the increased performance they will get simply by changing the drives in their SQL Server. I haven’t tested what performance improvements you would get from running the Service Tier on a box with SSDs – but I wouldn’t expect a huge advantage as long as the service has sufficient RAM.

I will not be conducting any more tests. The primary reason is that I no longer have the hardware – meaning I couldn’t do a re-run on the same hardware and compare all the different technologies – so any comparison would be unfair to one or the other.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Microsoft Windows Vista Gadget – My “Stuff”

This is my second gadget. My first gadget was the Search gadget, which you can find here.

I won’t repeat the explanation of what a gadget is, nor will I talk about Javascript – instead I will focus on the things that are new in this post:

  • Returning an XMLPort from Web Services to Javascript
  • Running a Web Service method asynchronously
  • Using the same gadget for invoking three different methods
  • Poor man’s error handling in Javascript

What I want to do is to create a gadget that can show My Customers, My Vendors or My Items.

[image]

[image]

[image]

The flyout from the gadget should show additional information about the record in the gadget, like

[image]

Clicking on the customer number should open the Customer Task Page, and it should be easy to add additional information to this window (the things our users want quick access to).

In the Settings dialog you should be able to control things like

[image]

I will not discuss My Activities in this post, although it would be cool to have a Vista Gadget with My Activities (must do that some other day).

Ready? – well here we go…

The GetMyStuff Codeunit

In NAV we need to expose something through which we can get access to My Customers, My Vendors and My Items – and the easiest thing would of course be to just expose the 3 pages: 9150 (My Customers), 9151 (My Vendors) and 9152 (My Items).

The problem with this approach is that we need to get a bookmark back (for opening Task Pages) from the Web Service – meaning that we would either have to add another field to these pages, or create another Web Service method for returning the bookmark. I really don’t want to modify base objects if I can avoid it, and adding another method would multiply the number of Web Service invocations, which I am not interested in doing either.

So I am going with a Codeunit and 3 XMLPorts – and the Codeunit basically just has these three functions.

GetMyCustomers(VAR result : XMLport "My Customers")
mycustomer.SETRANGE("User ID",USERID);
result.SETTABLEVIEW(mycustomer);

GetMyItems(VAR result : XMLport "My Items")
myitem.SETRANGE("User ID",USERID);
result.SETTABLEVIEW(myitem);

GetMyVendors(VAR result : XMLport "My Vendors")
myvendor.SETRANGE("User ID",USERID);
result.SETTABLEVIEW(myvendor);

Now, returning an XMLPort (which is a NAV construct) from a Web Service sounds weird – and of course we do NOT return the XMLPort object to the Web Service consumer. What we get is the output of the XMLPort – and if you send data into it, the XMLPort is run for input.
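To make this a bit more concrete, the SOAP response for GetMyCustomers looks roughly like the following. The namespaces are the ones used in the gadget code in this post; the exact wrapper element names and the field order inside MyCustomer are my assumption from reading a typical NAV 2009 WSDL, so verify against your own service:

```xml
<Soap:Envelope xmlns:Soap="http://schemas.xmlsoap.org/soap/envelope/">
  <Soap:Body>
    <GetMyCustomers_Result xmlns="urn:microsoft-dynamics-schemas/codeunit/GetMyStuff">
      <result>
        <MyCustomers xmlns="urn:microsoft-dynamics-nav/xmlports/mycustomers">
          <MyCustomer>
            <!-- fields as defined in the XMLPort, e.g. Name, Bookmark, No., Phone -->
          </MyCustomer>
        </MyCustomers>
      </result>
    </GetMyCustomers_Result>
  </Soap:Body>
</Soap:Envelope>
```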

The XMLPort for My Customers looks like:

[image]

and if we take a look at the WSDL for a Codeunit which returns this XMLPort, the schema for the return type looks very much like the above definition

[image]

and the schema for the method references this as both input and output

[image]

In the XMLPort properties, I have set the following properties

[image]

and in the Table element properties, MinOccurs is set to Zero. If MinOccurs is 1 (the default), the XMLPort will return an empty My Customer when the list is empty – and we do not want that.

Codebehind on the XMLPort sets the three variables _Name, _Bookmark and _Phone.

MyCustomer – Export::OnAfterGetRecord()
rec.GET("My Customer"."Customer No.");
ref.GETTABLE(rec);
_Name := rec.Name;
_Phone := rec."Phone No.";
_Bookmark := FORMAT(ref.RECORDID,0,10);

Where rec is a Record variable of type Customer and ref is a RecordRef.

The My Vendors and My Items XMLPorts follow the same pattern, and my gadget code is written so that additional fields in the XMLPort will be displayed in the flyout – so you can add whatever you want to these XMLPorts.

The Gadget

Again I am using Javascript – and instead of listing the entire gadget source code, I will describe the flow.

The main HTML body contains two areas, in which I insert the content via DHTML. These areas are called CAPTION and CONTENT.

CAPTION holds the top part of the gadget, where it says My Customers, My Vendors etc., and CONTENT holds the list of records. The caption is dynamic because it changes based on the selected type, of course.

The first code executed in the script (except for the variable declarations) is the onreadystatechange event handler – and note that the construct

// Microsoft suggests using onreadystatechange instead of onLoad
document.onreadystatechange = function()
{
    if (document.readyState == "complete")
    {
        // Initialize Settings
        System.Gadget.settingsUI = "settings.html";
        System.Gadget.onSettingsClosed = settingsClosed;

        // Initialize flyout
        System.Gadget.Flyout.file = "flyout.html";
        System.Gadget.Flyout.onShow = flyoutShowing;

        // Set the Caption of the Gadget
        setCaption();

        // Initialize timer – to refresh every x seconds
        setTimeout("refresh()", 1000);
    }
}

creates the delegate and assigns it to the event handler in one go.

In the event handler we set up the Settings dialog to use settings.html and the flyout to use flyout.html. setCaption builds an HTML string for the caption and sets it into the CAPTION DHTML area, and lastly we schedule the refresh method to be called in 1 second. This is done so the gadget is added smoothly, without any delays from connecting to Web Services etc.

The refresh function starts by scheduling another, delayed call to refresh (depending on the RefreshInterval) and then calls setContent. setContent is the function which does the actual Web Service method invocation.

// Refresh Gadget
function refresh()
{
    // Initialize timer – to refresh every RefreshInterval seconds
    setTimeout("refresh()", GetRefreshInterval()*1000);

    // Set the Content of the Gadget
    setContent();
}

The three functions GetRefreshInterval(), GetType() and GetBaseURL() are only for getting variables from the settings dialog. Each of them defaults the setting to the default value defined at the top of the Javascript section if it isn’t already defined. The reason for writing the values to the settings file here is that settings.html becomes much simpler this way.
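That default-and-write-back pattern can be sketched like this. The function and parameter names are mine; read(key) and write(key, value) stand in for the System.Gadget.Settings read/writeString calls used elsewhere in the gadget:

```javascript
// Read a setting; if it is missing, fall back to the default value and
// write the default back so settings.html always finds a value.
function getSettingWithDefault(read, write, key, defaultValue) {
    var value = read(key);
    if (value == null || value === '') {
        value = defaultValue;
        write(key, value);
    }
    return value;
}
```

GetRefreshInterval(), GetType() and GetBaseURL() would each be a one-line wrapper around a helper like this with their own key and default.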

The settingsClosed event handler is set up in the onreadystatechange handler above and is called when the Settings dialog (opened by clicking the wrench next to the gadget) is closed. This event handler updates the caption and calls refresh (to force a refresh now – and set up a new timeout).

// Refresh Gadget on Settings Closed
function settingsClosed(event)
{
    // User hits OK on the settings page.
    if (event.closeAction == event.Action.commit)
    {
        // Caption might have changed based on settings
        setCaption();

        // Refresh content, update refresh Interval
        setTimeout("refresh()", 1000);
    }
}

The setContent function starts out by setting up some global variables based on the type shown in the gadget

// Setup variables and invoke Web Service method for getting My records
function setContent()
{
    RTCpage = '';
    Type = GetType();
    if (Type == 'My Vendors')
    {
        // Settings for My Vendors
        RTCpage = '26';
        WSfunction = 'GetMyVendors';
        XMLPortResultNS = 'urn:microsoft-dynamics-nav/xmlports/myvendors';
        XMLPortResultNode = 'MyVendor';
    }
    else if (Type == 'My Items')
    {
        // Settings for My Items
        RTCpage = '30';
        WSfunction = 'GetMyItems';
        XMLPortResultNS = 'urn:microsoft-dynamics-nav/xmlports/myitems';
        XMLPortResultNode = 'MyItem';
    }
    else if (Type == 'My Customers')
    {
        // Settings for My Customers
        RTCpage = '21';
        WSfunction = 'GetMyCustomers';
        XMLPortResultNS = 'urn:microsoft-dynamics-nav/xmlports/mycustomers';
        XMLPortResultNode = 'MyCustomer';
    }
    else
    {
        RTCpage = '';
    }
    // Invoke GetMyStuff Web Service
    try
    {
        xmlhttp = new ActiveXObject("Msxml2.XMLHTTP.4.0");
        xmlhttp.open("POST", GetBaseURL() + "Codeunit/GetMyStuff", false, null, null);
        xmlhttp.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
        xmlhttp.setRequestHeader("SOAPAction", "GetMyStuff");

        // Setup event handler when readystate changes
        xmlhttp.onreadystatechange = function()
        {
            if ((xmlhttp.readyState == 4) && (xmlhttp.Status == 200))
            {
                xmldoc = xmlhttp.ResponseXML;
                xmldoc.setProperty('SelectionLanguage', 'XPath');
                xmldoc.setProperty('SelectionNamespaces', 'xmlns:tns="' + XMLPortResultNS + '"');
                myXML = xmldoc.selectNodes('//tns:' + XMLPortResultNode);
                updateGadget(true);
            }
            else
            {
                updateGadget(false);
            }
        }
        xmlhttp.Send('<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><' + WSfunction + ' xmlns="urn:microsoft-dynamics-schemas/codeunit/GetMyStuff"><result></result></' + WSfunction + '></soap:Body></soap:Envelope>');
    }
    catch (e)
    {
        // Something went wrong – display: "Service not available", indicating that there of course are no bugs in the above code:-)
        updateGadget(false);
    }
}

After setting up the variables, we initialize the xmlhttp object and set up a delegate function for onreadystatechange on it (this gets invoked when the Web Service method is done). After this we invoke Send with a SOAP document conforming to the WSDL of the Web Service.
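The envelope passed to Send can be factored into a small helper. This sketch just rebuilds the string from the code above – buildSoapEnvelope is my name for it; the urn:microsoft-dynamics-schemas/codeunit/&lt;name&gt; namespace pattern and the empty result element come from the actual call:

```javascript
// Build the SOAP request for a NAV 2009 codeunit method that takes a single
// VAR XMLPort parameter called "result" (sent empty on input).
function buildSoapEnvelope(wsFunction, codeunitName) {
    return '<?xml version="1.0" encoding="utf-8"?>' +
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
        '<soap:Body>' +
        '<' + wsFunction + ' xmlns="urn:microsoft-dynamics-schemas/codeunit/' + codeunitName + '">' +
        '<result></result>' +
        '</' + wsFunction + '>' +
        '</soap:Body></soap:Envelope>';
}
```

With a helper like this, the Send call becomes xmlhttp.Send(buildSoapEnvelope(WSfunction, 'GetMyStuff')).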

When the onreadystatechange event handler is executed we read the XML and update the content of the Gadget. If anything goes wrong we call the updateGadget function with false – indicating that it should change the content to an error message.

updateGadget builds an HTML table and inserts it into the CONTENT area of the gadget. For every row we add a call to showFlyout, invoked when the user clicks the row to get additional information.

// Add a row to newHTML
newHTML += '<tr><td height="18" valign="middle" background="Images/gadgetmiddle.png">';
newHTML += '<a href="javascript:showFlyout(' + o + ');">';
newHTML += ' ' + myXML[o].childNodes[1].text;
newHTML += '</a>';
newHTML += '</td></tr>';
o++;

The showFlyout function is pretty simple

// Show flyout with additional information
function showFlyout(no)
{
    System.Gadget.Flyout.show = false;
    flyoutNo = no;
    System.Gadget.Flyout.show = true;
}

and after this method has been called, the next thing that happens is that the flyoutShowing event handler is invoked – and in this event handler we can calculate the content of the flyout and set it.

// Flyout Showing event handler
// Calculate content of flyout
function flyoutShowing()
{
    flyoutHTML = '<table width="100%" border="0" hspace="0" vspace="0" cellpadding="0" cellspacing="0">';
    flyoutHTML += '<tr><td height="31" align="left" valign="middle" background="Images/topband.png" nowrap><p><strong><font color="#FFFFFF" size="3" face="Segoe UI">&nbsp;' + myXML[flyoutNo].childNodes[1].text + '</font></strong></p></td></tr>';
    flyoutHTML += '<tr><td valign="top"><table cellspacing="5">';
    for (i = 3; i < myXML[flyoutNo].childNodes.length; i++)
    {
        flyoutHTML += '<tr><td>' + myXML[flyoutNo].childNodes[i].nodeName + '</td><td>';
        if (i == 3)
        {
            flyoutHTML += '<a href="dynamicsnav:////runpage?page=' + RTCpage + '&bookmark=' + myXML[flyoutNo].childNodes[2].text + '&mode=view">' + myXML[flyoutNo].childNodes[i].text + '</a>';
        }
        else
        {
            flyoutHTML += myXML[flyoutNo].childNodes[i].text;
        }
        flyoutHTML += '</td></tr>';
    }
    flyoutHTML += '</table></td></tr></table>';

    obj = System.Gadget.Flyout.document.getElementById("CONTENT");
    obj.innerHTML = flyoutHTML;
}

This is really mostly string manipulation in order to create a string that looks correct. I could probably have done the same with XSLT – but this seemed pretty easy. As you can see, the function enumerates the childNodes of myXML and adds everything to the flyout. This means that if you add fields to the XMLPort, they will be included in the flyout as well.
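The enumeration can be illustrated in isolation like this – a sketch using plain { nodeName, text } objects in place of the DOM childNodes. The layout (bookmark at index 2, displayed fields from index 3, first field linking to the Task Page) follows the code above; the function name and sample data are mine:

```javascript
// Build the flyout's field rows from an array of { nodeName, text } objects.
function buildFlyoutRows(childNodes, rtcPage) {
    var rows = '';
    for (var i = 3; i < childNodes.length; i++) {
        var value = childNodes[i].text;
        if (i === 3) {
            // First displayed field links back to the Task Page in the RoleTailored Client
            value = '<a href="dynamicsnav:////runpage?page=' + rtcPage +
                '&bookmark=' + childNodes[2].text + '&mode=view">' + value + '</a>';
        }
        rows += '<tr><td>' + childNodes[i].nodeName + '</td><td>' + value + '</td></tr>';
    }
    return rows;
}
```

Any extra field you add to the XMLPort simply becomes another element in childNodes, and therefore another row in the flyout.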

The flyout.html only contains an empty HTML document with a CONTENT area, which is set by the code above.

Settings.html

The settings.html is really simple, with a method for reading the settings and setting them onto the form

// Initialize settings Form
document.onreadystatechange = function()
{
    if (document.readyState == "complete")
    {
        // Read settings and set in form
        URL.value = System.Gadget.Settings.read("URL");
        RefreshInterval.value = System.Gadget.Settings.read("RefreshInterval");
        Type.value = System.Gadget.Settings.read("Type");
    }
}

A method for writing the settings back into the settings file.

// Event handler for onSettingsClosing
System.Gadget.onSettingsClosing = function(event)
{
    if (event.closeAction == event.Action.commit)
    {
        // Write new settings into the settings file
        System.Gadget.Settings.writeString("URL", URL.value);
        System.Gadget.Settings.writeString("RefreshInterval", RefreshInterval.value);
        System.Gadget.Settings.writeString("Type", Type.value);

        // State that it is OK to close the settings form
        event.cancel = false;
    }
}

and the form itself in HTML

<table width="100%" height="100%">
<tr><td>
Web Service Base URL:<br>
<input type="textbox" id="URL" maxlength="250">
</td></tr>
<tr><td>
My type:<br>
<select name="Type" size="1">
<option value="My Customers">My Customers</option>
<option value="My Vendors">My Vendors</option>
<option value="My Items">My Items</option>
</select>
</td></tr>
<tr><td>
Refresh Interval:<br>
<select name="RefreshInterval" size="1">
<option value="10">10 seconds</option>
<option value="30">30 seconds</option>
<option value="60">1 minute</option>
<option value="300">5 minutes</option>
</select>
</td></tr>
</table>

I will let the code speak for itself.

If you want to see the entire thing, feel free to download it from http://www.freddy.dk/GetMyStuff.zip. The Zip file contains both the Gadget (open it and install – or rename it to .zip to look inside) and a .fob file with the NAV 2009 objects. You will need to expose the GetMyStuff Codeunit as a web service with the same name.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV