How does this perf testing thing actually work?

This post is #3 in a series of posts about performance testing.

Post #1 was all about setting up an instance of NAV on Azure and getting perf tests up and running.

Post #2 was all about scaling the number of users and running multi-tenancy.

But what actually happens when running perf tests?

Perf testing with multiple users

This post is #2 in a series of posts about performance testing; please make sure you have read post #1 first.

In the previous post, you created a NAV virtual machine, installed Visual Studio, installed Git, cloned the NAV 2017-Sample repository from the NAVPERF organization, configured the settings and ran the test scenarios.

Word Management

As with the release of Microsoft Dynamics NAV 2009, I was also deeply involved in the TAP (Technology Adoption Program) for Microsoft Dynamics NAV 2009 SP1. My primary role in the TAP is to assist ISVs and partners in getting a customer live on the new version before we ship the product.

During this project we file a lot of bugs, and the development team in Copenhagen is very responsive, so we actually get a lot of them fixed – but not all. Occasionally a bug is closed as "By Design", "Not Repro" or "Priority too low".

As annoying as this might seem, I would be even more annoyed if the development team took every single bug, fixed it, ran new test passes and pushed the release date into the unknown. Some of these bugs then become challenges for me and the ISV/partner to solve, and along the way it happens that I write some code and hand it off to my contact.

Whenever I do that, two things are very clear:

  1. The code is given as is, with no warranty and no guarantee
  2. The code will be available on my blog as well, for other ISVs and partners to see

and of course I send the code to the development team in Copenhagen, so that they can consider the fix for the next release.

Max. 64 fields when merging with Word

One of the bugs we ran into this time around was that when doing a merge with Microsoft Word in a three-tier environment, Word would only accept 64 merge fields. The base application's WordManagement (codeunit 5054) only uses 48 fields, but the ISV I was working with had actually extended that to 100+ fields.

The bug is in Microsoft Word: when merging with a data source file named .HTM, it only accepts 64 fields – very annoying.

We also found that by changing the file extension to .HTML, Word could actually see all the fields and the merge seemed to work great (with one small but very annoying catch) – the following dialog would pop up every time you opened Word:

[Screenshot: the dialog that popped up when opening Word]

While trying to figure out how to get rid of the dialog, I found the right parameters to send to Word.OpenDataSource so that the dialog would disappear – but then we were right back at the 64-field limitation.

The reason for the 64-field limitation is that Word loads the HTML as a Word document and uses that document as the merge source, and a table in a Word document cannot have more than 64 columns (that's at least what they told me).

I even talked to PMs on the Word team and got it confirmed that this behavior was present in Office 11 and Office 12 and would not be fixed in Office 14 – so no rescue in the near future.

Looking at WordManagement

Knowing that the behavior was connected to the merge format, I decided to try to change that – why not go with a good old-fashioned .csv file instead? In my quest to learn AL code and application development, this seemed like a good little exercise.
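
The merge source then becomes a plain comma-separated file: a header line with the merge field names, followed by one line per record, every value wrapped in quotes and any quote inside a value doubled. A hypothetical example (field names and values invented for illustration, not taken from codeunit 5054):

"Name","Address","City"
"Cronus ""International"" Ltd.","5 The Ring","Birmingham"
"John ""JJ"" Jones","Main Street 14","London"

Word reads a .csv data source as plain delimited text instead of loading it as a document, which is presumably why this route sidesteps the 64-column table limit described above.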

So I started to look at WordManagement and immediately found a couple of things I didn’t like

MergeFileName := RBAutoMgt.ClientTempFileName(Text029,'.HTM');
IF ISCLEAR(wrdMergefile) THEN
  CREATE(wrdMergefile,FALSE,TRUE);
// Create the header of the merge file
CreateHeader(wrdMergefile,FALSE,MergeFileName);
<find the first record>
REPEAT
  // Add values to the merge file – one AddField for each field for each record
  wrdMergefile.AddField(<field value>);
  // Terminate the line
  wrdMergefile.WriteLine;
UNTIL <No more records>
// Close the file
wrdMergefile.CloseFile;

Now, wrdMergefile is a COM component of type 'Navision Attain ApplicationHandler'.MergeHandler and, as you can see, it is created client-side. That means we make a roundtrip to the client for every field in every record (plus one extra roundtrip per record to terminate the line). We might not have a lot of records or a lot of fields, but I think we can do better (said by a guy who used to think about clock cycles when writing assembly instructions for Z80 processors back in the early 80's – WOW, I am getting old :-)).

One fix for the performance would be to create the file server-side and send it to the client in one go – but that wouldn't solve our original 64-field limitation. I could also create a new COM component compatible with MergeHandler that would write a .csv instead – but that wouldn't solve my second issue of wanting to learn some AL code.

Creating a .csv in AL code

I decided to go with a model where I create a server-side temporary file and then, for each record, build a line in a BigText and write it to the file. After the merge file is complete, it needs to be downloaded to the client and deleted from the service tier.

The above code would change into something like

MergeFileName := CreateMergeFile(wrdMergefile);
wrdMergefile.CREATEOUTSTREAM(OutStream);
CreateHeader(OutStream,FALSE); // Header without data
<find the first record>
REPEAT
  CLEAR(mrgLine);
  // Add values to the merge file – one AddField for each field for each record
  AddField(mrgCount, mrgLine, <field value>);
  // Terminate the line and write it to the stream in one go
  mrgLine.ADDTEXT(CRLF);
  mrgLine.WRITE(OutStream);
  CLEAR(mrgLine);
UNTIL <No more records>
// Close the file and download it to the client
wrdMergefile.CLOSE;
MergeFileName := WordManagement.DownloadAndDeleteTempFile(MergeFileName);

As you can see – no COM components, everything server-side. A couple of helper functions are used here, but it is no rocket science and not too different from the code that was there before.

CreateMergeFile creates a server-side temporary file with a .csv extension:

CreateMergeFile(VAR wrdMergefile : File) MergeFileName : Text[260]
// Use CREATETEMPFILE to get a unique server-side file name, then
// recreate the file under that name with a .csv extension
wrdMergefile.CREATETEMPFILE;
MergeFileName := wrdMergefile.NAME + '.csv';
wrdMergefile.CLOSE;
wrdMergefile.TEXTMODE := TRUE;
wrdMergefile.WRITEMODE := TRUE;
wrdMergefile.CREATE(MergeFileName);

AddField adds a field value to the BigText holding the current line. It uses AddString, which in turn uses DupQuotes to ensure that any quote character (") inside the merge field is doubled.

AddField(VAR count : Integer;VAR mrgLine : BigText;value : Text[1024])
IF mrgLine.LENGTH = 0 THEN BEGIN
  count := 1;
END ELSE BEGIN
  count := count + 1;
  mrgLine.ADDTEXT(',');
END;
mrgLine.ADDTEXT('"');
AddString(mrgLine, value);
mrgLine.ADDTEXT('"');

AddString(VAR mrgLine : BigText;str : Text[1024])
// DupQuotes takes a Text[512], so values longer than 512 characters are handled in two chunks
IF STRLEN(str) > 512 THEN BEGIN
  mrgLine.ADDTEXT(DupQuotes(COPYSTR(str,1,512)));
  str := DELSTR(str,1,512);
END;
mrgLine.ADDTEXT(DupQuotes(str));

DupQuotes(str : Text[512]) result : Text[1024]
result := '';
REPEAT
  i := STRPOS(str, '"');
  IF i <> 0 THEN BEGIN
    result := result + COPYSTR(str,1,i) + '"';
    str := DELSTR(str,1,i);
  END;
UNTIL i = 0;
result := result + str;
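
To see what these helpers produce, consider a hypothetical call (the value is made up for illustration and not taken from the application):

AddField(mrgCount, mrgLine, 'Cronus "International" Ltd.');

This appends the following to the merge line (preceded by a comma if it is not the first field on the line):

"Cronus ""International"" Ltd."

The embedded quotes thus survive the trip through the .csv file.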

Finally, a small function returns CRLF (the line terminator for a merge line):

CRLF() result : Text[2]
result[1] := 13;
result[2] := 10;

While doing this I ran into some strange errors when mixing writes of BigTexts and normal Text variables to the same stream – that is the reason for building everything into one BigText and writing it once per line.

And last, but not least, a function to download the file to the client tier and delete it from the service tier:

DownloadAndDeleteTempFile(ServerFileName : Text[1024]) : Text[1024]
// In the Classic client there is no service tier, so the file is already on the client
IF NOT ISSERVICETIER THEN
  EXIT(ServerFileName);

FileName := RBAutoMgt.DownloadTempFile(ServerFileName);
FILE.ERASE(ServerFileName);
EXIT(FileName);

It doesn’t take much more than that… (besides, of course, integrating this new method into the various functions in WordManagement). The fix doesn’t require anything other than replacing codeunit 5054, and the new WordManagement can be downloaded here.

The question now is whether there are localization issues with this. I tried changing all kinds of settings on my machine and didn’t run into any problems – but if anybody out there does run into problems with this method, please let me know.

What about backwards compatibility

So what if you install this codeunit in a system where some of these merge files have already been created – and are stored as HTML in BLOB fields?

Well – for that case, I created a function that was able to convert them – called

ConvertContentFromHTML(VAR MergeContent : BigText) : Boolean

It isn’t pretty – but it seems to work.

Feedback is welcome

I realize that by posting this, I am entering a domain where I am the newbie and a lot of other people are experts. I do welcome feedback on the way I code, things I can do better or things I could have done differently.

 

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

More SSD testing

If you haven’t read my post about Running Microsoft Dynamics NAV on SSD’s – you should do so first.

After having posted the initial results, I was contacted by other vendors of SSD’s wanting to see whether we could do some additional testing on other hardware. In the interest of the end user, I accepted and once more allocated a slot in the performance lab in Copenhagen.

The new drives to test were:

  • STEC 3½” Zeus IOPS SSD 146GB
  • STEC 2½” MACH8 IOPS SSD 100GB
  • Intel 2½” SSDSA2SH032G1GN 32GB (actually 32GB wasn’t enough for the testing so we took two of those and striped them)

All of these drives look like standard HDD's with a SATA interface. Installation is plug and play, with no driver installation needed.

Disclaimer

Remember that the tests we run here are scenario tests, designed to measure performance deltas on Microsoft Dynamics NAV – to make sure that a certain build of NAV doesn't suddenly get way slower than the previous one and end up shipping with poor performance.

Also, again: I haven't optimized the SQL Server at all for these tests, so you might not see the same performance gain if you switch your drives to SSD's – or you might see a bigger gain (if you know how to optimize for these things).

My testing ONLY replaces a standard HDD (Seagate Barracuda 500GB, 7200 RPM SATA) with an SSD – and runs the same scenarios.

The scenarios are run for 180 minutes each, and the perf testing starts after a 10-minute warm-up period. The tests I will be referring to here are all done simulating 50 users against a service tier.

Final build of NAV 2009

Between the prior tests and the new tests we released the final version of NAV 2009, so the tests in this post are based on the RTM version. We also got some new lab equipment, so in order to be fair to the FusionIO results, we reran all of those tests as well.

In the new list of tests you will see 5 results: HDD, FusionIO, STEC5 (3½”), STEC3 (2½”), Intel

I will use the same tests as in the original post.

Here we go

[Chart: test results – HDD, FusionIO, STEC (3½”), STEC (2½”), Intel]

As we can see when comparing this test to the one on the pre-release build, this test is now faster on the HDD than the prior test was on the FusionIO SSD.

This of course also means that the performance gain from using SSD's in this test is smaller – but still, a 21% performance improvement just from changing the drive to an SSD isn't bad at all.

[Chart: test results]

Again a 25% performance improvement from changing to SSD's, and the difference between the various types of SSD's is insignificant in comparison.

[Chart: test results]

Again a 25-30% performance improvement from changing to SSD's, and not a huge difference between the technologies.

Note that as the numbers get lower, measurement uncertainty starts to play a role in the results.

[Chart: test results]

This test is around 10 times faster than when we ran it on the pre-release build, and the results are now so fast that measurement uncertainty makes some of them look sky-high. Analyzing the results actually reveals that there isn't much of a difference.

[Chart: test results]

Same picture again – a significant performance improvement from changing to SSD's, and not a huge difference between the technologies.

Wrap-up

The test results were not as clear as last time – primarily because the RTM version solved some of the perf problems, and because of the new hardware in the perf lab – but the tests still show a 20-30% performance increase.

I still think SSD’s are here to stay and I do think that people can take advantage of the increased performance they will get simply by changing the drives in their SQL Server. I haven’t tested what performance enhancements you would get from running the Service Tier on a box with SSD’s – but I wouldn’t expect a huge advantage if the service has sufficient RAM.

I will not be conducting any more tests – the primary reason is that I no longer have the hardware, meaning that I couldn't do a re-run on the same hardware and compare all the different technologies, so any new comparison would be unfair to one or the other.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV

Running Microsoft Dynamics NAV on SSD’s (Solid State Drives)

Solid state drives are here. Laptops are sold with SSD's (a bit expensive), and the server market is also seeing SSD's arrive, promising incredible performance from your storage.

But what does this mean for NAV?

I contacted FusionIO (www.fusionio.com), one of the major players in the high-end SSD market, and borrowed a drive for my server. The purpose was to test the performance of the drive in our NAV performance lab in Copenhagen. I also wanted to test whether the drive was reliable and easy to use and install.

The installation was a piece of cake: open the server, insert the card, close the server, install the driver – done!

Regarding reliability (after all I did get a beta version of the drivers) – I haven’t experienced one single problem with the server since installing the drive – so I guess that one gets a checkmark as well.

Executive summary

The solid state drive is dramatically faster in a number of the cold start scenarios – around 20-25% faster than the same test run on hard drives.

In other tests we see no big difference, which can be either because the SQL Server makes extensive use of caching or because the test scenario is heavier on the CPU on the service tier.

In a few tests there is a small performance drop (typically on the order of less than 10 ms for the test) – I expect this to be measurement inaccuracy.

Some reports that are heavy on the database will also see a dramatic performance gain – again around 20-25%.

But the real difference shows when we run 100-user tests – the picture is very clear: the performance gain in a lot of the scenarios is 40%+.

Buying a solid state drive for a company running 1-10 users seems like overkill; it won't hurt performance, but the more users you have on the system, the more performance gain you will get out of a drive like this.

Of course you will see the same picture if you have a smaller number of users but a huge database, or if you for other reasons have a large number of transactions.

Remember though that these solid state drives for servers are fairly new technology and priced at around $30/GB (slightly more than a hard drive :-)) – prices always vary and will probably be adjusted as we go along.

Initial testing

Before going to Copenhagen, I did a small test on the performance database (which contains a LOT of sales lines). I ran a SQL statement which calculated the SUM of all sales lines in the system.

On the hard drives this took approx. 45 seconds the first time (the second time the cache came into effect and the time was 1 second).

On the solid state drive – it took approx. 2 seconds the first time (and of course 1 second the second time).

But this of course doesn’t tell us anything about NAV performance on these drives…

The server runs Windows Server 2003 64-bit with 64-bit SQL Server – it has 8GB of RAM, 3 × 500GB hard drives (SATA, 7500 rpm – one for the system, one for data and one for the SQL Server log) and one 80GB FusionIO drive (which in these tests holds both data and log for the databases).

The specs for the FusionIO drive are (from FusionIO’s marketing material):
– 700 MB/s read speed
– 600 MB/s write speed
– 87,500 IOPS (8K packets) transaction rate
– Sustain 3.2 GBps bandwidth with only four ioDrives
– 100,000 IOPS (4K packets) with a single ioDrive
– PCI-Express x4 Interface

BTW – these specifications are some of the fastest I have found on the market.

Intel has launched a drive that does 250MB/s reads and 170MB/s writes with 35,000 IOPS, and the small 2.5” laptop SSD's from Imation spec at around 125MB/s writes and 80MB/s reads with 12,500 IOPS.

These drives will probably increase performance on demo laptops a lot – but are not suited for servers.

Texas Memory Systems has launched a series of SSD's that matches this performance and, like FusionIO, they are primarily focused on the server market. In fact, if you look at the document about SQL Server performance on their drives (http://www.texmemsys.com/files/f000174.pdf) you will find a Danish AX customer (Pilgrim A/S) who is live on this technology and states:

“Don’t be scared. The technology has proven itself superior in many circumstances. You just have to know when to apply it. For applications with smaller databases but heavy load, it’s a life saver”.

The purpose of this blog post is not to point at any one particular provider of SSD's – but I do want to mention that if you go for this, be aware that performance specs on these things vary a lot, and from what I have seen, performance costs money.

Details

Note that I did absolutely nothing to improve the SQL performance on this machine – that is why we run the tests on this server both on hard drives and on solid state. The first rerun is the test run on the built-in hard drives, and the second rerun is on the solid state drive.

The client and service tier computers are kept the same, and only the location of the attached database is altered, in order to isolate the difference in performance when switching to solid state.

Note also that these tests are NAV 2009 tests – I do think the picture for NAV Classic is similar when running multiple users though, since the multi-user tests don't include UI (that would be the client tests) and really just measure the application and stack performance.

Details – Reporting scenarios

Of the report tests I will show two results – a performance test running Adjust Cost - Item Entries and a Customer Statement. The first one hits the SQL Server a lot and shows very good performance on the second rerun (the SSD), while the Customer Statement is heavier on the service tier than on the SQL Server.

[Chart: Adjust Cost - Item Entries batch job, HDD vs. SSD]

This report is a batch job adjusting the cost on item entries – performance gain around 25%.

 

[Chart: Customer Statement report, HDD vs. SSD]

Customer Statement report – the performance database doesn't contain a lot of data relevant to the customer statement, so with the given data the report isn't hard on SQL.

Details – Client scenarios

Of the client tests, the major performance advantage comes when doing a cold start (this is where the service tier needs to warm up – and that of course hits the SQL Server a lot). This shows us that when running single-user scenarios in a pre-heated (warm start) environment, we don't hit the SQL Server much (except in reports or other special scenarios) – we probably knew this already.

[Chart: starting the client and opening an existing sales order, cold start]

This scenario starts up the client and opens an existing sales order – cold start – performance gain around 15%.

The same scenario in a warm start shows no performance gain at all – probably because everything is cached.

[Chart: cold start scenario]

Again a cold start scenario – 40% faster – while the same scenario in a warm start is only a fraction faster.

Details – multiple user tests

Now this is the area where the solid state drive should be faster – this is where the SQL Server gets hit more and where the SQL Server cache cannot contain everything – and I think the perf tests show that this is the case.

I will let the numbers speak for themselves on the important performance scenarios:

  • [Chart] 40% faster
  • [Chart] 40% faster
  • [Chart] 40% faster
  • [Chart] 80% faster
  • [Chart] 50% faster
  • [Chart] 50% faster

I think all of these show a dramatic performance gain – 40% or more when running with the solid state drive – and I think this shows that the technology can do a lot for some NAV customers.

I also think that AX will see similar or better results running on solid state, especially with a large number of users (which has been confirmed by the report from Texas Memory Systems).

I do think SSD technology has arrived, and when the pricing gets right they will become very popular on the market. I think we will see them take over the laptop market – but I also think these tests show that some providers are ready for the server market as well. The major obstacle right now is that people somehow trust their data more on hard drives than on an SSD – but I think that will change as we get to know the technology better.

A special thanks to the performance team in Copenhagen for helping out, and a special thanks to FusionIO for lending me an ioDrive on which I could perform the tests.

Enjoy

Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV