Thursday, October 13, 2011

SQL Server Most Utilized Tables

Ever wondered which tables in your databases are being used most heavily?  Below is a script for SQL Server 2005/2008 that will tell you exactly that - which tables are utilized most, and what percentage of that use is reads versus writes.  Heavily utilized tables are prime candidates for SQL Server's Table Partitioning or for moving to another filegroup (not necessarily recommended with EnterpriseOne).  Besides, it's just gosh-darn fun knowing what's going on inside your databases.

Big Caveat - This script makes use of the sys.dm_db_index_usage_stats SQL Dynamic Management View.  The values returned by this DMV do not persist beyond server restarts, which means the information is only valid since the last SQL Server restart or (less likely) database attach.  So, if you just restarted your database server you are not going to get good numbers.  Also, this information is kept in cache and is subject to being flushed under memory pressure - not a likely scenario, but possible.
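
Before trusting the numbers, check how long the statistics have been accumulating.  Since tempdb is rebuilt at every startup, its create date tells you when the instance last restarted (works on both 2005 and 2008):

--SQL Script begin
SELECT create_date AS LastSQLRestart FROM sys.databases WHERE name = 'tempdb'
--SQL Script end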

The script will gather index information from which one can infer table access information.  It uses the sp_MSForEachDB stored procedure to run through all databases on the instance, places the information in a temporary table, sums the values for index reads and writes, does a bunch of fancy math, rolls it up to the table level and returns the results to show reads, writes, percent of each and the type of index read.


--SQL Script begin
IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
DROP TABLE #Temp
GO

CREATE TABLE #Temp
(TableName NVARCHAR(255), UserSeeks DEC, UserScans DEC, UserUpdates DEC)

--Gather raw index usage for every user database (DB_ID > 4 skips the system databases)
INSERT INTO #Temp
EXEC sp_MSForEachDB 'USE [?]; IF DB_ID(''?'') > 4
BEGIN
SELECT DB_NAME() + ''.'' + object_name(b.object_id), a.user_seeks, a.user_scans, a.user_updates
FROM sys.dm_db_index_usage_stats a
RIGHT OUTER JOIN [?].sys.indexes b ON a.object_id = b.object_id AND a.database_id = DB_ID()
WHERE b.object_id > 100 AND a.user_seeks + a.user_scans + a.user_updates > 0
END'

--Roll the index numbers up to the table level and compute the read/write percentages
SELECT TableName AS [Table Name],
SUM(UserSeeks + UserScans + UserUpdates) AS [Total Accesses],
SUM(UserUpdates) AS [Total Writes],
CONVERT(DEC(25,2), SUM(UserUpdates) / SUM(UserSeeks + UserScans + UserUpdates) * 100) AS [% Accesses are Writes],
SUM(UserSeeks + UserScans) AS [Total Reads],
CONVERT(DEC(25,2), SUM(UserSeeks + UserScans) / SUM(UserSeeks + UserScans + UserUpdates) * 100) AS [% Accesses are Reads],
SUM(UserSeeks) AS [Read Seeks],
CONVERT(DEC(25,2), SUM(UserSeeks) / NULLIF(SUM(UserSeeks + UserScans), 0) * 100) AS [% Reads are Index Seeks],
SUM(UserScans) AS [Read Scans],
CONVERT(DEC(25,2), SUM(UserScans) / NULLIF(SUM(UserSeeks + UserScans), 0) * 100) AS [% Reads are Index Scans]
FROM #Temp
GROUP BY TableName
--Bracket the alias here; a single-quoted name in ORDER BY is treated as a string constant and errors out
ORDER BY [Total Accesses] DESC

DROP TABLE #Temp
--SQL Script end


The returned results will look like this:

Table Name             Total Accesses  Total Writes  % Accesses are Writes  Total Reads  % Accesses are Reads  Read Seeks  % Reads are Index Seeks  Read Scans  % Reads are Index Scans
JDE_PRODUCTION.F06116  1196887461      15886068      1.33                   1181001393   98.67                 1181001096  100.00                   297         0.00
JDE_PRODUCTION.F06189  934633544       995748        0.11                   933637796    99.89                 933637770   100.00                   26          0.00
JDE_PRODUCTION.F4801   902692704       465333312     51.55                  437359392    48.45                 418433760   95.67                    18925632    4.33
JDE_PRODUCTION.F0911   739173150       49489125      6.70                   689684025    93.30                 689283925   99.94                    400100      0.06


From this we can determine which tables are being utilized heavily and in what manner.  The last four columns deal with methods of index read access and are a bit beyond the scope of this article, but if you really want to discuss them, just ask in the comments and I'll be glad to expand.



Wednesday, September 21, 2011

Auto Open PDFs in EnterpriseOne Web Client

There has always been an issue opening PDF files from the Work with Submitted Jobs list in the EnterpriseOne web client - the end user is prompted to decide what they wish to do with the file when chances are very good that they want to open it.  The problem is related to how the PDF content is delivered to the browser by the E1 code and how the browser handles that content.

Full disclosure: I completely stole this issue from a JDEList thread and the solution from user ice_cube210 (Tonio Thomas).


Problem Presentation

The problem manifests as a prompt in the browser every time a user opens a PDF from Work with Submitted Jobs (WSJ) asking what they want to do with the file:

Internet Explorer 8
[Screenshot: IE8 prompt asking whether to open or save the PDF]

Firefox 6
[Screenshot: Firefox 6 prompt asking what to do with the PDF file]

This gets old really fast.  Users generally want to open the file.


Background

The root of the problem, as mentioned above, is how a PDF is delivered to the browser.  PDFs can be sent to a web browser either as a file/attachment or inline, as defined by the "content-disposition" HTTP header.  When a web server sends a PDF to a browser, the content-disposition header is set to either "attachment" or "inline", or possibly not set at all.  The value of the content-disposition header tells the browser how to handle the content as defined in rfc2183-in-http and rfc2183.  If the header is content-disposition: attachment; the browser will enter the "Save as..." dialog.  Use of the content-disposition: inline; header or the absence of a content-disposition HTTP header will cause the browser to use default processing to handle the content.  From that point it is up to the browser to determine how to process the PDF.  For Internet Explorer 8, default processing means opening the PDF in the browser's embedded PDF viewer; for Firefox 6, a separate PDF viewer (although this behavior can be changed).


EnterpriseOne Behavior

By default, when a user selects "View PDF" in WSJ, EnterpriseOne sends the PDF as a file, using the content-disposition: attachment; header.  This causes the browser to enter the "Save as..." dialog as stipulated by rfc2183-in-http.  The end user must then choose to either open or save the file.  Choosing Firefox's "Do this automatically for files like this from now on" seems to be forever broken, so it is of no help.

For the really geeky, here's what the HTTP headers look like for the default E1 behavior:

HTTP/1.1 200 OK
Date: Wed, 21 Sep 2011 18:19:00 GMT
Server: IBM_HTTP_Server
Content-Disposition: attachment; filename =R0006P_XJDE0001_428652_PDF.pdf
Content-Length: 762280
Keep-Alive: timeout=10, max=99
Connection: Keep-Alive
Content-Type: application/pdf
Content-Language: en-US


Note the content-disposition header, with filename parameter.


Workaround/Solution

From Tools Release 8.98 on there has been an option to present PDFs in a manner that will automatically open them in the user's browser.  The setting is the "UBEContentDisposition" parameter in the [OWWEB] section of the JAS.INI and can be easily changed in Server Manager in the Web Runtime configuration for the JAS server as described here.
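
For reference, the resulting JAS.INI stanza looks like this (Server Manager writes it for you; shown here only so you know what to look for):

[OWWEB]
UBEContentDisposition=false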

At first I thought Oracle had used the content-disposition: inline; header, but examination of the HTTP headers shows that once the setting is changed, the content-disposition header is simply removed.

HTTP/1.1 200 OK
Date: Wed, 21 Sep 2011 18:20:44 GMT
Server: IBM_HTTP_Server
Content-Length: 762280
Keep-Alive: timeout=10, max=88
Connection: Keep-Alive
Content-Type: application/pdf
Content-Language: en-US


I suppose that will work.  The RFC allows for the absence of a content-disposition header, and that absence causes the browser to use its default method of presentation - giving us the desired behavior of automatically opening PDFs.  I'd much rather see the content-disposition: inline; header and have Oracle also use the disposition parameter 'filename' to give us a real filename to use when saving the file from Acrobat Reader (see below).
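
For illustration, such a response would look something like this - note that this is hypothetical; it is not what E1 currently sends:

HTTP/1.1 200 OK
Content-Disposition: inline; filename=R0006P_XJDE0001_428652_PDF.pdf
Content-Length: 762280
Content-Type: application/pdf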


Issues with Workaround

An issue with the fix may preclude some customers from using this workaround - when UBEContentDisposition is set to True (the default) and the PDF is sent as an attachment, the UBE name (R0006P_XJDE0001_429031_PDF.pdf) is sent to the browser/Acrobat viewer, allowing one to save the file with the original EnterpriseOne UBE name:

Original UBE Name
[Screenshot: Acrobat save dialog showing the original UBE file name]

When using the workaround and setting UBEContentDisposition to false, a generic PDF name (com.jdedwards.jas.pdf) is passed in:

Generic Name
[Screenshot: Acrobat save dialog showing the generic com.jdedwards.jas.pdf name]

This causes a problem when one wishes to save the PDF using the original UBE name for reference.  You can always name the PDF whatever you like, but it is darn convenient to have the report name passed in.  Whether the naming issue is important enough to keep you from changing how PDFs are handled in the browser is an internal organizational decision.  We should be able to pass in real UBE names using the content-disposition: inline; header with the 'filename' parameter, but the smart folks at Oracle probably know something I don't.  I've suggested it and we'll see.

This is a very handy setting to change.  Anything that makes end users' work easier and more efficient is worth doing.




Wednesday, September 7, 2011

EnterpriseOne Temporary Print Files on Enterprise Server

Your EnterpriseOne system may be creating hundreds of thousands of files on your JD Edwards Enterprise Server in a location that is not readily apparent and not cleaning them up...ever.

During a UBE run that involves a version with Print Immediate turned on at the version level (introduced in TR 8.96), a 'temporary' print definition file (either PostScript or PCL, depending on how you defined the associated printer) is created in the location specified by the netTemporaryDir= directive in the [JDENET] stanza of the Enterprise Server jde.ini.
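
The directive looks like this (the path shown is the location given at the end of this post; yours may differ):

[JDENET]
netTemporaryDir=C:\JDE_HOME\targets\Enterprise_Server_enterpriseservername\temp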

The purpose of the file is to allow the Print Immediate printing of the generated PDF using the printer defined in P98616|W98616O - Work With Default Printers.  The problem is that there is no process for cleaning up the file after it has been used.  Deleting the job from the submitted jobs list removes the job record and the PDF but leaves the print definition file.  The R9861101X/R9861102 UBEs that purge submitted jobs don't remove the print definition files either.  What you end up with is a very large number of files that are not needed after the initial Print Immediate action.

Oracle's suggestion is to set KeepLogs=0 in the jde.ini, but that has the effect of not keeping the UBE logs after the job completes, and those logs are sometimes useful.  It has been suggested to development that either the 'temporary' print definition file be deleted after the file is used (preferred) or that the files be cleaned up as part of deleting a job from the submitted jobs list and/or as part of the submitted jobs purge UBE process.

Important Update (9/8/2011): Changing KeepLogs to 0 (in Server Manager for the Enterprise Server in question: Logging/Error and Debug Logging/Keep UBE Logs = Remove Logs Once Printed) and UBESaveLogfile to 1 (in Server Manager for the Enterprise Server in question: Batch Processing/Save Empty Debug Log = Keep Empty Debug Log) in the Enterprise Server jde.ini will cause EnterpriseOne to retain the UBE logs while removing the print definition file.  I'd still prefer that the software clean up after itself instead of hacking the UBE logs code, but this appears to solve the problem.  Now if only Oracle will make this the default setting.
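
The resulting jde.ini entries look something like this - the stanza placement is my assumption from typical jde.ini layouts, so let Server Manager make the change and simply verify the values afterward:

[DEBUG]
KeepLogs=0

[UBE]
UBESaveLogfile=1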

In the meantime, feel free to go to C:\JDE_HOME\targets\Enterprise_Server_enterpriseservername\temp on your Enterprise Server and remove those files.

Friday, September 2, 2011

EnterpriseOne JAS and JDEROOT Log Files on Enterprise Server

It is well known that JD Edwards EnterpriseOne splatters logs and temporary files all over the place.  As the application has gotten more complex with added components such as Web servers, Java Application servers, Server Manager, etc., the sheer number of log files generated in wildly varying locations has notably increased, and keeping track of and managing all these log files is a chore.

Here's an example:

Go to your Enterprise Server and open the C:\JDEdwards\E900\DDP\system\bin32 directory, the one that is supposed to contain only the E1 binary files.

Scroll down to the 'J's' and find the hundreds or thousands of jas_nnnn_yyyymmdd.log files (ex: jas_4844_20101114.log).  Delete them.

Now find the jderoot_nnn_yyyymmdd.log files (ex: jderoot_316_20101114.log).  Delete them.

These files are vestiges of some long-forgotten process and are neither needed nor useful.  A prize goes to whoever finds the largest number of jas and jderoot log files on their Enterprise Server.  My record is 14,634.

To ensure the log files no longer appear do the following:

Open the jdelog.properties file in the C:\JDEdwards\E900\DDP\system\classes directory on the Enterprise Server and comment the following sections:

Figure 1
[Screenshot: the jdelog.properties sections to comment out]

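Since the screenshot doesn't reproduce here, as a rough guide: jdelog.properties contains bracketed log definitions along these lines.  The exact section names and keys vary by tools release, so treat this sketch as illustrative and comment out whichever definitions write the jas and jderoot logs:

#[E1LOG]
#FILE=jderoot.log
#LEVEL=SEVERE
#FORMAT=APPS
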
Once this is done you may have to restart the E1 service to ensure the jdelog.properties file is read.

You could try to remove the log files in the jdelog.properties Logging section for the Enterprise Server in Server Manager but you'll get the error "The log configuration named 'E1LOG' is defined by the system and may not be removed." when you try to delete the E1LOG log file configuration.

Oracle has been made aware of the issue and they are going to remove this logging in future releases but until then it behooves you to delete the log files and prevent your system from creating them again.

So there's one set of log/temporary files gone.  More to follow.



Monday, August 29, 2011

EnterpriseOne 9.0 SQL Server Collation Issues


Starting in EnterpriseOne 9.0, the platform pack delivers pre-built databases that are simply attached; they were set to a collation of Latin1_General_CI_AS_WS when created at Oracle.  This collation is different from the default SQL Server collation and from the old (pre-9.0) collations.  Because of this, you will usually find a mix of collations on an E1 install, particularly on upgrades.  The collation for E1 databases will likely be different from the collation for tempdb and master.  This can cause problems in an EnterpriseOne system and could result in "Cannot resolve collation conflict" or "Cannot resolve the collation conflict" errors appearing in logs during upgrades or JOIN operations.

You can determine the collation of your system's databases with this code:

--SQL Script begin
SELECT name, collation_name FROM sys.databases
--SQL Script end


If you are on version 9.0 or above you will likely have different collations for your E1 databases and the SQL system databases.  It is possible that you may see different collations between E1 databases if they are upgraded databases.


Background

The reason for E1 databases being delivered with odd collations was a width sensitivity issue for Double-Byte languages.  The solution was to create the EnterpriseOne databases with a width-sensitive collation - Latin1_General_CI_AS_WS.  This solved the width sensitivity problem but has caused a large amount of grief for customers since it does not match SQL Server's default collation of SQL_Latin1_General_CP1_CI_AS on US English Windows servers.  (Heck, Latin1_General_CI_AS_WS doesn't match any language's default collation, guaranteeing collation conflicts.)  Oracle's recommendation is to install SQL Server with a non-default collation that will match the incoming E1 databases, which keep their collation when attached.  Installing SQL Server this way causes databases created on that instance (master, tempdb, etc.) to take the matching collation, eliminating collation conflicts with JOINs.

To determine your SQL Server collation:

--SQL Script begin
SELECT SERVERPROPERTY('Collation') AS 'Collation'
--SQL Script end



Possible Solutions

If one fails to install SQL Server with the non-default collation (which is highly likely) Oracle recommends changing the collation of the existing databases.  Changing a database's collation is no simple matter, however, and should be well thought out and planned.  Oracle recommends using R98403E, but there are several methods and scripts online that can help.
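
Be aware that simply altering the database collation does not finish the job: ALTER DATABASE ... COLLATE changes only the database default used for new objects, while existing table columns keep their old collation - which is exactly why R98403E and the scripts linked below exist.  A minimal sketch (the database name is made up):

--SQL Script begin
ALTER DATABASE JDE_DEV COLLATE Latin1_General_CI_AS_WS
--SQL Script end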

Here is a very informative page that includes links to scripts that can help you change database collations if you choose to do so: http://www.sqlserverclub.com/articles/understanding-sql-server-collation-sequences.aspx

Here is an MSDN blog telling you not to: http://blogs.msdn.com/b/qingsongyao/archive/2011/04/04/do-not-alter-database-collation-in-your-server.aspx


Workarounds

If you are seeing errors in jde logs indicating a collation conflict you may well have to change collation.  If you are having problems with your own query containing a JOIN that joins databases with different collations, you may wish to specify the collation in the query. Ex: JOIN JDE900.OL900.F9860 on (sys.objects.name = JDE900.OL900.F9860.SIOBNM COLLATE SQL_Latin1_General_CP1_CI_AS).

Here's an example where I utilize this method: http://jeffstevenson.karamazovgroup.com/2010/07/how-large-are-my-tables.html.
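
Fleshed out into a runnable sketch, the technique looks like this (matching Object Librarian names in F9860 against the current database's sys.objects; the COLLATE clause forces the comparison into one collation - here the common US default):

--SQL Script begin
SELECT o.name
FROM sys.objects o
JOIN JDE900.OL900.F9860 f
ON o.name = f.SIOBNM COLLATE SQL_Latin1_General_CP1_CI_AS
--SQL Script end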


I find the collation issue in E1 to be a general pain, but apparently it is necessary in order to deliver databases that handle the width sensitivity issue.  I personally think the platform pack code could be modified to determine whether the double-byte problem is present before splatting databases onto a server and creating collation conflicts.  I welcome any feedback on this issue.


More Information

Here's Oracle's explanation why:  https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=HOWTO&id=1267442.1

Another informative Oracle article that also discusses what happens to upgrade databases' collation:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=HOWTO&id=1271189.1

To get a list of all SQL Server collations (very useful):

--SQL Script begin
SELECT * FROM fn_helpcollations()
--SQL Script end



Thursday, July 28, 2011

LiteSpeed Object Level Recovery Performance Enhancement

I recently had the opportunity (if you'd call it that) to utilize LiteSpeed's Object Level Recovery functionality on a large (~500GB) database and was somewhat surprised at the poor performance.

An end user had run a report in the JD Edwards EnterpriseOne application with incorrect settings, corrupting a table to the point where we needed the data from prior to the report run. Object Level Recovery (OLR) seemed like just the thing.

After determining the date desired for the restore, I opened the OLR wizard in the LiteSpeed console, selected the appropriate full and differential backups and clicked through to view the backup contents to select the desired table. Then I waited...and waited, for hours and hours. It took so long that I stopped the process after five hours, thinking it must be hung. I re-started the process and let it run overnight; eventually the task completed and I was able to restore the table, but I was convinced that this was not a workable solution for the future.


Background

LiteSpeed does Object Level Recovery by building an OLR 'map' over the backup contents, which allows one to display a list of objects and then select individual tables for recovery. The building of this map is what was taking so long, as we are basically building an index over the backup files. Consultations with the vendor, Quest Software, indicated that they are aware of the performance issues with OLR, and they suggested either applying an OLR-related HotFix or upgrading to version 6.5, which includes the fix. Another suggestion was to perform the OLR map build during the backup, although support indicated that there might be a performance hit during backups.


Setup

I upgraded LiteSpeed to version 6.5 and performed some rudimentary tests which showed that improvements had been made but nothing significant. OLR map building/contents view was still taking quite a long time, especially when combining a full and a differential backup. Following support's suggestion, I enabled the map building during the backup by going to the maintenance plan properties for both the full and differential backup plans and setting "Optimize the Object Level Recovery speed of the backup" under the "LiteSpeed" section.
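
If you script backups with LiteSpeed's extended stored procedures rather than maintenance plans, I believe the equivalent switch is the @olrmap parameter to xp_backup_database - verify the parameter name against Quest's documentation for your version.  A sketch (database and file names are made up):

--SQL Script begin
EXEC master.dbo.xp_backup_database
@database = 'JDE_PRODUCTION',          -- hypothetical database name
@filename = 'E:\Backup\JDE_PROD.bkp',  -- hypothetical backup target
@olrmap = 1                            -- build the OLR map during the backup (assumed parameter)
--SQL Script end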

Figure 1
[Screenshot: LiteSpeed maintenance plan properties with "Optimize the Object Level Recovery speed of the backup" checked]

Testing

I saved the maintenance plan, tested backup times and was pleasantly surprised that there was no noticeable increase in time required to perform the backup. Now to test the OLR restore time.

I had expected an improvement but was stunned to find that LiteSpeed returned the object list in about 30 seconds! That is truly some kind of performance change - now Object Level Recovery is actually usable.


Summary

I would highly recommend that anyone using LiteSpeed for backups upgrade to version 6.5 and change their maintenance plans to optimize OLR. It doesn't seem to add any appreciable time during the backup and the gains during recovery are simply stunning.

Quest Software should make this the default setting.

Thursday, June 16, 2011

Windows Magic Performance Button

Who knew changing a single setting away from the default in Windows 2008 and later (still true as of Windows 2016) would result in a 70% increase in performance? That's exactly what happened recently at a client experiencing JD Edwards EnterpriseOne UBE performance issues during upgrade testing. We were able to make UBEs that previously ran in 10 minutes run in less than 3 minutes simply by changing a setting on only the SQL Server.


Background

Windows 2008 includes a new ACPI (Advanced Configuration and Power Interface) setting called Processor Power Management (PPM) that allows the CPU to change its performance state to one that lowers power consumption. When looking at the big picture this is a good thing: Microsoft is doing its part by setting a default that helps the world consume less energy and limit pollution. My clients however, are generally not as concerned about world energy consumption as they are about the performance of their system.

The problem is that the default setting for PPM of 'Balanced' makes the performance state of the CPU 'Adaptive'. A CPU in this adaptive state will throttle core frequency back to save power. This comes at the cost of lower performance. When higher demands are placed on the CPU it is supposed to react quickly and increase its core frequency to address the increased load. This makes sense - when performance is not needed the CPU essentially rests and saves power, when high performance is needed the CPU switches to a higher core frequency and performs optimally but uses more electricity. Unfortunately, the CPU cannot react quickly enough under the loads we typically see in an ERP system's servers (Database, Application and Web) to suitably increase performance.


Changing the Setting

Update (6/30/2011): You should probably visit the Testing section below first to determine if the Processor Power Management setting is already being controlled by BIOS, making the change to the OS unnecessary. If PPM is already being controlled by the BIOS, go ahead and leave it that way. If testing shows that the BIOS isn't controlling, make the changes below and tell the BIOS to allow the OS to control PPM.


We need to change the power plan setting from the default Balanced to High Performance.

Open Control Panel | Hardware | Power Options. You may see the following:

Figure 1
[Screenshot: Power Options showing Balanced selected, with the plan options greyed out]

Notice that the options are greyed out. Select 'Change settings that are currently unavailable' to enable the radio buttons.

Select 'High performance' and close Control Panel.

Figure 2
[Screenshot: Power Options with High performance selected]

You still have one step remaining to fully enable the High Performance power plan. In most modern servers, the BIOS will have a setting that either gives the BIOS control of Processor Power Management or allows the OS to control PPM. We want to set the BIOS to allow Windows 2008 to manage PPM. The BIOS setting will be different for HP, Dell, etc. so consult your manual or just use your noggin. Then test.
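
Incidentally, if you have many servers to change (or a Server Core box with no Control Panel), the power plan can also be switched from an elevated command prompt - SCHEME_MIN is the built-in alias for the High performance plan (the MIN refers to minimum power savings):

powercfg -list
powercfg -setactive scheme_min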


Testing

Checking whether your changes have taken effect is as simple as downloading the CPU-Z tool and comparing the measured core speed against the known specs of your CPU. What you are looking for is a measured core speed equal to or greater than the CPU's rated speed.

Here's a system that has the High Performance setting properly enabled:

Figure 3
[Screenshot: CPU-Z on a properly configured system, core speed above the rated speed]

You can see that the core speed is greater than the OS-reported CPU speed. This is what we want.

Here is a system that does not have the Processor Power Management set to High Performance:

Figure 4
[Screenshot: CPU-Z on a throttled system, core speed and multiplier below the rated values]

You'll notice that not only is the core speed lower than reported by the OS, but CPU-Z also reports a much lower multiplier than expected. The multiplier for the tested CPU should be 23 but CPU-Z shows it to be only 12. You can find your processor's proper multiplier here: http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors.  You may also be able to find your processor's multiplier on http://www.cpu-world.com/index.html or by simply Googling your processor's model number (ex: Xeon E5-2620) plus the word 'multiplier'.

This is a system not properly configured for High Performance PPM.


Conclusion

Significant performance gains can be realized by changing the Windows 2008 default setting for Processor Power Management. It is recommended that you do so.


Update (6/27/2011): Information on changing this setting for a large number of servers using Group Policy is here: http://greg.blogs.sqlsentry.net/2011/01/ensuring-maximum-cpu-performance-via.html. Note that you will still need to update each machine's BIOS to allow the OS to control PPM.

Update (8/18/2011): More information on power management for VMware is in the comments.


References

PPM in Windows Vista and Windows Server 2008
http://msdn.microsoft.com/en-us/windows/hardware/gg463252

Degraded overall performance on Windows Server 2008 R2 (Ya' think?)
http://support.microsoft.com/kb/2207548