Your EnterpriseOne system may be creating hundreds of thousands of files on your JD Edwards Enterprise Server in a location that is not readily apparent and not cleaning them up...ever.
During a UBE run of a version with Print Immediate turned on at the version level (a feature introduced in Tools Release 8.96), a 'temporary' print definition file (PostScript or PCL, depending on how you defined the associated printer) is created in the location specified by the netTemporaryDir= directive in the [JDENET] stanza of the Enterprise Server jde.ini.
The purpose of the file is to allow Print Immediate printing of the generated PDF on the printer defined in P98616|W98616O - Work With Default Printers. The problem is that there is no process to clean up the file after it has been used. Deleting the job from the submitted jobs list removes the job record and the PDF but leaves the print definition file behind, and the R9861101X/R9861102 UBEs that purge submitted jobs don't remove the print definition files either. What you end up with is a very large number of files that serve no purpose after the initial Print Immediate action.
Oracle's suggestion is to set KeepLogs=0 in the jde.ini, but that has the side effect of discarding the UBE logs after the job completes, and those are sometimes useful. It has been suggested to development that either the 'temporary' print definition file be deleted after it is used (preferred) or that the files be cleaned up as part of deleting a job from the submitted jobs list and/or as part of the submitted jobs purge UBE process.
Important Update (9/8/2011): Setting both of the following in the Enterprise Server jde.ini causes EnterpriseOne to retain the UBE logs while removing the print definition file:
KeepLogs=0 - in Server Manager for the Enterprise Server in question, Logging/Error and Debug Logging/Keep UBE Logs = Remove Logs Once Printed
UBESaveLogfile=1 - in Server Manager for the Enterprise Server in question, Batch Processing/Save Empty Debug Log = Keep Empty Debug Log
I'd still prefer that the software clean up after itself instead of hacking around the UBE logs code, but this appears to solve the problem. Now if only Oracle would make this the default setting.
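For reference, the resulting jde.ini fragment looks something like this. The stanza names shown, [NETWORK QUEUE SETTINGS] and [UBE], are where these settings typically live; verify them against your own jde.ini, or simply make the change through Server Manager as described above:

```ini
[NETWORK QUEUE SETTINGS]
KeepLogs=0

[UBE]
UBESaveLogfile=1
```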
In the meantime, feel free to go to C:\JDE_HOME\targets\Enterprise_Server_enterpriseservername\temp on your Enterprise Server and remove those files.
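Until Oracle cleans these up automatically, the removal itself is easy to script. Here's a minimal sketch in Python; the path in the usage comment is hypothetical, and the 7-day cutoff is my own conservative default, not anything Oracle prescribes:

```python
import time
from pathlib import Path

def purge_print_files(temp_dir, max_age_days=7):
    """Delete leftover PostScript/PCL print definition files older than
    max_age_days from the netTemporaryDir location. Returns the count removed."""
    temp_dir = Path(temp_dir)
    if not temp_dir.is_dir():
        return 0
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for f in temp_dir.iterdir():
        # Only touch print definition files, and only old ones, in case a
        # recently submitted job is still printing.
        if f.suffix.lower() in (".ps", ".pcl") and f.stat().st_mtime < cutoff:
            f.unlink()
            removed += 1
    return removed

# Hypothetical usage; substitute your server's actual temp path:
# purge_print_files(r"C:\JDE_HOME\targets\Enterprise_Server_SVR1\temp")
```

Schedule it with Windows Task Scheduler (or cron, for non-Windows Enterprise Servers) and the problem stays solved.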
Wednesday, September 7, 2011
Friday, September 2, 2011
EnterpriseOne JAS and JDEROOT Log Files on Enterprise Server
It is well known that JD Edwards EnterpriseOne splatters logs and temporary files all over the place. As the application has grown more complex, with added components such as Web servers, Java application servers and Server Manager, the sheer number of log files generated in wildly varying locations has increased notably, and keeping track of and managing all these log files is a chore.
Here's an example:
Go to your Enterprise Server and open the C:\JDEdwards\E900\DDP\system\bin32 directory, the one that is supposed to contain only the E1 binary files.
Scroll down to the 'J's and find the hundreds or thousands of jas_nnnn_yyyymmdd.log files (e.g., jas_4844_20101114.log). Delete them.
Now find the jderoot_nnn_yyyymmdd.log files (e.g., jderoot_316_20101114.log). Delete them.
These files are vestigial remnants of some long-forgotten process and are neither needed nor useful. A prize goes to whoever finds the largest number of jas and jderoot log files on their Enterprise Server. My record is 14,634.
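If you'd rather count (or clean) them from a script than scroll through Explorer, a quick sketch like this will do it; the helper name is mine, the directory in the usage comment is an example, and deletion is off by default:

```python
from pathlib import Path

def find_stray_logs(bin_dir, delete=False):
    """Count the jas_*.log and jderoot_*.log files cluttering system/bin32;
    optionally delete them. Returns how many were found."""
    bin_dir = Path(bin_dir)
    strays = [f for f in bin_dir.glob("*.log")
              if f.name.startswith(("jas_", "jderoot_"))]
    if delete:
        for f in strays:
            f.unlink()
    return len(strays)

# Hypothetical usage; count first, then delete once you're satisfied:
# find_stray_logs(r"C:\JDEdwards\E900\DDP\system\bin32")
# find_stray_logs(r"C:\JDEdwards\E900\DDP\system\bin32", delete=True)
```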
To ensure the log files no longer appear do the following:
Open the jdelog.properties file in the C:\JDEdwards\E900\DDP\system\classes directory on the Enterprise Server and comment out the sections that define these log files:
Once this is done you may have to restart the E1 service to ensure the jdelog.properties file is read.
You could try to remove the log files in the jdelog.properties Logging section for the Enterprise Server in Server Manager but you'll get the error "The log configuration named 'E1LOG' is defined by the system and may not be removed." when you try to delete the E1LOG log file configuration.
Oracle has been made aware of the issue and they are going to remove this logging in future releases but until then it behooves you to delete the log files and prevent your system from creating them again.
So there's one set of log/temporary files gone. More to follow.
Labels:
Administration,
EnterpriseOne,
JD Edwards,
Maintenance
Tuesday, August 17, 2010
Restarting the EnterpriseOne Queue Kernel
There is a way to restart just the UBE queue in EnterpriseOne. If you are faced with a situation where submitted jobs are not processing and the jobs are sitting in a waiting status you can get the queue restarted without bringing down the entire E1 system, allowing interactive users to continue their work.
In older (prior to 8.9) releases the queue was a separate Windows service called JDE Update 4 B733 Queue or something similar. This service could be stopped and started independently of what was then called JDE Update 4 B733 Network or the "Network" service, making it easy to restart the queues without bothering the main service. In the 8.9 release, however, the queue service was redesigned and turned into a kernel under the main E1 service, which greatly complicates a targeted restart of the queues but does not make it impossible. Here's how:
Go to the JDEdwards\E812\DDP\log directory and search for files containing the text QUEUE KERNEL. The file name gives you the queue kernel PID (e.g., jde_9188.log means PID 9188). You should have only one queue kernel. In Windows Task Manager, choose View/Select Columns and add the PID column, then kill that PID. The next time a job is submitted the queue kernel will be restarted.
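The log-file hunt can itself be scripted. This sketch (the log directory and the QUEUE KERNEL marker come from the steps above; the helper name and everything else are my own) scans the jde_<pid>.log files and reports the PID encoded in each matching file name:

```python
import re
from pathlib import Path

def find_queue_kernel_pids(log_dir):
    """Return PIDs of queue kernels by scanning jde_<pid>.log files
    for the QUEUE KERNEL marker; the PID is encoded in the file name."""
    pids = []
    for f in Path(log_dir).glob("jde_*.log"):
        m = re.fullmatch(r"jde_(\d+)\.log", f.name)
        if m and "QUEUE KERNEL" in f.read_text(errors="ignore"):
            pids.append(int(m.group(1)))
    return sorted(pids)

# Hypothetical usage; then kill the PID (Task Manager, or taskkill /PID <pid>)
# and the kernel restarts on the next job submission:
# find_queue_kernel_pids(r"C:\JDEdwards\E812\DDP\log")
```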
There are also ways to move jobs from one queue to another but that's a post for another day.
Labels:
Administration,
CNC,
JD Edwards
Sunday, January 3, 2010
Identify SQL Table Backups
Earlier we discussed methods for executing quick SQL table backups and performing quick SQL table restores as a way to mitigate risk to data during certain operations. In this article we are going to discuss some low-effort housekeeping methods to keep us from forgetting about the table backups we created.
While it is not a huge deal that a few table backups are hanging out in your databases, having a large number of these backups can make things disorganized and large table backups can take up space unnecessarily. In general it is a good habit to keep your environment clean, but how much time and effort are we willing to spend doing so? At some point the benefit of numerous housekeeping chores is outweighed by the time and effort of not only performing the chores but keeping track of them.
Therein lies the beauty of the instructions in this article: once created, the process of identifying old table backups is entirely automated. It is true that one still does have to manually remove the tables designated - we do not want the machines to have too much autonomy, but by using Transact-SQL functions, SQL Server Agent and SQL Database Mail we can have the database server send us a list of old table backups on a regular basis.
Configuration
The first step is to create the stored procedure that will identify the old table backups. This user stored procedure will be referenced by code in a SQL Agent job and is the heart of the process.
--SQL Script begin
USE MASTER
GO
if exists (select * from INFORMATION_SCHEMA.ROUTINES where SPECIFIC_NAME =
N'usp_TableBackupsOlderThan2Weeks')
DROP PROC usp_TableBackupsOlderThan2Weeks
GO
CREATE PROC usp_TableBackupsOlderThan2Weeks
as
exec sp_MSforeachdb
@command1='if (select count (*)
from [?].sys.objects
where name like ''F%[_]20%''
and type_desc not like ''FOREIGN_KEY_CONSTRAINT''
and DATEDIFF(day, modify_date, GETDATE()) > 14
or name like ''F%bak%''
and type_desc not like ''FOREIGN_KEY_CONSTRAINT''
and DATEDIFF(day, modify_date, GETDATE()) > 14) > 0
BEGIN
print ''?''
select cast (db_name (DB_ID(''?'')) + ''.'' + [?].sys.schemas.name + ''.'' + [?].sys.objects.name as char(55)) as ''Table Name'',
cast ([?].sys.objects.create_date as char (25)) as ''Created'',
cast ([?].sys.objects.modify_date as char (25)) as ''Modified''
from [?].sys.objects
JOIN [?].sys.schemas on [?].sys.objects.schema_id = [?].sys.schemas.schema_id
where [?].sys.objects.name like ''F%[_]20%''
and [?].sys.objects.type_desc not like ''FOREIGN_KEY_CONSTRAINT''
and DATEDIFF(day, [?].sys.objects.modify_date, GETDATE()) > 14
or [?].sys.objects.name like ''F%bak%''
and [?].sys.objects.type_desc not like ''FOREIGN_KEY_CONSTRAINT''
and DATEDIFF(day, [?].sys.objects.modify_date, GETDATE()) > 14
order by ''Table Name''
Print ''
''
END'
--SQL Script end
The two most important parts of the above code are:
where name like ''F%[_]20%''
and
or name like ''F%bak%''
These are the sections that modify the SELECT statement with a WHERE clause that uses pattern matching and SQL wildcard characters to choose records matching the names of table backups created by our earlier script, which produces tables named something like JDE_PRODUCTION.PRODDTA.F0101_200906081807. Note that in the first example the underscore character is not being used as a SQL wildcard; I am actually looking for a literal underscore, which is why it is enclosed in brackets.
I have also included a pattern match for tables that begin with 'F' and contain the string 'bak', a typically used naming convention for EnterpriseOne table backups. You can add or remove additional clauses to tune the query for your particular environment but if all you are using to produce quick table backups is my Quick SQL Table Backups script, then the above will be sufficient.
Another important part of the code is the '> 14' portion of each WHERE clause section. This dictates that we want records returned only for tables that are older than two weeks. Feel free to modify this value to suit your needs.
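To sanity-check which table names the WHERE clause will catch before scheduling anything, the filter is easy to mimic outside SQL Server. Here's a rough Python stand-in for the LIKE patterns and the DATEDIFF age test (the function is mine and only approximates the T-SQL semantics; the table names below are illustrative):

```python
import re
from datetime import datetime

def is_old_backup(name, modified, now, max_age_days=14):
    """Approximate the T-SQL filter: name LIKE 'F%[_]20%' OR name LIKE
    'F%bak%', and modified more than max_age_days before now."""
    looks_like_backup = bool(re.match(r"F.*_20", name) or
                             re.match(r"F.*bak", name))
    return looks_like_backup and (now - modified).days > max_age_days

now = datetime(2009, 12, 31)
assert is_old_backup("F0101_200906081807", datetime(2009, 6, 8), now)        # timestamp suffix
assert is_old_backup("F986101_bak2", datetime(2009, 2, 6), now)              # 'bak' suffix
assert not is_old_backup("F0101_200912301100", datetime(2009, 12, 30), now)  # too recent
assert not is_old_backup("F0101", datetime(2009, 6, 8), now)                 # a real table
```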
The second configuration step is to create the notification delivery mechanism - a combination of a SQL Agent job and SQL Database Mail. If you have not already configured Database Mail, here is a very good article on how to do so.
Create a SQL Agent job named E1_Identify SQL Table Backups Older Than 2 Weeks (or whatever time period you specified in the WHERE clause section).
Schedule the job to run every month, preferably on the same day every month. I choose the second Monday of every month for mine but again, change to suit your needs.
Create a job step called Send Mail or something suitably witty or descriptive; the name doesn't matter much. Specify Transact-SQL as the type and use the following code as the command:
--SQL Script begin
EXEC databaseservername.msdb.dbo.sp_send_dbmail
@profile_name = 'default',
@recipients = 'email.address@domain.com;email.address2@domain.com',
@subject = 'Table Backups Older Than 2 Weeks',
@query = 'exec master.dbo.usp_TableBackupsOlderThan2Weeks',
@body = 'These table backups are older than 2 weeks and can be removed:
'
--SQL Script end
Be sure to change the placeholder values above (the server name and the email addresses) to ones that make sense for your system. The 'profile_name' variable should match an existing Database Mail profile that you wish to use to send the email.
Note that the 'query' variable references the user stored procedure we created in the first configuration step. Also note that there is a full blank line in the value of the 'body' variable; it exists for email formatting, and the email is quite ugly without it.
Speaking of formatting, we can now look at what we will receive once a month in our email.
Results
The values specified in the sp_send_dbmail variables mean that we will receive an email with the subject 'Table Backups Older Than 2 Weeks', an initial body 'These table backups are older than 2 weeks and can be removed:' and the results of the query 'exec master.dbo.usp_TableBackupsOlderThan2Weeks'.
The stored procedure specified in the query, master.dbo.usp_TableBackupsOlderThan2Weeks, makes use of the undocumented stored procedure sp_MSforeachdb which means that we will get the list of tables matching the patterns specified in the WHERE clauses grouped by database. Databases with no tables matching the specified patterns will not be included in the result set.
The emailed results will look something like this:
These table backups are older than 2 weeks and can be removed:

JDE812

Table Name                     Created Date/Time         Modified Date/Time
------------------------------ ------------------------- -------------------------
dbo.F9006_bak                  Mar 25 2009  4:11PM       Mar 25 2009  4:11PM
SVM812.F986101_bak2            Feb  6 2009  7:14PM       Feb  6 2009  7:14PM
SY812.F986101_200907151136     Jul 15 2009 11:36AM       Jul 15 2009 11:36AM

JDE_PRODUCTION

Table Name                     Created Date/Time         Modified Date/Time
------------------------------ ------------------------- -------------------------
dbo.F98865_bak                 Dec 17 2008  7:15PM       Dec 17 2008  7:15PM
PRODDTA.F0011_200910061837     Oct  6 2009  6:37PM       Oct  6 2009  6:37PM

JDE_DV812

Table Name                     Created Date/Time         Modified Date/Time
------------------------------ ------------------------- -------------------------
DV812.F983051_200912021511     Dec  2 2009  3:11PM       Dec  2 2009  3:11PM

JDE_DEVELOPMENT

Table Name                     Created Date/Time         Modified Date/Time
------------------------------ ------------------------- -------------------------
TESTDTA.F989998_200910230855   Oct 23 2009  8:55AM       Oct 23 2009  8:55AM
TESTDTA.F989999_200910230855   Oct 23 2009  8:55AM       Oct 23 2009  8:55AM
As you can see, the email contains tables that match the patterns specified in the WHERE clauses and are older than 2 weeks. The tables are grouped by database and ordered by schema name.table name with the columns Created Date/Time and Modified Date/Time provided as additional information.
Summary
A certain amount of housekeeping is necessary to keep your SQL Server installations organized. Keeping track of such tasks can be burdensome, however, so we'd like to use SQL Server's own features to put as much of the housekeeping as possible on autopilot. Using Transact-SQL code, SQL Server Agent and Database Mail, we can be notified on a periodic basis when there are table backups that can be removed.
Related postings:
Quick SQL Table Backup
http://jeffstevenson.karamazovgroup.com/2009/06/quick-sql-table-backup.html
Quick SQL Table Restore
http://jeffstevenson.karamazovgroup.com/2009/12/quick-sql-table-restore.html
Labels:
Administration,
Backup,
Maintenance,
SQL Server
Monday, December 7, 2009
Quick SQL Table Restore
A while back I discussed a method to do quick SQL table backups. I usually create backups of tables prior to taking an action that has the potential to create the need to restore that table's data. It's just a good idea, is easier than taking a full backup and gives you a readily available source of the original data should something go wrong with the changes you make.
Not that it has ever happened to me...but occasionally the need may arise to restore this data to the table you just butchered.
If you used the script in quick SQL table backups you ended up with a table backup named something like PRODDTA.F0101_200912071424 with PRODDTA being the schema, F0101 the table name and 200912071424 representing the date and time as YYYYMMDDHHMM.
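Since the backup name encodes everything, the schema, table and creation time can be recovered mechanically. A small sketch (the parsing helper is mine; the name format is the one produced by the backup script):

```python
from datetime import datetime

def parse_backup_name(qualified_name):
    """Split a backup name like PRODDTA.F0101_200912071424 into
    (schema, table, created), where the suffix is YYYYMMDDHHMM."""
    schema, rest = qualified_name.split(".", 1)
    table, stamp = rest.rsplit("_", 1)
    return schema, table, datetime.strptime(stamp, "%Y%m%d%H%M")

# parse_backup_name("PRODDTA.F0101_200912071424")
# -> ('PRODDTA', 'F0101', datetime(2009, 12, 7, 14, 24))
```

Handy when deciding which of several timestamped backups of the same table is the one you want to restore from.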
We can use INSERT INTO to restore the data from this backup table to the original table but it requires us to truncate the original table, deleting all existing records. The INSERT command appends records and any unique constraints in place on the original table will be observed, resulting in a "Violation of PRIMARY KEY constraint" error if you attempt to restore the data without clearing the original table.
Truncating any table is not a task to be lightly undertaken and the pucker factor can be pretty high. Relax though, we do have a copy of the data in a table backup, right?
To clear the original table we use the TRUNCATE command in the form:
TRUNCATE TABLE databasename.schemaname.originaltablename
--SQL Script begin
TRUNCATE TABLE JDE_PRODUCTION.PRODDTA.F0101
--SQL Script end
With the original table suitably cleared we can move forward with putting the backup table's data back into the original using this form:
INSERT INTO databasename.schemaname.originaltablename SELECT * from databasename.schemaname.backuptablename
--SQL Script begin
INSERT INTO JDE_PRODUCTION.PRODDTA.F0101 SELECT * from JDE_PRODUCTION.PRODDTA.F0101_200912071424
--SQL Script end
That takes care of getting the data back into the table, but we have one last step to complete the recovery - rebuilding indexes and updating statistics. While the data restored to the original table is exactly the same as what existed before, the storage engine needs to be re-taught where the data lives by rebuilding the B-tree for each of the original table's indexes.
Since rebuilding indexes also accomplishes the goal of updating statistics we are going to execute the index rebuild only in this form:
ALTER INDEX ALL ON databasename.schemaname.originaltablename
REBUILD
--SQL Script begin
ALTER INDEX ALL ON JDE_PRODUCTION.PRODDTA.F0101
REBUILD
--SQL Script end
That completes our quick SQL table restore. You're back up and running with minimal interruption.
Related postings:
Quick SQL Table Backup
http://jeffstevenson.karamazovgroup.com/2009/06/quick-sql-table-backup.html
Identify SQL Table Backups
http://jeffstevenson.karamazovgroup.com/2009/12/identify-sql-table-backups.html
Labels:
Administration,
Backup,
SQL Server
Thursday, December 3, 2009
Data Selection Security in EnterpriseOne
One of the newest types of security for JD Edwards EnterpriseOne is Data Selection security. Using Data Selection security, CNC administrators can secure users from modifying, adding, deleting, and viewing the data selection for batch applications or specific versions of batch applications.
Data Selection security was made available with Tools Release 8.98 Update 1 (8.98.1.0) and has a minimum application release level requirement of 8.12. The functionality also requires that Tools Baseline ESU JK17733 or newer be applied.
Data Selection is already disallowed for versions secured with an "old style" version security value of 1 where the Last Modified User is the only one who can change the version. Typically, the XJDE and ZJDE versions are delivered with this security value. However, for custom versions that are not secured in this manner the new Data Selection security can be used to gain fine-grained control over the actions that are to be allowed for Data Selection.
Some important points to keep in mind when considering Data Selection security:
- Data Selection security applies to data selection during submission of a batch application or report.
- Data Selection security is enforced only for end users submitting batch applications or reports from a web client.
Enabling
Data Selection security is enabled when the application release is at 8.12 or higher, Tools Release 8.98.1.0 has been installed and Tools Baseline ESU JK17733 or newer has been applied. Once Data Selection security is enabled you will see a new Hyper Exit button in Work With User/Role Security (P00950).
If you do not see the button, chances are that you have not met one or more of the requirements mentioned above. The form used for Data Selection security is the same one formerly used solely for Processing Option Security.
Setup and Utilization
There are four Data Selection security options - Prompt for Data Selection, Full Access Data Selection, Modify Data Selection and Add Data Selection. Prompt for Data Selection is the most restrictive, keeping the user from even seeing the Data Selection. Full Access Data Selection prevents a user from deleting existing Data Selection rows, Modify Data Selection prevents changing existing criteria, and Add Data Selection prevents adding new criteria.
Prompt for Data Selection
When only the Prompt for Data Selection option is selected the user will still be able to select the "Data Selection" check box but will receive the following error:
Full Access Data Selection
The next most restrictive option is Full Access Data Selection. This option prevents a user from having a full set of the editing capabilities on the data selection screen.
When only the Full Access Data Selection option is selected, the user will be able to modify values for existing data selection rows and to add data selection rows with the AND operator but not the OR operator. The user will not be able to delete existing rows.
Modify Data Selection
When the Full Access Data Selection and Modify Data Selection options are selected, the user will not be able to modify values for existing data selection rows but will be able to add data selection rows with the AND operator (not the OR operator). The user will not be able to delete existing rows.
Add Data Selection
When the Full Access Data Selection and Add Data Selection options are selected, the user will be able to modify values for existing data selection rows but will not be able to add data selection rows. The user will not be able to delete existing rows.
Modify Data Selection plus Add Data Selection
When the Full Access Data Selection, Modify Data Selection and Add Data Selection options are selected, the user will not be able to modify values for existing data selection rows, will not be able to add data selection rows, and will not be able to delete existing rows. This is essentially a read-only configuration for Data Selection.
Options Summary
Full Access Data Selection: modify existing rows; add rows (AND only); no deletes
Full Access Data Selection + Modify Data Selection: no modifying existing rows; add rows (AND only); no deletes
Full Access Data Selection + Add Data Selection: modify existing rows; no adding rows; no deletes
Full Access Data Selection + Modify Data Selection + Add Data Selection: read-only
Summary
Data Selection security is another security type to be used by CNC administrators or consultants to lock down batch versions data selection during submission in the EnterpriseOne web client. It should be implemented as a part of a larger effort to secure batch processing and in such a manner as to maintain consistency with your organization's security practices and methods.
More information can be found in Oracle Document ID # 814174.1, JD Edwards EnterpriseOne Tools 8.98 Update 1 Batch Application Data Selection Security.
Subscribe to Jeff Stevenson's Technology Blog - Get an email when new posts appear
Data Selection security was made available with Tools Release 8.98 Update 1 (8.98.1.0) and has a minimum application release level requirement of 8.12. The functionality also requires that Tools Baseline ESU JK17733 or newer be applied.
Data Selection is already disallowed for versions secured with an "old style" version security value of 1 where the Last Modified User is the only one who can change the version. Typically, the XJDE and ZJDE versions are delivered with this security value. However, for custom versions that are not secured in this manner the new Data Selection security can be used to gain fine-grained control over the actions that are to be allowed for Data Selection.
Some important points to keep in mind when considering Data Selection Security:
- Data Selection security applies to data selection during submission of a batch application or report.
- Data selection security is enforced only for end users submitting batch applications or reports from a web client.
Enabling
Data Selection security is enabled when the application release is at 8.12 or higher, Tools Release 8.98.1.0 has been installed and Tools Baseline ESU JK17733 or newer has been applied. Once Data Selection security is enabled you will see a new Hyper Exit button in Work With User/Role Security (P00950).
If you do not see the button, chances are that you have not met one or more of the requirements mentioned above. The form used for Data Selection security is the same one formerly used solely for Processing Option Security.
Setup and Utilization
There are four different Data Selection security options - Prompt for Data Selection, Full Access Data Selection, Modify Data Selection and Add Data Selection.
Prompt for Data Selection is the most restrictive, preventing the user from even seeing the Data Selection.
Full Access Data Selection prevents a user from deleting existing Data Selection rows.
Modify Data Selection prevents expanding or changing existing criteria.
Add Data Selection prevents a user from adding new Data Selection criteria.
Prompt for Data Selection
When only the Prompt for Data Selection option is selected the user will still be able to select the "Data Selection" check box but will receive an error.
Full Access Data Selection
The next most restrictive option is Full Access Data Selection. This option prevents a user from having the full set of editing capabilities on the Data Selection screen.
When only the Full Access Data Selection option is selected the user will be able to modify values for existing data selection rows and add data selection rows with AND operator but not OR operator. The user will not be able to delete existing rows.
Enabling the Full Access Data Selection option allows the use of two more options that can be used to further restrict Data Selection - Modify Data Selection and Add Data Selection. The Full Access Data Selection, Modify Data Selection and Add Data Selection options can be used in any combination to provide the desired level of Data Selection security.
Modify Data Selection
When the Full Access Data Selection and Modify Data Selection options are selected the user will not be able to modify values for existing data selection rows but will be able to add data selection rows with AND operator but not OR operator. The user will not be able to delete existing rows.
Add Data Selection
When the Full Access Data Selection and Add Data Selection options are selected the user will be able to modify values for existing data selection rows but will not be able to add data selection rows. The user will not be able to delete existing rows.
Modify Data Selection plus Add Data Selection
When the Full Access Data Selection, Modify Data Selection and Add Data Selection options are selected the user will not be able to modify values for existing data selection rows and will not be able to add data selection rows. The user will not be able to delete existing rows. This is essentially a read-only configuration for Data Selection.
Options Summary
Prompt for Data Selection
- Cannot see or change data selection
Full Access Data Selection
- Can modify values for existing data selection rows
- Can add data selection rows with AND operator but not OR operator
- Cannot delete existing rows
Full Access Data Selection + Modify Data Selection
- Cannot modify values for existing data selection rows
- Can add data selection rows with AND operator but not OR operator
- Cannot delete existing rows
Full Access Data Selection + Add Data Selection
- Can modify values for existing data selection rows
- Cannot add data selection rows
- Cannot delete existing rows
Full Access Data Selection + Modify Data Selection + Add Data Selection
- Cannot modify values for existing data selection rows
- Cannot add data selection rows with AND operator but not OR operator
- Cannot delete existing rows
- Read-only
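The combinations above can be condensed into a quick-reference lookup. Here is a minimal sketch in Python; the shortened option names and the function are purely illustrative shorthand for the summary above, not anything in EnterpriseOne itself:

```python
# Quick-reference encoding of the option combinations above.
# Shorthand: "Full" = Full Access Data Selection, "Modify" = Modify Data
# Selection, "Add" = Add Data Selection.
# Each value is (can_modify_values, can_add_and_rows, can_delete_rows).
# Note the inverted semantics: checking "Modify" or "Add" REMOVES that ability.
DATA_SELECTION_CAPS = {
    frozenset({"Full"}):                  (True,  True,  False),
    frozenset({"Full", "Modify"}):        (False, True,  False),
    frozenset({"Full", "Add"}):           (True,  False, False),
    frozenset({"Full", "Modify", "Add"}): (False, False, False),  # read-only
}

def capabilities(checked_options):
    """Return the capability tuple for a set of checked security options."""
    return DATA_SELECTION_CAPS[frozenset(checked_options)]
```

For example, checking all three options yields no modify, add, or delete capability, matching the read-only configuration described above.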
Summary
Data Selection security is another security type to be used by CNC administrators or consultants to lock down batch versions data selection during submission in the EnterpriseOne web client. It should be implemented as a part of a larger effort to secure batch processing and in such a manner as to maintain consistency with your organization's security practices and methods.
More information can be found in Oracle Document ID # 814174.1 JD Edwards EnterpriseOne Tools 8.98 Update 1 Batch Application Data Selection Security
Labels:
Administration,
EnterpriseOne,
JD Edwards,
Security
Wednesday, September 16, 2009
EnterpriseOne User Specific Dynamic Logging with WebSphere Network Deployment
Denver's Server Manager product is the new(er) method for managing all aspects of EnterpriseOne servers - monitoring, configuration, tuning, logging, etc. CNC administrators can now modify settings, see what's happening on their system and do logging all in one interface. In particular, the logging enhancements are worthy of mention. I will show that while Server Manager is a great tool, Oracle still has a ways to go for full integration with WebSphere in general and WebSphere Network Deployment specifically.
A new function for user specific, dynamic logging allows the administrator to:
- Log an individual JAS session without impacting other users.
- Begin and end this logging without having to stop and start the entire JAS instance.
The concept is great and I applaud the developers for this functionality. However, as with some other functions in Server Manager there are issues when one is utilizing WebSphere Network Deployment.
Network Deployment uses a centralized Deployment Manager node that controls a cell into which individual remote WebSphere nodes are federated. Any changes to configuration files are controlled by the Deployment Manager and must be propagated to the remote nodes. This appears to be where the problem starts with user specific dynamic logging.
Let's take a look at the steps needed to reproduce the situation and what we can do to work around the issue until Oracle fixes it:
Enable user specific dynamic logging by selecting Create New User Specific Log Configuration and entering the user name of the user for whom you wish to enable logging.

Modify for Verbose logging and Threads and apply.

This action will write the logging information to the jdelog.properties file in the target's config directory but will not write it to the actual jdelog.properties file on the WebSphere node.


User specific logging is not actually occurring yet, even though Server Manager shows no error to indicate this and, according to the documentation, the logging should begin immediately.
What's happening is that the jdelog.properties file on the remote node is not getting updated and therefore the node has no information on the user specific logging.
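One way to confirm the mismatch yourself is to compare the two copies of jdelog.properties directly. A minimal sketch follows; the stand-in temp files take the place of Server Manager's local copy and the node's live copy, whose real paths depend on your install:

```python
# Stand-in files simulate the two copies of jdelog.properties: the one
# Server Manager wrote locally and the stale one on the WebSphere node.
# In a real environment you would point these paths at the actual files.
import filecmp
import os
import tempfile

workdir = tempfile.mkdtemp()
local_copy = os.path.join(workdir, "jdelog.properties.local")
node_copy = os.path.join(workdir, "jdelog.properties.node")

with open(local_copy, "w") as f:
    f.write("# user-specific logging entry written by Server Manager\n")
with open(node_copy, "w") as f:
    f.write("")  # the node never received the update

# shallow=False forces a byte-for-byte comparison of file contents
in_sync = filecmp.cmp(local_copy, node_copy, shallow=False)
print("in sync" if in_sync else "node copy is stale - synchronize the configuration")
```

If the comparison comes back false, the node has not received the logging change and user specific logging is not active there.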
We must use the synchronize function in Server Manager to propagate the changes and make user specific logging work in a Network Deployment environment, since the Synchronize Node command, among other things, copies the configuration files (jdelog.properties in this case) to the remote node(s).
If you select the HTML managed instance for which you set up the logging you will be prompted to synchronize the configuration.

This is your first indication that the local and the remote configuration files are not in agreement.
Go ahead and synchronize the configuration but be warned that as of right now, selecting the Synchronize Configuration button will expire all user sessions for that JAS instance without warning from Server Manager. (Note: Apparently fixed in TR 9.1. Synchronize Configuration no longer restarts the instance unexpectedly, but one must now manually restart the instance to see the changes.)
After synchronization completes you can see that the jdelog.properties file on the remote node now contains the individual logging settings.

Oracle has been made aware of the issue but I do not foresee a quick fix. In the meantime, you can manually synchronize using the button, but remember that doing so will kick your users off; Oracle has been made aware of this issue as well. It's not quite as "dynamic" as intended but will allow you to do user specific logging.
Labels:
Administration,
CNC,
EnterpriseOne,
JD Edwards,
Oracle,
Server Manager
Tuesday, March 24, 2009
Change Integrated Solutions (WebSphere) Console Timeout
In IBM's Integrated Solutions Console (formerly known as WebSphere Console), the administrative interface for WebSphere 6.1, the default console user inactivity timeout is 30 minutes. I happen to think this is a bit short, particularly since most anyone using the console is a highly trusted user, generally an IT administrator who is well-versed in computer and network security practices.
For this reason, and since I find it such a hassle to come back to the console after a short time and be told that "Your session has become invalid", I change the timeout to something I think is more reasonable, like 720 minutes.
Please note that I am referring to the timeout for the admin console, not session timeouts.
If you are using WebSphere Network Deployment (and you should be) edit this attribute in the following file on the Network Deployment machine:
C:\Program Files\IBM\WebSphere\AppServer\profiles\Dmgr01\config\cells\NetworkDeploymentservernameCell01\applications\isclite.ear\deployments\isclite\deployment.xml
Set the invalidationTimeout attribute to the desired value, in minutes; the special value -1 means do not time out.
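For reference, the attribute lives on the session manager's tuningParams element inside deployment.xml. A trimmed, illustrative fragment only - the surrounding elements and the other attributes (which vary by WebSphere release) are omitted:

```xml
<!-- Illustrative fragment only: xmi attributes and surrounding elements
     omitted. invalidationTimeout is in minutes; -1 means never time out. -->
<tuningParams invalidationTimeout="720"/>
```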
Restart the WebSphere service on the Network Deployment machine.
If you are not using Network Deployment... you should be, so go implement ND and follow the directions above.
If you are still on WebSphere 6 (and maybe 5):
Edit the ${WAS_HOME}/systemApps/adminconsole.ear/deployment.xml file to change the invalidationTimeout attribute value to the desired session timeout. The default is 30.
Restart the application service.
Labels:
Administration,
Configuration,
WebSphere