In September 2016 BMC released version 7.9 of the Atrium Orchestration Platform, containing a number of enhancements. This post discusses some of the major improvements contained within version 7.9.
Support for Remedy Single Sign On (RSSO)
In the previous version of AO (7.8) a new authentication method using Atrium Single Sign On (ASSO) was introduced. This required a separate installation and configuration of ASSO in addition to the standard AO components. When signing in to AO Grid Manager via the web interface, control was passed to the ASSO web page where the AO credentials were entered. Following successful authentication, control was then passed back to the standard Grid Manager web pages. The use of ASSO in this release introduced added complexity to the AO environment, as discussed in a previous blog.
In this release of AO (7.9) BMC have included support for RSSO. During installation the user is able to choose between using an external RSSO instance, or to install an embedded RSSO instance for each AO component. The external RSSO instance would typically be part of an existing ITSM Remedy instance, which represents a common use case for AO. Logging in to AO no longer involves being directed to a separate authentication page. This is a very welcome enhancement to the SSO authentication mechanism introduced in the previous version.
RSSO must be at version 9.1.01.001 or higher.
- Version 7.6.03 can be upgraded to version 7.9 and is able to take advantage of RSSO, either embedded or external.
- Version 7.7/7.8 can be upgraded to version 7.9, but is not able to take advantage of RSSO. These systems will continue to use ASSO.
Systems currently running version 7.7/7.8 will therefore miss out on the RSSO enhancement when upgrading to version 7.9, as they are forced to continue using ASSO. Having said that, it is a fairly simple process to save existing AO modules and completely reinstall the AO environment with version 7.9.
Windows 10 support for Development Studio
Development Studio (not AO Platform) can now be installed on Windows 10 desktop.
Support for GitHub
The content and repository versioning tool GitHub is now supported in Development Studio. GitHub is not packaged within the product itself, so a GitHub server and repository must be set up separately.
The support for RSSO in this release is a welcome enhancement to the SSO authentication mechanism introduced in the previous release. Many AO use cases include an interface with ITSM Remedy which already has its own RSSO instance. AO is now able to authenticate through that existing RSSO instance. The embedded RSSO option allows for a simple solution where no external RSSO exists.
Please contact us to find out how to get more from Atrium Orchestrator
In July 2015 BMC released version 7.8 of the Orchestration Platform. As with the previous version (7.7.x) this version requires Atrium SSO be installed as a prerequisite.
In line with many other members of the AO community, I have recently moved to version 7.8 from the popular version 7.6.03, making this my first experience of working with SSO. In a previous post “What’s new in AO 7.8” I introduced the main enhancements in this release. Having now completed numerous test installations, the goal of this post is to:
- Describe the installation of SSO and AO 7.8
- Show that the installation and configuration of Atrium SSO need not be a major concern for those new to the product.
Single Sign On
From AO version 7.7 onwards, Atrium Single Sign On (SSO) replaces AO’s Access Manager and is a prerequisite to installing AO. This represents a major departure from previous versions of AO and can add to the complexity of the installation and configuration. However, during tests I have discovered it need not be as problematic as it first appears.
Of course, in your environment, SSO may already be installed and in use as part of an existing ITSM system for example. In this case you will simply need to point to that SSO instance when installing the components of AO Version 7.8. However, remember that because SSO requires the use of a Fully Qualified Domain Name (FQDN) to integrate with different servers, the AO components must be installed by specifying the FQDN, rather than the IP or hostname as was possible in the past.
If however, you are installing SSO only as part of an AO installation and, like me, this is your first experience of SSO, you may benefit from the experiences I gained during these tests.
I tested clean installs of version 7.8 and also upgrades from versions 7.6.03 and 7.7.02, on Windows Server 2008 R2 running Java version 1.8.0_60. BMC’s advice is to install SSO on a separate machine, but in my test environment it was installed on the AO CDP server.
New clean install of AO 7.8
Firstly, we have to install SSO (Version 9.0). This is the version that is certified for use with AO 7.8.
Ensure your environment meets the minimum required specification.
Since most readers will be familiar with BMC product installations this post will not show screenshots from every stage of the installation.
I accepted all of the default options during the SSO installation. Ensure that the Hostname entry is in FQDN format. To achieve this on my test VM, it was necessary to modify the computer name and DNS suffix, so that the full computer name was in the required FQDN format as shown below. It may also be necessary to specify port numbers as alternatives to the defaults, depending on your environment.
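If you want to check the name your machine will report before running the installer, a few lines of Python will do it. The `is_fqdn` helper below is ours, a rough heuristic rather than a full RFC check:

```python
import socket

def is_fqdn(name: str) -> bool:
    """Rough check that name looks fully qualified: at least one dot
    separating a hostname label from a domain suffix, no empty labels."""
    labels = name.rstrip(".").split(".")
    return len(labels) >= 2 and all(labels)

# socket.getfqdn() returns the name the OS reports for this machine;
# after setting the DNS suffix it should come back fully qualified.
print(socket.getfqdn(), is_fqdn(socket.getfqdn()))

assert is_fqdn("aocdp.example.local")
assert not is_fqdn("aocdp")
```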
Following successful installation of SSO, copy the URL displayed on the final screen as shown below and paste to a browser on your machine.
We are then presented with the SSO login page:-
Login using the userid amadmin, and the password set during the installation. We are then presented with the following page:-
Notice at this point, there is only one active session – the one we have just initiated by logging in as amadmin, and there are currently no agents in the Agent’s List, since no products have at this point registered with SSO.
SSO is designed to work with numerous authentication mechanisms including LDAP, Kerberos, CAC, SecurID, AR and Internal LDAP, but in this case it is being installed only for AO, therefore it is sufficient to leave the authentication mechanism as the default - Internal LDAP.
We are now ready to install the AO components, each of which will register as agents with SSO.
As with previous versions of AO, install the Repository first, (remember the old Access Manager component is no longer present), then install the CDP and any additional components such as HACDP, LAP, OCP etc.
During the installation of the Repository and CDP you will be required to enter the details of the SSO instance, to allow these AO components to register with SSO.
In my test environment I installed only the Repository and a single CDP.
As part of the installation, the userid aoadmin is automatically created in both AO and SSO. We will use this userid for initial access.
Following installation, the Repository and CDP components are visible in the Agent’s List in SSO as shown below:-
We are now ready to log on to AO for the first time. Log off from SSO, restart your browser then enter the url for the AO grid manager.
Remember that you will immediately be redirected to SSO for authentication, so after entering your AO url, the first page you will be presented with is the SSO login screen. Authenticate using the userid aoadmin, with the default password admin123.
After successful authentication we are presented with the familiar Grid Manager page:-
Permissions - Users and Groups
As described in the on-line BMC documentation, AO permission groups must be created in both AO and in SSO.
Following initial installation, the aoadmin user and group are automatically created in both AO and SSO. The aoadmin user is hardcoded within AO.
In addition, the Default group is created in AO. The aoadmin group in AO is also hardcoded and, in line with BMC advice, should only be used to unlock the grid if all permissions are accidentally removed. BMC also advise that the following permissions should be removed from the Default group: Development Studio, Grid Administration, and Grid Management.
Therefore, after installation, follow the initial steps below to set up a new AO administrator user and reduce permissions on the default group:-
- Log in to AO as aoadmin. From Administration – Grid Permissions, create a new Admin group by copying the existing AoAdmin group
- Log out of AO
- In SSO create the equivalent Admin Group. Create a new administrator user and assign it to the new Admin group
- Log in to AO using the new administrator userid. Confirm that this user has full administrative permissions within AO
- Remove Development Studio, Grid Administration, and Grid Management permissions from the Default group
We now have a new administrator user and no longer require the hardcoded aoadmin user unless permission recovery is required. We have also lowered permissions on the default group to read only (since all users are assigned to the default group).
Continue to add users and groups dependent on your requirements.
Upgrade from AO 7.6.03 to AO 7.8
Upgrading the AO environment from 7.6.03 or 7.7.n is a little more complex than a clean install. This is partly because we have the option to migrate the existing AO users and roles using a migration tool (available on the EPD).
The BMC approved method of upgrade is as follows:-
- Shutdown all AO services
- Install and run the migration tool (export)
- Install SSO version 9
- Upgrade Repository to 7.8
- Run the migration tool (import)
- Start the Repository
- Upgrade CDP to 7.8
- Run the migration tool (import)
- Start the CDP
During my tests the first run of the migration tool in export mode was successful, however it then failed with Java errors when run in import mode. To solve this I ran the migration tool from the supplied jar file using the JRE embedded in the AO server installation, rather than the supplied .bat file, as below:-
Instead of :-
migration-tool-7.8.00\runAuthTool.bat --atssoPassword pass:****** --import AuthorizationExport.xml
Use the following:-
jvm\bin\java -jar migration-tool-7.8.00\AuthTool.jar --atssoPassword pass:****** --import AuthorizationExport.xml
Other than this, the upgrade process was fairly straightforward.
As we have demonstrated, getting up and running with AO 7.8 and SSO is relatively straightforward, even for those who haven’t installed or used SSO before. The clean installation tests were more successful than the upgrade tests, so if you are upgrading take careful note of the existing AO users and roles, in case the migration tool fails to work correctly and you need to create the users and roles manually.
We hope this is a useful overview. Please get in touch if you wish to talk over any aspect of your planned upgrade to Atrium Orchestrator 7.8, or to talk about any aspect of your current installation.
In July 2015 BMC released version 7.8 of the Atrium Orchestration Platform, containing a number of enhancements. A full list of these enhancements is available from BMC. This post discusses some of the major enhancements contained within version 7.8.
This release of AO includes a welcome enhancement to the logging options. In previous releases all workflow activity was written to the process.log files and all grid activity to the grid.log files. Many will have experienced the frustration of trawling through these logs searching for output from one particular workflow, only to find the information interlaced with output from all other active workflows. In this release it is possible to specify an individual log file for a specific workflow or adapter.
This enhancement will undoubtedly improve the debugging experience and help to expedite the remediation of workflow and grid issues. A custom log file's format, size, and number of back-up files are the same as those specified for processes.log in the logging configuration file, log4j.xml.
Configuring Custom Process Logs
Custom workflow (process) logs are configured within Dev Studio.
For the workflow in question open the workflow process properties window and complete the two log related items as shown below:
Custom Process Logging Tips
- During local testing from within Dev Studio the custom log will be written to the machine on which Dev Studio is running. Once uploaded and activated on the AO Grid, the custom log will be written to the same directory as the process and grid logs.
- Logging information will still also be written to the process.log file.
- The custom log setting is activated only if the process is a root job. However, any output from workflows that are called from the root job will be included in the custom logging.
- Each log entry is associated with the name of the peer that executed the process.
Configuring Custom Adapter Logs
Custom adapter logging is set up from Grid Manager when configuring an adapter on the grid, as shown in the example below from the web services adapter (20.15.01).
Custom Adapter Logging Tips
- Not all adapters in the latest content (20.15.01) support custom logging. The BMC documentation refers the user to the adapter documentation for further details; however, for the example above (web services adapter version 20.15.01) the settings are not mentioned in the documentation. Therefore, at present, the user will need to check for these settings when configuring the adapter on the grid.
- The JobID can be used to associate entries between the process.log and grid.log
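Since the JobID appears in both logs, even a small script can gather every entry for a single job across the two files. A rough sketch, with the file names and JobID purely illustrative:

```python
def correlate(job_id, **sources):
    """Collect every log line mentioning job_id.

    sources -- keyword arguments mapping a label (e.g. the log file name)
    to an iterable of lines, such as an open file object.
    Returns (label, line) pairs in the order the sources were given.
    """
    matches = []
    for label, lines in sources.items():
        for line in lines:
            if job_id in line:
                matches.append((label, line.rstrip("\n")))
    return matches

# Usage against the real files (paths are illustrative):
# with open("processes.log") as p, open("grid.log") as g:
#     for src, line in correlate("164115000", processes=p, grid=g):
#         print(src, line, sep=": ")
```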
RESTful API Support
Version 7.8 includes support for RESTful web services. The options for web services in AO are now:
- Legacy web services
- ORCA (SOAP based) web services
- RESTful web services
The improvements available with the new REST API web service include the following:
- Support for JSON and JSONP (avoiding the need for proxies and cross-domain issues)
- Using token-based authentication, enabling the user to log into the API server, get an authentication token, and use it for subsequent API calls so that the user is not required to include credentials in every request
- Querying for details about all installed modules or their constituent workflows
- Executing single workflows synchronously or asynchronously
- Terminating or cancelling workflows
- Pausing or resuming workflows
- Requesting the operational status of workflows
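The token flow described above can be sketched with nothing more than the Python standard library. Note that the base URL, endpoint paths and header name below are illustrative placeholders rather than BMC's documented API; consult the AO REST API documentation for the exact paths, payloads and header names in your version.

```python
import json
import urllib.request

# Illustrative base URL for the REST endpoint; not a documented BMC path.
BASE = "https://aocdp.example.local:38080/baocdp/rest"

def login_request(username, password):
    """Build the login call that exchanges credentials for a token."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        BASE + "/login", data=body,
        headers={"Content-Type": "application/json"}, method="POST")

def api_request(token, path):
    """Build a subsequent call that presents the token instead of
    credentials; the header name here is an assumption."""
    return urllib.request.Request(
        BASE + path, headers={"Authentication-Token": token})

# A real client would then do something like:
#   token = urllib.request.urlopen(login_request("aoadmin", "****")).read().decode()
#   modules = urllib.request.urlopen(api_request(token, "/module")).read()
```

The point of the design is simply that the token is obtained once and reused, so credentials never travel with every request.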
Previous AO versions included a health dashboard, but in this release a new Value Dashboard has been added which provides Value reports for AO processes. These reports show the time or money saved by the process runs that occurred during the reporting period.
There are two types of Value reports available:
- Process Metric reports provide data from completed process runs and the process run value is configured during report creation.
- Business Metric reports provide data from process runs for processes that include business metrics.
AO 7.8 represents a solid incremental update to the product. The custom logging should bring a much-improved experience for developers, whilst other enhancements and fixes establish this as a stable version for the future.
We hope this is a useful overview of the new features in BMC Atrium Orchestration 7.8. Please get in touch if you wish to talk over any aspect of your planned upgrade to Atrium Orchestrator 7.8.
In a future post we'll be exploring the installation and configuration of AO 7.8 and SSO.
In a previous blog post we reported on some initial findings on the relative end user performance of ITSM v9.0 compared to ITSM v7.6.04 and ITSM v8.1 while under load. Since that report we have been working with our partners Scapa Technologies and the BMC performance testing team to further explore the performance characteristics of ITSM v9.0.
We are very pleased to report that BMC were very cooperative in this, and we were able to work with them to provide some valuable insights which we hope will be of use to the wider community.
For this test, BMC and Scapa Technologies agreed to concentrate on ITSM v9.0 in its default configuration and to tune some system settings to improve performance. Using Scapa TPP, BMC performed the following test cases, which duplicated the tests in our initial report.
- Open Incident Console
- Refresh Console view using Refresh icon
- Change from the default view in the console to “View All” (drop down Show menu, choose All).
- Refresh Console using the refresh icon
- Open Create Incident form
- Populate the form, using lookups for Customer Name, Company name, etc.
- Add a unique text summary such as the incident ID.
- Assign the incident to self
- Change Status to “In Progress” and Save
- Using the Search form, search on the unique summary text set earlier
- Open the searched for incident
- Update and close the incident
The number of active users was progressively ramped up to increase the load on the system. As in the earlier tests, a Scapa Control Sequence was used to ramp up the load automatically.
| Step | Duration | Console Users | Create Users | Total Users |
| --- | --- | --- | --- | --- |
The output from BMC’s initial test run was very similar to the initial report produced by Alderstone and Scapa Technologies in May.
By repeating the test runs with Scapa TPP, and with the help of system monitoring tools, BMC were able to identify some areas of the configuration that could be improved from the out-of-the-box settings.
These configuration changes were:
- The database (MSSQL) was configured with a Maximum Degree of Parallelism of 1, rather than no limit.
- The JVM hosting the mid tier had its JVM minimum heap size increased from 1GB to 2GB.
- Mid Tier logging was reconfigured to Interval only.
- Preload on the Mid Tier was disabled after an initial preload.
- The maxThreads parameter on Tomcat was increased from 300 to 600.
- Java Min heap size changed from 512MB to 4GB.
- Java Max heap was changed from 6GB to 4GB.
- Java Garbage Collection was tweaked using the following Options:
Once these changes had been applied, the tests were executed again and response times markedly improved at the peak load point and compared favourably with ITSM 8.1 SP2.
This exercise has demonstrated that some small configuration changes to your ITSM implementation can make quite a difference to the user experience of ITSM 9.0 and get the best out of your current hardware setup. Repeated testing of the changes delivers real end-user performance improvements.
We hope that this set of tests, combined with BMC’s report on some of the heavier server side use cases provided by BMC in their July report, will put to rest any fears about making the leap to ITSM v9.0.
We also believe that this exercise proves the value of performance testing and tuning in any environment.
Alderstone would like to thank BMC for their co-operation during this exercise, and to Scapa Technologies for providing the use of their software and expertise.
Scapa Technologies (http://www.scapatech.com) is an independent software company, providing best of breed testing and monitoring software solutions and services and ensuring the performance of mission-critical systems, such as BMC Software's Remedy ITSM, across verticals and across the world.
BMC Remedy ITSM v9.0 represents a significant milestone in the evolution of the platform.
BMC have completed a re-engineering of the core Action Request System software from a C-based application to one written in Java. This new foundation will allow the platform to evolve at a much faster rate in the future.
At Alderstone, we have been working with Scapa Technologies to benchmark the new BMC Remedy ITSM v9 against the last two major versions of ITSM.
In order to ensure a like-for-like comparison we set up three identical environments in Amazon cloud, and performed fresh installations of BMC Remedy ITSM 7.6.04, ITSM 8.1 SP2 and ITSM 9.0 GA with the out-of-the-box settings.
Each of these systems has the Mid Tier, AR Server and database installed on the same Windows platform. While this does not match the typical production configuration of BMC Remedy ITSM, it created consistent and comparable systems.
We then used the Scapa Test and Performance platform to perform some initial performance tests.
The great news is that our initial findings show that ITSM 9.0 has very similar performance characteristics to ITSM 7.6.04.
ITSM 8.1 SP2 showed slightly more stability at the very edge of capacity.
As can be seen both ITSM 7.6.04 and ITSM 9.0 begin to behave erratically and user response times spike when increasing the concurrent user load from 120 to 260, while the 8.1 environment remained constant. This would seem to indicate that this initial release of ITSM 9.0 has performance characteristics similar to ITSM 7.6.04.
Please note that these findings are based on a very limited set of use cases on a single-stack, out-of-the-box installations, and results may vary depending on usage and environmental factors.
We have received an initial response from BMC on these preliminary findings, and they have provided some useful insights.
Scapa and Alderstone will be working with the BMC performance teams over the next few weeks to pull together a more exhaustive and representative report based on production class environments using their recommended hardware sizing and configuration.
BMC have also published a performance report which covers a different range of use cases.
Over a period of time the amount of data stored in your Remedy system will increase and affect the performance of the application and the user experience. A slow application guarantees unhappy and inefficient users. Worse still, over time, the accumulated data will cost more to manage and maintain.
Applying best practice database indexing and regular maintenance can help mitigate the impact on the user experience. However, reducing data volumes in the right areas is certain to improve overall performance. The practice of managing the volume and location of the data that supports a Remedy system is one of the pillars of a performant, cost-effective system.
A large data set means it costs more and takes longer to perform all database maintenance operations such as database backups, refreshes of development and test environments. Larger hard disks and tapes are required to store the database and its backups. Removing data from the on-line transactional database altogether keeps these maintenance overheads as low as possible.
This article introduces the topic of Remedy data archiving and explores the out-of-the-box Remedy archiving functionality.
What will Archiving Do?
When we talk of archiving data, we mean the ability to move data from an on-line location to a different location. While Archiving solutions differ as to where archived data is stored and how it is moved, this is the core feature of any solution.
The key benefits we look for archiving to bring are;
- Improved Application Performance
- Reduced System Administration Costs
The most successful solutions will also;
- Enforce Data Consistency
- Respect the Data Retention Policies of the Users
- Be Operationally Flexible
If you’re still reading this then you probably have a Remedy system that holds a lot of data, is starting to show some performance issues, and needs an archiving solution.
We hope this primer will be useful, however if you’d like to discuss archiving data in more detail please contact email@example.com.
What Do We Archive?
Typically data is moved from the transactional tables where the most data is created and where reductions in data will have the most positive effect on the user experience. For BMC Remedy ITSM this means Incidents, Problems, Changes, Service Requests, Work Orders and Tasks (and all the data related to them), are all candidates for archiving.
The benefits of reduced data volumes must be balanced against the requirements of the business. Although a policy of archiving all Incidents that have been closed for more than 1 week would boost performance it would significantly affect the ability of the teams using the system to deliver effective services to the business.
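As a simple illustration of how a retention rule translates into selection logic, the sketch below picks out closed incidents older than a cutoff. The record layout and helper name are ours, purely for illustration:

```python
from datetime import datetime, timedelta

def archive_candidates(incidents, retention_days, now=None):
    """Return incidents closed longer ago than the retention window.
    Each incident is an (id, status, closed_at) tuple; the shape is
    illustrative, not a Remedy data structure."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [inc for inc in incidents
            if inc[1] == "Closed" and inc[2] is not None and inc[2] < cutoff]

# With a 90-day policy only the first incident qualifies; open records
# and recently closed records stay on-line.
now = datetime(2015, 7, 1)
incidents = [
    ("INC0001", "Closed", datetime(2015, 1, 10)),
    ("INC0002", "Closed", datetime(2015, 6, 25)),
    ("INC0003", "In Progress", None),
]
print(archive_candidates(incidents, 90, now=now))
```

Shortening `retention_days` archives more aggressively; the trade-off against the needs of the support teams is exactly the balancing act described above.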
Drawing up the data retention policy that is right for your BMC Remedy system will involve coordination with the various stakeholders and will include some decisions about the functionality that your archiving implementation will offer.
Requirements, Requirements, Requirements
IT Service Management is fast moving, and much of the data held in a BMC Remedy ITSM system is very time-sensitive. For example, the fact that today the head of purchasing has forgotten their Windows password and cannot run the month-end reports is a very valuable piece of information to the business today. However, this fact will not be important three years from now.
By comparison, the fact that on average the Helpdesk were closing 20% more calls as First Time Fixes three years ago than they do today, is very important for the success of IT Service Management within the business.
Not all data is equal and the requirements for data retention will differ depending on the type of data we’re considering.
A well-defined data retention policy is critical for the success of the archiving solution. Here are a few questions that you should consider as part of your exploration of the right policies;
- Do you need to retain data for auditing purposes?
- Are there key business reports that require longer periods of data to be retained?
- Do you ever need to un-archive data?
- Are there other sources where the data can be found such as a reporting data warehouse?
The time invested into identifying the stakeholders, understanding exactly what data they need and why, will more than pay for itself in achieving the right solution for your business.
The wrong solution can be more expensive to maintain and less effective for the business than keeping the data in the transactional database. In other words, there are just a few ways your archiving solution can make things better but lots of ways it can make things worse!
Take nothing for granted!
You may be told authoritatively that there is no way data can be removed from the system, as it simply must be maintained on-line for the auditors or for regulatory requirements. Take the simple step of asking the auditors or relevant enforcement bodies. You may find the restrictions to be less severe than believed. Off-line archiving is a far simpler and cheaper solution than on-line archiving.
In summary, approach archiving as you would any other major enhancement to your system; take time to understand requirements and to understand the repercussions of the decisions you make.
Out-of-the-box Remedy Archiving
BMC Remedy provides the ability to create an Archive Form for any Form in the application. For example you can create an Archive Form for the main Incident Form (HPD:Help Desk) and set up a rule to move data from the “on-line” form to the Archive Form. This feature allows the time of the archiving to be scheduled to minimise the performance impact. It is also possible to set this feature to delete data rather than just moving it to the Archive Form.
The diagram above illustrates the way in which data is moved between Remedy Forms on the same ARS Server and consequently data is moved between tables in the same underlying database.
Remedy OOTB Archiving only moves data out of the “on-line” tables into “archive” tables within the same database. If accompanied by database maintenance, this can still provide performance improvements to the user experience, as well as retaining the data in a location where, with bespoke Remedy workflow, it can be easily accessed by users.
Using Remedy OOTB archiving presents the following challenges;
| Challenge | Detail |
| --- | --- |
| Archiving affects Remedy performance | Remedy OOTB Archiving uses the AR System Server to check the rules, query the data and move the data between the Forms. This creates a workload on the AR Server which varies depending on the indexes in the database, the total volume of data held in the Remedy Forms, and the volume of data being moved. Scheduling the time of an archive run is therefore critical to ensure that the end-user performance experience is not adversely affected. |
| Remedy API is slow | Manipulating bulk data efficiently is not a feature of the Remedy AR System Server. Using the Remedy API to bulk transfer data is far slower than moving data directly in the database, and Remedy OOTB archiving uses the Remedy API. |
| Scheduling and rules are not flexible | When managing the archiving of the backlog of data which has no doubt built up in your system, you ideally need the flexibility to define when and for how long the archiving process runs. Remedy OOTB Archiving does not allow archiving to run, for example, only in the evenings on business days but all day at weekends. Schedule changes can be made, but result in a large performance hit as the ARS Server re-caches. |
| Data is held in the on-line transactional database | If data is never moved out of the on-line transactional database then it will never stop growing. A larger database costs the business more to keep in good working order than a smaller one, so OOTB Remedy archiving does not provide one of the key benefits: Reduced System Administration Costs. |
| Changes have to be replicated | A Remedy Archive Form must always match the original Remedy Form. This means that all changes to the application in the form of BMC patches or major upgrades will need to be manually replicated in the Archive Forms, which carries a development cost overhead. |
| Data Relationships are not enforced | If some of the data which makes up an entity is archived and some is not, this can lead to unexpected application behaviour, potentially leading to data corruption and increased support costs. Data consistency is critical. Remedy OOTB Archiving allows a search to be run against just one Form and the data it holds; it cannot consider data held in other Forms, yet all BMC Remedy ITSM entities (e.g. Incidents, Problems, Changes, Service Requests, Work Orders, Tasks) are made up of data held in multiple Forms. |
For example, if we want to archive a particular Problem record we also need to archive the SLA Measurements, Work Entries, Tasks and Audit Logs which are associated with that one particular Problem record. Please find below an illustration of some of the relationships for the PBM:Problem Investigation Form.
Please note: This is for illustration purposes only, is a partial view and may vary on your ITSM application depending on local changes and application version
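The consistency requirement above can be expressed as: gather the parent record and every related child record first, then move them all as a single unit or not at all. Below is a simplified sketch with in-memory dictionaries standing in for the Remedy Forms and the database transaction; the helper and field names are ours, not part of the product:

```python
def archive_entity(parent_id, stores, archive, link_field="ParentID"):
    """Move a parent record plus every related child record as one unit.

    stores  -- dict mapping form name -> {record_id: record_dict};
               the parent is assumed to live in the first form listed.
    archive -- dict with the same form names, receiving moved records.
    """
    forms = list(stores)
    parent_form, child_forms = forms[0], forms[1:]
    if parent_id not in stores[parent_form]:
        raise KeyError(f"{parent_id} not found in {parent_form}")

    # Stage everything first, so a failure cannot leave a half-moved entity.
    staged = [(parent_form, parent_id, stores[parent_form][parent_id])]
    for form in child_forms:
        for rec_id, rec in stores[form].items():
            if rec.get(link_field) == parent_id:
                staged.append((form, rec_id, rec))

    # Commit: copy everything to the archive, then delete from on-line stores.
    for form, rec_id, rec in staged:
        archive.setdefault(form, {})[rec_id] = rec
    for form, rec_id, _ in staged:
        del stores[form][rec_id]
    return [(form, rec_id) for form, rec_id, _ in staged]
```

In a real implementation the stores would be database tables and the commit phase a single database transaction, but the shape of the problem is the same: the relationship traversal must happen before any data moves.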
Bespoke Remedy application enhancements can make Remedy OOTB archiving aware of the relationships between Forms holding ITSM entity data. Unfortunately, this pervasive change modifies a lot of BMC Remedy ITSM Forms. Changes to the out-of-the-box Remedy ITSM Application obviously carry a cost in on-going maintenance and support.
With the challenges posed by the Remedy OOTB Archiving solution, companies are choosing to implement their own bespoke solution for archiving Remedy data that meets their data retention requirements.
Design & Implementation of a bespoke Archiving Solution
Having discussed the principles of archiving for the BMC ITSM and ARS solutions and the limitations of the out-of-the-box Remedy archiving, our conclusion was that the BMC Remedy out-of-the-box archiving is unsuitable for complex Remedy applications such as BMC Remedy ITSM. In addition, as there are currently no comprehensive third-party solutions to this technical challenge, enterprises that need to address archiving are often choosing to develop their own bespoke archiving solutions.
Since the first publication on the topic of archiving we have spent a lot of time investigating and testing different approaches to Remedy archiving. So much time, in fact, that this second in the series of blog posts grew into a lengthy white paper.
What follows is a high level summary of our findings with just one very simple case study.
While researching archiving solutions, we have constructed what we believe is a novel design for a low cost, sophisticated and highly effective approach for implementation of archiving for complex Remedy applications such as ITSM.
We are interested in working with companies or consultancies who may be interested in collaborating on implementing this design.
This second chapter is intended for architects and designers who are considering the development of a bespoke archiving solution. We will discuss the attributes of a bespoke archiving solution appropriate for BMC Remedy ITSM and bespoke ARS Applications.
There is no one-size-fits-all solution; the approach which is appropriate for your organisation depends on your business requirements and the depth of your pockets. We'll aim to answer questions like:
- How can I meet my business requirements for archiving?
- What is involved in developing a bespoke archiving solution?
- What’s the cheapest ITSM archiving solution?
Attributes of an archiving solution
When designing an archiving solution it is useful to consider how the following high-level components will be handled:
|Data Storage||The ability to store data in a safe and secure way over the long term. The type of data storage will affect the speed and cost of data retrieval and is typically driven by your business requirements for Data Access.|
|Data Transfer||The ability to move data out of your main transactional tables is the keystone of any archiving solution. For maximum benefit, data should also be moved out of your main transactional database.|
|Data Integrity||The ability to move data in a way that respects the application data integrity. Typically we will want to archive an “entity” such as an ITSM Incident. Archiving an Incident means archiving multiple records from multiple tables at the same time. This requires a profound understanding of the Remedy application data structures and relationships.|
|Long Term Maintenance||Without careful consideration the costs of maintaining your archive solution can outweigh the cost benefits to your business. Once an archiving solution is implemented then any future changes to the Remedy application need to be considered for the impact to the archive solution. This includes bespoke new features you may add to your Remedy application as well as future BMC upgrades to the software or applications.|
|Data Access||The ability to retrieve and view archived data. Your requirements around data access have a large impact on the overall cost of your solution. For example, always-on instantaneous access to archived data is likely to cost more to develop and maintain than a solution which requires a 5 day lead time before archive data can be made available.|
|Execution Control||Typically the process of extracting and archiving data from your Production system has a performance impact for end users. Therefore the ability to control the time and duration of the execution of archiving is critical for the operational success of the archiving solution. Execution control also includes the ability to manage your data retention policies i.e. the criteria for deciding when an entity is moved into your archive data store.|
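As an illustration of the Execution Control attribute above, a data retention policy can be expressed as a simple eligibility check against each entity. The sketch below is illustrative only: the status value and the 400-day threshold are hypothetical stand-ins for whatever criteria your business requirements define.

```python
from datetime import datetime, timedelta

# Hypothetical retention policy: an entity may be archived only when it is
# closed and has not been modified within the retention window.
RETENTION_DAYS = 400

def eligible_for_archive(status, last_modified, now=None):
    """Return True if a record meets the (illustrative) retention criteria."""
    now = now or datetime.utcnow()
    return status == "Closed" and (now - last_modified) > timedelta(days=RETENTION_DAYS)

# Example: a ticket closed two years ago is eligible; an open one is not.
two_years_ago = datetime.utcnow() - timedelta(days=730)
print(eligible_for_archive("Closed", two_years_ago))  # True
print(eligible_for_archive("Open", two_years_ago))    # False
```

In a real solution this check would be evaluated at the entity level (e.g. per Incident), not per table row, so that related records move to the archive together.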
Each of these attributes of an archiving solution is discussed in detail in the white paper accompanying this blog.
The following real-world solutions are described in detail in the white paper:
- Table Partitioning
- Low Cost Archiving - Backup & Delete
- Full On-line Remedy Archiving Solution
In this blog post we'll be looking at the simplest and lowest cost form of “true” archiving.
Low Cost Archiving - Backup & Delete
Using this approach to archiving, the entire on-line database is backed up to a secure medium and then data is removed from the database using SQL.
The SQL must be sensitive to the relationships between data in the various Remedy tables as described in the previous section. Following bulk data deletion the database tables and indexes from which data has been removed should be performance optimised by the database administrators. These operations can be performed on a regular quarterly/bi-annual/annual basis (frequency driven by your data volumes) during a scheduled maintenance window. The database backup should be stored on a redundant, secure storage solution.
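To make the relationship-sensitive deletion concrete, here is a minimal sketch using SQLite from Python. The table and column names are hypothetical stand-ins for the Remedy parent entity and one of its child Forms; the point is that child rows (work entries, tasks, SLA measurements, audit logs, etc.) must be deleted in the same transaction as the parent record.

```python
import sqlite3

# Hypothetical stand-ins for Remedy Forms: a parent entity table and a
# related child table (e.g. Problem -> Work Entries).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE problem    (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE work_entry (id INTEGER PRIMARY KEY, problem_id INTEGER, note TEXT);
    INSERT INTO problem VALUES (1, 'Closed'), (2, 'Open');
    INSERT INTO work_entry VALUES (10, 1, 'old note'), (11, 2, 'live note');
""")

def delete_archived(conn, problem_ids):
    """Delete parent entities and all of their related child rows atomically.

    In a real solution this would run only AFTER the database backup has
    been verified, and would cover every related Form, not just one child table.
    """
    marks = ",".join("?" for _ in problem_ids)
    with conn:  # single transaction: children first, then parents
        conn.execute(f"DELETE FROM work_entry WHERE problem_id IN ({marks})", problem_ids)
        conn.execute(f"DELETE FROM problem WHERE id IN ({marks})", problem_ids)

delete_archived(conn, [1])
print(conn.execute("SELECT COUNT(*) FROM problem").fetchone()[0])     # 1
print(conn.execute("SELECT COUNT(*) FROM work_entry").fetchone()[0])  # 1
```

Deleting children before parents, inside one transaction, is what keeps the live database consistent if the operation is interrupted mid-run.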
When there is a requirement to review the offline data, for example during a Sarbanes-Oxley audit, the database backup can be restored to a new database, and a Remedy ARS Server pointed at the database to be able to view the data. All form and workflow definitions that were valid for the data structures are also backed up so the entire application is made available for review.
This method reduces the overall database size as well as the data held in the main transactional tables. Therefore it has a positive impact on the user experience as well as the maintenance costs. This approach has the lowest ongoing costs and is not affected by future changes to the Remedy application.
The main disadvantage of this approach, and a potentially critical flaw depending on your requirements, is that it requires significant effort and elapsed time to be able to view the archived data. It does not permit casual access or the ability to report on older data. This may mean that you define your data retention policy to keep data in the on-line database for longer than otherwise necessary, due to user fears of it becoming inaccessible.
It is possible to mitigate these disadvantages using either or both of the following two methods:
Always-on separate Archive system
Set up a distinct archive database and restore the latest database backup into it each time you perform an export and delete operation. This database has a single AR Server and Mid Tier server connected and runs in read-only mode. Users wishing to access archived data may connect to the read-only archive Remedy system to get full functionality. The major drawback of this system is that it will only hold a snapshot of the data which was in your live solution at the time the database backup was taken. Therefore, if you perform a backup-and-delete once a year, your live system may hold closed tickets up to a year old, while your parallel archive system will hold closed tickets between one and two years old. If your auditors or end users need to see data older than this then an older database backup would need to be restored into your archive system.
Reporting Data Warehouse
A data warehouse typically holds a subset of data from multiple applications and provides data cubes and cross-function reporting capabilities to multiple groups within the business.
A well-designed data warehouse can complement many other archiving approaches by allowing users to view and report on key data which has been removed from your Remedy on-line database. A reporting data warehouse will, of course, have its own data retention policies that should be considered. However, using a blended model for accessing data can result in a low cost and effective solution for archiving.
As a note of caution, it is frequently the case that tools such as BMC Analytics are deployed against a copy of the Remedy production database that is kept in sync with all changes in the production database using Oracle Data Guard/MS SQL Mirroring. This approach to reporting is a serious constraint on any archiving solution, as any data that is removed from the main transactional forms would no longer be available to the BMC Analytics reports. Only a true reporting data warehouse that is populated by extracting data from the Remedy database, rather than being a mirror image copy of it, will have a positive impact on your archiving solution. Please contact us to discuss approaches to successful archiving if you are using BMC Analytics in this way.
We hope this brief overview of designing an archiving solution has been useful. Our comprehensive white paper, including detailed analysis of each of the areas of an archiving solution as well as multiple case studies, can be requested from this on-line form.
Organisations which have deployed Enterprise wide IT applications, such as BMC ARS and the ITSM Suite, will often have the requirement to implement a Single Sign On solution to reduce the burden on IT support and to improve the end user experience.
We have to accept that SSO is complex: different conventions for user name formats between systems and different repositories of application and user credentials mean that finding an out-of-the-box SSO solution is a challenge. It is a difficult but necessary balancing act between infrastructure platform and application support, customisations and security concerns.
There are a number of different architectural options when discussing SSO; the selection you make will affect the implementation and support of your solution. This is further complicated because SSO has a range of meanings. In a BMC Remedy context SSO will typically include authentication against any system which is not Remedy.
This article describes the SSO architecture for Windows Active Directory web clients of the BMC ARS Mid-Tier. It also highlights feature support if a commercial solution is to be sought.
What is Single Sign-On (SSO)?
Single Sign-On (SSO) is a means of access control of multiple related, but independent software systems. The process authenticates the user for all the applications they have been given rights to and eliminates further prompts when starting those applications during a particular session. The benefits of SSO include:
- Reduce time spent re-entering passwords for the same identity
- Reduce password fatigue from different user name and password combinations
- Can support conventional authentication such as Windows credentials (i.e., username/password)
- Potential for ‘seamless’ or transparent logons where the client technology supports automated forwarding of ‘logged in user’ credentials
- Reduce IT costs due to lower number of IT help desk calls about passwords
The diagram below shows the system context at a high level.
Sequence of Events
The sequence of events for an SSO enabled login are:
- User authenticates themselves into Windows on the Client.
- The user navigates to Mid-Tier in their browser
- A customized Mid-Tier login servlet extracts a user token from the HTTP request (if not present then it is requested as part of the authentication with the browser)
- The browser sends a user token (no user interaction required)
- The customized login servlet extracts the username and may perform username mapping if the Windows login name format does not exactly match the Remedy login name format
- The Mid-Tier forms an ARS login request and calls ARS
- ARS dispatches the login to the AREA plugin.
- The AREA SSO plug-in validates the Mid-Tier IP and a shared key
Below is a UML sequence diagram that shows the order of events in detail. The components (actors) are described in more detail in the section “BMC Remedy Enterprise SSO Architecture”
BMC Remedy Enterprise SSO Architecture
The logical architecture diagram below depicts all the components involved when implementing an ARS Mid-Tier based SSO solution typically seen in BMC Enterprise deployments. The items highlighted in yellow indicate components that would either be built in a bespoke solution or would be provided and configured when installing a commercial solution.
The diagram also shows the numerous configuration touch points described in the sections below.
This would typically be IE v6 or later and it must treat the Mid-Tier as a trusted site in order for IE to seamlessly send the user token. The token exchange between the browser and web server is done using Microsoft NTLMv2.
In a typical enterprise environment the rollout of the change could be implemented via a Group policy change centrally and pushed globally and not individually on each client.
It is possible to configure other browsers (e.g. Firefox) on Windows to do similar, but not via automatic rollout using group policy settings; such a change needs to be performed individually for each installed browser.
In an enterprise deployment a load balancer typically sits between all of the clients and the web tier. There are user token passing technologies that are stateless, and for those introducing a load balancer has no significant impact on an SSO solution. However, NTLM is stateful (i.e. capable of maintaining the status of a process or transaction) and requires that the load balancer supports:
- Client IP forwarding
- Session affinity
Failure to support these properties will result in needless re-authentication between the browser and the web tier. The impact on performance depends on the number of users and concurrent requests.
Additionally, if the production architecture has load balancers between the Mid-Tier and the ARS servers then it is critical that the Mid-Tier server IP addresses are preserved.
Although the Tomcat Java application server is also an HTTP server, Apache recommend fronting Tomcat with a dedicated HTTP server, such as Apache HTTPD. Without going into great detail, this guidance is based on dedicated HTTP server implementations being more robust at serving high volumes of static data and better at handling clients with sub-optimal session closedown behaviour.
Apache Tomcat is the J2EE WebApp container that hosts the Remedy Mid-Tier WebApp.
There is often no need to change the server configuration. However, the Mid-Tier web app will need modifications.
This is an optional component and is not required if the user token is easily extracted from the HTTP headers. When dealing with NTLMv2 though, this component is a J2EE Servlet filter that challenges the browser client for the NTLM token – the NTLM challenge is a multi-step (and stateful) process that is defined in a protocol specification by Microsoft. It is a non-trivial protocol that has been implemented by the likes of the commercial Jespa Java library discussed later.
In a J2EE environment NTLMv2 can be implemented by the Jespa component (http://ioplex.com/). Implementing the browser NTLM challenge in a Filter simplifies downstream processing in the custom Mid-Tier authentication servlet.
Alternatively, if Microsoft IIS is used then it is possible to extract the username from the HTTP session and forward this on to Tomcat. As NTLMv2 token extraction is alien to a Java stack, this removes the reliance on a commercial 3rd-party library to perform the extraction.
This is the out-of-the-box Mid-Tier J2EE Servlet based entry point. By default it will dispatch to the standard login page servlet. To replace the standard Mid-Tier login process a custom login servlet can be specified in the Mid-Tier configuration file. [see the section named 'Configuring the User Name Alias' in the BMC document "BMC Remedy Action Request System - Configuration Guide"]
This bespoke or 3rd-party component overrides the default Mid-Tier authentication mechanism via a BMC extensibility mechanism. The component implements a BMC Remedy API and is responsible for extracting the username from the user token and obtaining the Mid-Tier Shared Key. The Mid-Tier shared key is a common password shared between all Mid-Tiers and the SSO plug-in that runs on the ARS server. It is important that this shared key is stored encrypted. A shared key is required because at no point is the user password transmitted. In fact, in order to enable SSO within BMC Remedy the user must have no password defined in the User form.
The username of the Active Directory user needs to correlate to the BMC Remedy login (or alias, see the BMC document “Integrating BMC® Remedy® Action Request System® with Single Sign-On (SSO) Authentication Systems and Other Client-Side Login Intercept Technologies”)
The actual Active Directory username can be obtained from the token in any one of three standard formats:
- Username - the username only, e.g. joeuser
- Backslash - domain name and username separated by a '\' symbol, e.g. example\joeuser
- Principal - username and full domain name separated by an '@' symbol, e.g. firstname.lastname@example.org
It is at this point any custom transformations or other lookup can be performed, if required, in a bespoke implementation.
If a commercial SSO product is being used it is important to understand what transformations are supported at this step when considering how to map from Active Directory to BMC Remedy users. Also bear in mind that there is username aliasing support in ARS too [see Integrating BMC® Remedy® Action Request System® with Single Sign-On (SSO) Authentication Systems and Other Client-Side Login Intercept Technologies]
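As a sketch of this mapping step, the function below normalises the three token formats listed above to a bare username. It is written in Python purely for illustration (a real Mid-Tier servlet would be Java), and the final lowercasing is a hypothetical example of the kind of site-specific transformation a bespoke implementation might apply.

```python
def remedy_login_from_token(token_name):
    """Normalise an AD user token name to a candidate Remedy login.

    Handles the three standard formats:
      Username:  joeuser
      Backslash: example\\joeuser
      Principal: joeuser@example.org
    The final lowercasing is an illustrative site-specific transformation.
    """
    if "\\" in token_name:                 # Backslash: DOMAIN\user
        token_name = token_name.split("\\", 1)[1]
    elif "@" in token_name:                # Principal: user@full.domain
        token_name = token_name.split("@", 1)[0]
    return token_name.lower()

print(remedy_login_from_token(r"EXAMPLE\JoeUser"))     # joeuser
print(remedy_login_from_token("JoeUser@example.org"))  # joeuser
print(remedy_login_from_token("JoeUser"))              # joeuser
```

Whatever transformation is chosen, it must produce a value that matches the Remedy login name (or its configured alias) exactly.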
A bespoke or commercial solution will need to provide an AREA plugin, and therefore the ARS server config file (ar.cfg or ar.conf) will need to be modified accordingly.
If mapping from Active Directory login names to ARS login names is required then it is possible to add an additional field to the standard ARS User form to accomplish this. See "Configuring the User Name Alias" in the "ARS 7.5 Configuration guide.pdf".
This OOTB component is a plugin that allows Remedy to support more than one authentication mechanism, e.g. an AREA SSO plug-in and the AREA LDAP plug-in, and will consult each one in turn when a user logs in. Note: ARS versions >= v7.6 may eventually support multiple AREA plugins natively.
This plug-in will be called by the ARS login mechanism any time a user attempts to authenticate. In either a bespoke or commercial solution this plug-in has two main tasks:
- validate that the request has come from a Mid-Tier IP, and
- validate that the request has the correct shared key, by comparing what came from the CustAuth config to the shared key stored as part of the plugin configuration
If those two checks pass the user is authenticated, and the plug-in simply responds with either a YES or a NO.
It is important that this shared key is stored encrypted to ensure casual browsing of the filesystem does not expose this ‘global’ password.
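The two checks above can be sketched as follows. This is a Python illustration of the logic only: a real AREA plug-in would be written in C or Java against the BMC AREA API, and the IP list and shared key (hypothetical values here) would be read from the plug-in configuration, with the key decrypted from its encrypted store. A constant-time comparison is used so the key check does not leak information via timing.

```python
import hmac

# Illustrative configuration: trusted Mid-Tier server IPs and the shared
# key as it would be read (decrypted) from the plug-in config store.
TRUSTED_MIDTIER_IPS = {"10.0.0.11", "10.0.0.12"}   # hypothetical
SHARED_KEY = "s3cret-shared-key"                   # hypothetical; stored encrypted

def area_sso_verify(request_ip, presented_key):
    """Mirror the two AREA SSO checks: trusted source IP and shared key.

    Returns True (authenticate) only if BOTH checks pass; the real plug-in
    answers a plain YES/NO to the ARS login mechanism.
    """
    if request_ip not in TRUSTED_MIDTIER_IPS:
        return False
    # Constant-time comparison to avoid leaking key material via timing.
    return hmac.compare_digest(presented_key, SHARED_KEY)

print(area_sso_verify("10.0.0.11", "s3cret-shared-key"))  # True
print(area_sso_verify("10.0.0.99", "s3cret-shared-key"))  # False
print(area_sso_verify("10.0.0.11", "wrong-key"))          # False
```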
NOTE: If a bespoke solution is to be built BMC provide an example C based SSO plug-in, search the support forums for: AREA_SSO_ALL_v206MT_v209AREA
Any commercial solution will need to supply you with either a native or a Java plug-in; so you should check platforms and version support if it’s a native plug-in.
However, at the time of writing ARS only supports one AREA plug-in. This restriction is alleviated by the AREAHUB plug-in; however, as the AREAHUB plug-in is native, you will not be able to use a Java-based AREA SSO plug-in with it.
This is the out-of-the-box BMC LDAP plug-in, used to validate usernames and passwords against LDAP repositories (e.g. Microsoft Active Directory). This may be required if, for example, only your Mid-Tier users are using SSO and your User Tool users do not have the necessary client SSO DLLs rolled out to the native clients.
This component represents Microsoft Active Directory. For an SSO solution based around NTLMv2 authentication and the Java Jespa library, the Mid-Tier component will need to be given its own AD 'computer' login, as the Jespa implementation appears to Active Directory as a 'machine'.
As you can see from the myriad of configuration touch points and security concerns, implementing a bespoke SSO solution is a significant undertaking and a commercial option should be considered before going down that route. Hopefully this technical dive into BMC ARS Mid-Tier SSO can help shine a light into a dark area and will help you when you need to design and/or troubleshoot the chosen implementation.
BMC have recently released their BMC Single Sign-On product. At the time of writing it does not support the transparent extraction of the Windows user's credentials from the NTLM token, so users will be prompted for a login. Support for such functionality is on the product roadmap.
Java System Solutions have a feature-rich and widely-used SSO product which you can find out more about at http://www.javasystemsolutions.com/jss/ssoplugin.
If you would like to discuss your specific requirements, or have issues with your current SSO solution please feel free to contact us at email@example.com
More and more of our customers are seeing a real need for integrations between Service Desks.
These may be within their organisations or, more recently, to Service Desks within external organisations. As companies migrate systems to the Cloud, outsource some services or use external suppliers, the need for integrated systems to reduce the need for manual data input has increased. However, even if the software for two endpoint systems is the same, the effort required for integration should not be underestimated.
What do you need to consider if you are about to start such a project?
Here are 10 things to get you started:
1. Clear Objectives
As with any project, before starting it should be very clear to everyone involved what the project is aiming to achieve and why. This is even more important with a Service Desk integration project because of the different parties involved. As a minimum this will be multiple departments within the same company, but it is increasingly likely to span company boundaries and possibly include multiple geographies. If everyone is working towards the same clear objectives the project is far more likely to be a success.
2. Systems Thinking
This is often the most difficult thing for people to accept. When you integrate two systems you are building one larger system. If the solution is considered in these terms, then a number of the other items on this list can be addressed far more readily. These include:
- Roles and Ownership
- Open Communication
One system, one team, one solution - irrespective of departmental or company boundaries.
3. Roles and Ownership
As well as the objective being clear from the start the roles of the various actors relating to the solution need to be clearly defined. With departmental or company boundaries involved it is far too easy when a production error occurs to blame someone else. There will be existing owners of the endpoint systems but ownership of the integration needs to be defined, understood and agreed by everyone.
4. Monitoring
The technology supporting an integration works in the background and is typically invisible to the end user until it breaks. However, once it does break the symptoms can become very visible very quickly. As the data most frequently transferred between Service Desks is Incident information, by definition something else is already broken. Adequate monitoring is therefore important, and this also relates to 'Roles and Ownership'. The monitoring not only needs to highlight something that is not working but also notify the correct people to resolve the issue.
5. Technology Choice
A wide range of technologies can be used to transfer data between Service Desks, from tools designed specifically for the task, such as Enterprise Service Bus messaging systems, to tools that can be adapted to the task of systems integration, such as Run Book Automation or Orchestration tools. There is no silver bullet; the best solution depends on a number of factors, some of which are:
- What software is used in each endpoint system?
- How much and what type of data is to be transferred?
- What are the skill sets of the people who will support the solution?
- Is this part of a larger implementation?
- Is the use of Open Source an option within the organisation?
The costs and features of these options will vary. The best option within your organisation may not be the slickest message passing system. It may make more sense to reuse a technology you have already licensed and have knowledge of but which is less efficient.
6. Experienced People
Joining two systems involves collaboration between the teams owning and supporting those systems, and as such the people involved need to be team players. Practice makes perfect, and integration is no exception. The project is more likely to run smoothly if you have people on the team who have done this before. This includes those involved from the two endpoint systems, project management and integration partners.
7. Support Handover
Once the project has completed, it will be handed over to support to manage the 'Business As Usual' activities. It is prudent to involve the support teams early in the development process, consult them on the monitoring and logging requirements, include them in the testing, and provide clear processes and documentation for the finished system. This will help to ensure that the transition is smooth and uneventful. Involvement from an early stage will help promote ownership for the longer term.
8. Logging
After go-live, most issues occur either because of something that wasn't adequately tested or because of an issue with one of the endpoint systems. The purpose of logging is to enable the source of an issue to be identified quickly, the correct responsible party engaged, and for them to be able to rectify the issue. Logging is fundamental to a process that runs in the background, as it is the only way to 'see' what has happened. Appropriate time needs to be allowed during implementation to ensure that logging is designed and implemented properly.
9. Open communication
It is very easy to adopt a 'them and us' mentality when working with other departments, external customers and suppliers. From the outset of any integration project, open communication, honest discussion and the sharing of ideas should be strongly promoted. Only with an environment of openness, where no one is afraid to speak up when they spot a potential issue, can a project be really successful.
10. Continuous improvement
This key element of ITIL is important here. A good integration solution will typically be invisible to the end user, as it will be providing seamless data transfer between the systems they use as part of the service provided. As such it will be easy to overlook in the process of Continual Service Improvement. However, in order for this to remain the case the solution must keep evolving: when the end systems change, as the traffic increases, after every outage. Only by continual review and improvement will the expected service levels be maintained.
We hope this is helpful when considering your next Service Desk to Service Desk integration, and if we can help in any way please don't hesitate to contact us at firstname.lastname@example.org
Cloud computing is receiving a lot of press recently. Anyone in the industry can’t help but notice the increasing marketing push by the infrastructure, platform and application vendors.
What exactly do we mean by Cloud Computing? The most concrete and widely accepted definition seems to be the National Institute of Standards and Technology (NIST) definition of cloud computing, which we covered in part one of this series.
Implications and Challenges
So what does this all mean for Service Management? At first glance it could be seen as readily facilitating the BSM dream, whereby a business can define tight SLAs for technical services that are required to support their Business Services.
However, for an enterprise who may be considering the leap into using cloud based services there are a multitude of things to consider.
This brief article is intended to provide some food for thought in this regard.
Impact on a few of the key ITIL processes
Incident Management
Modern enterprise systems are complex. Identifying the cause of an incident when parts of the infrastructure/service are ‘in the cloud’ can be difficult, especially as these services may be only serving part of a process.
It is also worth thinking about how Incidents will be passed to and from the cloud provider. Clearly an automated solution would be preferable as no business wants to be in a telephone queue to the provider of your CRM system when you have a severity one and angry customers of your own to deal with. On the other side of the coin…you need to be informed that there is an issue before your customers find out.
Service Asset and Configuration Management
In the past few years enterprises have made great strides in understanding their infrastructure and service topology and modelling these in a CMDB. How does replacing your own infrastructure with cloud based solutions affect this as a trend? The intuitive and wrong answer is ‘phew…we don’t need to worry about THAT anymore’! This is unfortunately not the case. Companies still have business services to operate and will undoubtedly still have some infrastructure to manage. When your customers complain that function x is no longer working, you will still need to understand what the underpinning technical services are, whether they are internal or external or a combination of both.
There are several challenges that present themselves in this area. Most enterprises will have a robust internal IDM solution. Enterprises need to consider how this will integrate with the chosen cloud service. Sharing Identity and Access Control amongst systems you own and maintain is one problem; sharing these across disparate third-party managed systems is quite another.
In the modern workplace staff turnover can be quite high. It is important to consider how quickly these cloud-based services can activate and deactivate users.
One of the key concerns raised in almost every conversation about cloud computing is data and system security. Enterprises need to consider any commercial and/or regulatory constraints. Quite apart from any regulatory implications, we have all seen the bad press that accompanies a personal data leak for any large company. In choosing which systems to put in the cloud, which deployment model to choose, and which provider to select, data and system security is clearly paramount.
The Cloud Security Alliance have prepared some excellent guidelines on moving your data, functions, applications and processes into the cloud. The guidelines centre around the decision-making process as to which of your systems you are able or willing to risk in the cloud.
The elasticity of cloud based services is one of the strong selling points, but how do we guarantee that availability?
The ‘on-demand’, ‘under the hood’ nature of the automated computing, storage and network resources provided by cloud computing can give the illusion that these resources are infinite and will always be available. While in most situations this may be true, anyone who has worked with a large financial system can imagine the load put on a thousand ERP solutions at year end.
Clearly the above is just touching the surface, and represents just some of the areas for discussion amongst our consultants when assisting customers making the leap to the cloud. It should be an interesting journey, and one we hope we will share with many of you.
Of course, cutting through all of the above is Service Level Management.
Cloud Computing is the hot topic in IT at the moment; however, there is still some confusion as to what this term actually means.
The most commonly accepted definition seems to be that of the National Institute of Standards and Technology (NIST).
In the coming weeks we will be exploring the impact of Cloud Computing on Service Management, but for this first part we will simply introduce the NIST definition.
NIST Definition of Cloud Computing
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured Service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.
The SPI Service Models.
Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Private cloud. The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.
Public cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
In part two of this series we will explore some of the challenges this new application of technology poses in the world of service management.
The official BMC release notes for ARS 7.6.3 describe all of the new features and known issues of BMC Remedy AR System (ARS) 7.6.3. (This document can be downloaded from BMC's site with a valid support account.) This article will technically review the new features and look beneath the hood to identify the benefits, pitfalls and opportunities which 7.6.3 brings us.
ARS 1.1 was released in Q4 1992; 18 years later, the application development toolkit underpins the industry-leading BMC Remedy ITSM Suite. Remedy Corporation saw ARS as both a rapid development toolkit and an ideal technology for the IT Service Management market. Bespoke applications developed using ARS were as common as, if not more common than, the applications developed by Remedy Corp. Over time, the focus of Remedy Corp, then BMC, shifted so that ARS took second place to the comprehensive ITSM Suite applications developed using the ARS toolkit. The number of bespoke Remedy applications has been declining as the ITSM suite has grown more popular and ubiquitous. BMC have now positioned the ITSM Suite as a high-end platform for IT Service Management within the context of BMC BSM; consequently, fewer companies are leveraging ARS as a bespoke application development platform.
The recent releases of ARS 7.5 and ARS 7.6 have elevated both the development platform and architecture of ARS to become truly modern and very feature-rich. Ironically, bespoke application development using Remedy is now finally capable of creating sophisticated and feature rich applications with excellent user experiences; just as the capabilities of the toolkit are least well promoted.
ARS releases typically follow a pattern in which each release focuses primarily either on the back end, development and infrastructure, or on the user experience.
ARS 7.5 contained a large number of user interface enhancements, and because ARS 8.0 will be the first release which does not include the Remedy User Tool, we would expect ARS 8.0 to be a UI-focused release. Once BMC are able to focus purely on the web interface without being constrained by the development overhead of a dual platform user interface, we can expect substantial steps forward with ARS 8.0.
We would therefore expect ARS 7.6 to be an architecturally focused release, building on and refining the functionality brought in with ARS 7.5. We are not disappointed in this expectation; ARS 7.6 is primarily focused on improving performance of the infrastructure and applications. This release also has some thoughtful improvements to the web architecture which lay the foundation for ARS 8.0. Finally, ARS 7.6 sees some very welcome refinements of the new objects introduced in ARS 7.5 as well as some great new features.
There have been some improvements to the performance and robustness of the installation of the BMC Remedy applications, including a preconfigured ITSM Suite installation package for new installs. Not 'core' ARS, but certainly helpful considering the number of ARS installations running the BMC Remedy applications.
In our opinion this is a key driver for this architectural release. The footprint of ARS has grown ever larger as the target market has moved into the enterprise space. The vast array of new features in ARS 7.5 introduced performance issues for some customers; for consultants, the system requirements of ARS 7.5 and 7.6 environments are equally challenging.
Previously it was possible to run a full ARS and ITSM server installation within a VM on a laptop, and even to run multiple VMs in parallel. Our recent build of a Windows VM running Oracle, ARS 7.6 and the full ITSM 7 suite requires far more memory and system resources than ever before. A minimum of 1GB RAM is required for the VM to run at all, around 2GB to be able to develop workflow, and around 3-4GB for performance good enough for customer demonstrations. We hope to see more innovation from BMC so that the performance of the platform and applications supports laptop virtualisation.
Improved Mid Tier Caching
There are several significant improvements to the way Mid Tier handles its cache.
As always, changes to the workflow of a Remedy application need to be carefully planned because of the recaching of both the ARS and Mid Tier server caches. Recaching has been a persistent problem, and BMC are targeting it by improving the way the Mid Tier recaches.
Previously, a refresh of the Mid Tier cache would purge the cache and cause it to be rebuilt. For a system running the full ITSM suite this could take a considerable time, directly impacting users with long downtimes, particularly in installations with multiple Mid Tier servers.
In ARS 7.6.3, the Mid Tier cache now updates only those workflow objects which have changed, rather than rebuilding the entire cache when a single Active Link is modified. This 'smart' recaching only works for ARS servers in production mode, i.e. with the Development cache turned off.
BMC are leveraging the open-source Java caching solution Ehcache to handle the Mid Tier's cache of ARS workflow objects. This tool makes the cache persistent: much as Remedy workflow is cached locally for the Remedy User Tool, Ehcache allows the Mid Tier cache to be written to the file system and reused after a Mid Tier server restarts. This functionality has to be explicitly enabled for the Mid Tier. Cache persistence promises to reduce the start-up time for Mid Tier servers, which is particularly relevant in enterprise environments. Ehcache provides many configuration options, including the ability to change the weight given to different types of objects. Finding the right combination of settings to optimise performance will be challenging and time-consuming for most Remedy support teams.
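To give a feel for the kind of tuning involved, here is a minimal sketch of an Ehcache 2.x-style configuration enabling disk persistence. The cache name, sizes and path below are purely illustrative assumptions, not the Mid Tier's actual settings.

```xml
<!-- Illustrative sketch only: a disk-persistent Ehcache cache that
     survives JVM restarts. Names and sizes are hypothetical. -->
<ehcache>
  <diskStore path="java.io.tmpdir/midtier-cache"/>
  <cache name="workflowObjects"
         maxElementsInMemory="10000"
         eternal="true"
         overflowToDisk="true"
         diskPersistent="true"/>
</ehcache>
```

Even in this tiny example there are trade-offs to weigh, such as how many elements to keep in memory before overflowing to disk, which hints at why finding the right settings takes time.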
The long-hoped-for simplification of the pre-fetch functionality in the ARS Mid Tier is finally here! Previously it was possible to create an XML document listing all Forms which should be loaded into the cache when a Mid Tier server starts up. Creating this XML document was difficult and exceptionally time-consuming, as the process was entirely manual. BMC have implemented some excellent improvements in this area.
- Forms with active links and menus are preloaded into the system’s memory. Mid Tier makes the assumption that if a Form has Active Links or Menus on its fields, it is probably a user interface and will be accessed by users. As we often see menus on Join Forms and backend Forms, it remains to be seen how effective that assumption is.
- For legacy purposes, if a prefetchConfig.xml file exists then all of the forms and views specified in that file are preloaded.
- Views are preloaded according to usage statistics gathered by the Mid Tier server. It's not clear whether these usage statistics also capture the group permissions of the users accessing Forms, as permissions affect the workflow which needs to be cached.
This is a great step and we hope to see more automated, self-improving performance enhancements in ARS 8.0 which will be entirely web-based.
Mid Tier Performance Monitoring
It is now possible to monitor real-time Mid-Tier performance using a JMX console such as JConsole (http://java.sun.com/developer/technicalArticles/J2SE/jconsole.html). We'll be looking at this functionality in more detail to see how this new feature can best be leveraged. This enhancement reinforces BMC's use of Java and standard web technologies to support its offering.
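For readers unfamiliar with JMX, the snippet below is a minimal Java sketch of the mechanism a console like JConsole uses: it reads an attribute from a standard platform MBean. The Mid Tier will expose its own MBeans with its own names; since we haven't yet explored those, this example probes the JVM's built-in memory bean instead.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class JmxProbe {
    // Read current heap usage from the platform MBean server - the same
    // kind of attribute lookup JConsole performs against a Mid Tier JVM.
    public static long usedHeapBytes() throws Exception {
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        ObjectName memory = new ObjectName("java.lang:type=Memory");
        CompositeData heap = (CompositeData) conn.getAttribute(memory, "HeapMemoryUsage");
        return (Long) heap.get("used");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Used heap: " + usedHeapBytes() + " bytes");
    }
}
```

Against a remote Mid Tier you would connect with `JMXConnectorFactory` and a service URL instead of the local platform server, but the attribute-reading pattern is the same.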
Mid Tier Network Performance
One of the main gripes network administrators have with Remedy is its "chattiness": there are a great many interactions between client and server for any user operation. BMC have focused on this area and reduced the number of round trips between the browser client and server. In addition, work has been done to make page loading more efficient. In line with the phasing out of the Remedy User Tool, no development effort has been spent on optimising its network interactions.
In support of the efforts to reduce the network chattiness of the Mid Tier client, a new set of features for managing table data has been added in this version. In summary, the change allows the client to manage table data sets locally, without immediately applying changes to the database. This represents a significant performance improvement over previous versions, where every change to data displayed in a table had to be committed to the database, and the table data refreshed, before the changed data could be shown to users. The change currently affects only tables, and allows table rows to be modified, created or deleted on the local client and all changes then committed as a batch.
This will certainly improve performance where users work with large data sets in tables, and it is possible to leverage the functionality in other areas simply by hiding a table which implements it.
This is a significant enhancement which signals that BMC are thinking carefully about client-server interactions in workflow. However, leveraging its benefits requires workflow redevelopment and careful thought from developers:
- New and modified entries are not sent to the server immediately but in a batched update. Consideration needs to be given to how a failure within a batch update, which is a single transaction, is handled. As usual with Remedy, a failure in a transaction will cause the whole transaction to roll back, and a developer needs to consider how this failure will be presented to users.
- Because modified data is no longer sent to the server on each change, any Filters responsible for data integrity may need to be duplicated in Active Links. If there are many checks against data held in other Forms when modifying or creating table entries, moving those checks to the client may lose the network performance benefits of batch updates.
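As a rough sketch of the batching semantics described above (not the actual Remedy client code), the following hypothetical Java class accumulates row changes locally and commits them in a single all-or-nothing step, rolling the whole batch back on the first failure.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of client-side table batching: edits accumulate
// locally and reach the "server" store only when committed as one transaction.
public class TableBatch {
    private final Map<Integer, String> serverRows;                 // committed state
    private final List<Map.Entry<Integer, String>> pending = new ArrayList<>();

    public TableBatch(Map<Integer, String> serverRows) {
        this.serverRows = serverRows;
    }

    // Row changes are held locally; nothing is sent to the server yet.
    public void modifyRow(int id, String value) {
        pending.add(Map.entry(id, value));
    }

    // Apply all pending changes as a single transaction. If any row fails a
    // validity check, restore the snapshot so the whole batch rolls back.
    public boolean commit() {
        Map<Integer, String> snapshot = new HashMap<>(serverRows);
        for (Map.Entry<Integer, String> e : pending) {
            if (e.getValue() == null || e.getValue().isEmpty()) {  // simulated validation failure
                serverRows.clear();
                serverRows.putAll(snapshot);                        // rollback
                return false;
            }
            serverRows.put(e.getKey(), e.getValue());
        }
        pending.clear();
        return true;
    }
}
```

The single snapshot-and-restore makes the developer's question concrete: when `commit()` returns false, every local edit in the batch is lost at once, and the user interface must explain which change caused it.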
Window Opening Simplified
At some point in the history of the Window Open, Window Loaded and Display Active Link firing conditions, someone got very confused. Here's BMC's explanation of how this ended up working in ARS 7.5:
When you opened a new window in Modify mode, seven sets of active links executed: Window Open (first of two), Window Loaded, Set Default, Search, Window Closed, Window Open (second of two), and Display.
This not only made developing workflow complex but also caused performance issues on the client when duplicate workflow fired unnecessarily. ARS 7.6 has a new Window Open mode called "Modify Directly" which will only trigger Window Open and Display. This will not change legacy behaviour of applications as this new mode has to be explicitly invoked. It seems curious that Window Open was chosen in preference to Window Loaded, but the rationalisation here is welcome.
API Get Set
After changing the values of an entry, this new API command automatically retrieves the entry's resulting data in the same call. This should bring a minor network performance benefit by reducing the number of API calls.
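The pattern can be sketched as follows; `EntryStore` and `setEntryGet` are hypothetical names used purely to contrast one combined call with the older set-then-get pair, and are not the actual ARS API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a combined update-and-fetch call versus two calls.
public class EntryStore {
    private final Map<String, String> entries = new HashMap<>();

    // Old pattern: two separate calls (and so two network round trips).
    public void setEntry(String id, String value) { entries.put(id, value); }
    public String getEntry(String id) { return entries.get(id); }

    // New pattern: one call applies the change and returns the entry's
    // resulting value, halving the round trips for this common sequence.
    public String setEntryGet(String id, String value) {
        entries.put(id, value);
        return entries.get(id);
    }
}
```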