Author: Robert Gideon

EPM/BI Consultant specializing in Infrastructure and support.

Kim Kardashian can get my Essbase server updates

I had the great pleasure of presenting at Kscope17 on the power of Essbase CDFs.  At the end of my CDF presentation this year, I gave a live demonstration of a little CDF that is designed to spark the imagination.

In 2009, Matt Milella presented on CDFs at Kaleidoscope and talked about the top 5 CDFs that his team had created.  At the end, he showed a very cool demonstration of how his Essbase server could send out a tweet using a CDF. This was an amazing display and really inspired me to figure out how to create CDFs.

So, as an homage to Matt’s blog post about how Ashton Kutcher can get his Essbase server updates, I have created an updated version of the Twitter CDF. As Matt states, he used JTwitter back in 2009.  Unfortunately for me, Twitter has long since changed their authentication to use OAuth for security which means that JTwitter doesn’t work anymore.

I did some searching and found Twitter4J, an unofficial Java library for the Twitter API. This library handles the OAuth authentication and allows submitting new status updates, sending direct messages, searching tweets, and more. Between Matt’s original Twitter code, the Twitter4J sample code, and some trial and error, I was able to get the library set up and create a Java class that could send my tweets.

  1. The first step was to download the Twitter4J library.  I added the twitter4j-core-4.0.3.jar file into my lib folder in JDeveloper and added it to my classpath.
  2. Next, I had to set up a new Twitter account (EssbaseServer2).
  3. Then, I went to http://twitter.com/oauth_clients/new and set up my application to get the OAuth keys needed for my code to authenticate.
    TwitterApp
  4. Once I gathered the keys, I put them into a .properties file called “EssbasTweet.properties”.  This file will be placed onto my Essbase server in the %EPM_ORACLE_HOME%/products/Essbase/EssbaseServer/java/udf directory.  Placing the file in the …/java/udf directory puts it into Essbase’s Java classpath, so Essbase will be able to access the file when it’s needed.
    propertiesFile
  5. Next, I wrote my code (based heavily on Twitter4J’s sample code), compiled it, deployed the code to a JAR, and placed the JAR on the Essbase server. (A stripped-down sketch of the class appears after this list.)
    SourceCode
  6. I registered the CDF manually in EAS.
    RegisterCDF
  7. I was able to pretty much reuse Matt’s original calc script as he had it back in 2009 with the exception of using an @CalcMgr function instead of one of the older data functions.
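
For reference, here is a stripped-down sketch of what such a class might look like; the class name and property key names are assumptions, not the exact code from the screenshot above:

import java.util.ResourceBundle;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.conf.ConfigurationBuilder;

public class EssbaseTweet {
    public static String tweet(String message) {
        try {
            // OAuth keys are read from EssbasTweet.properties on the Essbase classpath
            ResourceBundle keys = ResourceBundle.getBundle("EssbasTweet");
            ConfigurationBuilder cb = new ConfigurationBuilder();
            cb.setOAuthConsumerKey(keys.getString("consumerKey"))
              .setOAuthConsumerSecret(keys.getString("consumerSecret"))
              .setOAuthAccessToken(keys.getString("accessToken"))
              .setOAuthAccessTokenSecret(keys.getString("accessTokenSecret"));
            Twitter twitter = new TwitterFactory(cb.build()).getInstance();
            twitter.updateStatus(message);
            return "Tweet sent";
        } catch (Exception e) {
            return "Tweet failed: " + e.getMessage();
        }
    }
}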

Does it work? Well, go and check out the @EssbaseServer2 account for yourself.

While publicly tweeting your data might not be the best idea, hopefully this serves as a spark to ignite your imagination of the power of CDFs. Anything you can do in Java can be implemented in an Essbase calculation. Some attendees of my presentation were pretty excited about the possibilities of communicating with their users by submitting messages using Slack or updating a status on a SharePoint site. The possibilities are limited only by your imagination.

Thanks again to Matt for presenting on CDFs eight years ago. It definitely inspired me to learn more and hopefully this will inspire others to do the same.

There has been some uncertainty about the fate of CDFs with OAC and the Essbase cloud service, but never fear, CDFs are supported but they are limited to local CDFs. More on that in the future.

OAC Backup issue

I have found myself between projects for a couple of weeks which has given me a great opportunity to get hands-on with interRel’s OAC instance. It has been great to crawl around it, kick the tires, and get my hands dirty under the hood, so to speak.

One of the things we had issues with was running a backup at the service level. In OAC, there are two types of backups – service level and application level. The service backup is a full backup of all the runtime artifacts required to restore the service, such as the WebLogic domain, service instance details, and any metadata associated with your service. Basically, it’s like a snapshot of the VM that gets saved to your cloud storage instance (one of the prerequisites of OAC).

As I was playing with OAC and trying to figure out how to administer the product, I noticed that a monthly patch was available. Before applying the patch, I decided to take a service backup just to be safe in case anything happened during the patch.

From the OAC dashboard, I selected Administration. Then on the backup tab, I selected Backup Now.

OAC Backup Now

After 30 minutes or so, I came back to find that my backup had failed. I attempted to run the patch anyway, but the patch process takes a backup first as well, so it failed again.

Eventually, I ended up submitting an SR with Oracle for help. Within about an hour, Oracle Support determined that the USERS tablespace had likely never been created when we provisioned our OAC database cloud instance.

My friend and co-worker, Wayne Van Sluys (http://beyond-just-data.blogspot.com/), ran into this issue at one of our OAC clients as well. Wayne sent over the information that I needed to get connected to our DBaaS instance via Oracle SQL Developer.

When you create an OAC service, one of the prerequisites is setting up the DBaaS service. The connection information you will need is accessible from the Database Cloud Service console. The Public IP address and connection string on the Service Overview page gives you what you need to know along with your “sys” schema name and password.

DBaaS connection info

In addition to this information, you also need to edit the Access Rules for the DBCS service to allow outside connections on port 1521. I enabled this Access Rule only while I made the change.

DBaaS access rules

In SQL Developer, I was then able to set up the connection to our DBCS instance.

SQL Developer connection

With the connection made, I could then submit the SQL to add the USERS tablespace using the “CREATE TABLESPACE” command. I will leave it to the reader to consult a DBA on the command and the options to supply with it.
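
For illustration only, a minimal version of the statement might look like the following; the datafile path, size, and options are assumptions, so confirm the right settings with a DBA:

CREATE TABLESPACE users
  DATAFILE '/u02/app/oracle/oradata/ORCL/users01.dbf' SIZE 500M
  AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;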

After creating the USERS tablespace, the backups are now running successfully and I was able to apply the latest patch to our OAC environment.

Success

Success!

Calc Manager 11.1.2.4.010 Issue

I applied the Calc Manager 11.1.2.4.010 patch to a sandbox VM in anticipation of my upcoming Kscope presentation “Essbase CDFs Demystified.” As I was working on my CDF demos for this presentation, I found that the @CalcMgrMDXExport CDF was having an issue as my Essbase application started up:

[Thu Jun 15 12:45:49 2017]Local/Samp2///8632/Warning(1200490)
Wrong java method specification [com.hyperion.calcmgr.common.cdf.MDXExport.exportData(String,String,String,String,String,String,String,String,String,String,String,String,String,String,String)] (function [@CalcMgrMDXExport]): [specified method not found:com.hyperion.calcmgr.common.cdf.MDXExport::exportData(java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String)]

This error is saying that the exportData method of the MDXExport class with 15 String input parameters is not valid. I peeked at the code and found that in 11.1.2.4.010, the exportData method now expects 19 String parameters. It sounds like this was not planned, so we can look forward to a new @CalcMgrMDXExport-like CDF in the near future.

If you have already applied the Calc Manager 11.1.2.4.010 patch, you can apply a quick fix by changing the CDF registration and editing your calculation scripts to include four additional null strings at the end of your @CalcMgrMDXExport calls.

To fix the issue, you can run the following MaxL statement to register the CDF with the appropriate number of parameters:

create or replace function '@CalcMgrMDXExport' as 'com.hyperion.calcmgr.common.cdf.MDXExport.exportData(String,String,String,String,String,String,String,String,String,String,String,String,String,String,String,String,String,String,String)' spec '@CalcMgrMDXExport(key,usr,pwd,file,app,db,svr,colAxs,rowAxs,sep,msg,Unique,Alias,supZero,rowHdrs,whereMDX,srcMap,tgtMap,supColH)' with property runtime;

I had to resort to shorthand in the “spec” field because Essbase only allows 128 bytes in that field when you register CDFs through MaxL or EAS. I believe there may be more leeway for longer fields if you use the Java API to register CDFs.
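
With the function re-registered, an existing @CalcMgrMDXExport call just needs four more arguments. A hypothetical call might look like this, where the quoted placeholders stand in for whatever values your script already passes and the four trailing empty strings cover the new parameters:

/* Hypothetical call - replace each placeholder with your existing values.
   The last four arguments (whereMDX, srcMappings, targetMappings,
   supressColHeaders) can be passed as empty strings if unused. */
"Dummy"(
    @CalcMgrMDXExport("key","usr","pwd","file","app","db","svr",
        "colAxs","rowAxs","sep","msg","unique","alias","supZero","rowHdrs",
        "","","","");
);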

After running the MaxL to register the CDF and restarting my application, it looks like all is well with the world:

[Thu Jun 15 15:47:08 2017]Local/Samp2///1960/Info(1200445)
External [GLOBAL] function [@CalcMgrMDXExport] registered OK

The additional fields needed for the @CalcMgrMDXExport method in this version of Calc Manager are as follows (in order):

  • String wheremdx
  • String srcMappings
  • String targetMappings
  • String supressColHeaders

The wheremdx field, if used, allows me to filter my results coming back from the source application. This field is optional and can be left null.

The srcMappings and targetMappings fields, if used, allow mapping members from source to target. This would allow me to map account 1234 to 4567 on the export by providing “1234” in the srcMappings field and “4567” in the targetMappings field. These fields are also optional and can be left null.

The supressColHeaders field accepts a string “true” or “yes” to suppress the column headers. Any other value (including null) will result in the output file containing the headers.

I have submitted an SR and expect a bug to be filed in the next few days. I’ll submit a new post once we have an updated Calc Manager patch that fixes this issue and includes a new @CalcMgr* CDF.

Do Oracle’s OOW cloud announcements signal the death of on-premises EPM?

This week at Oracle Open World, Oracle has announced more details around a few new EPM Cloud products (Essbase Cloud Services, PCMCS, and DMCS) in addition to their already existing stack of SaaS cloud services (PBCS, EPRCS, EPBCS, ARCS, and FCCS).
With these new offerings added to their stable, is this the death of on-premises EPM as we know it?

Oracle’s stated direction of product strategy for the EPM products is to tap into unserved business users. EPM has been predominantly used by corporate finance departments from the beginning. At one point, Hyperion was marketed to CFOs and not CIOs because it could be run on an administrator’s computer under a desk without IT involvement. The evolution of EPM cloud is a return to the golden age of Essbase – easily created departmental applications that provide better analytic ability than Excel alone.

The EPM cloud products are really all about allowing easy adoption for non-traditional EPM users and providing rapid value to customers. Spreadsheets still dominate at small to medium companies. The cloud offerings really simplify life for companies that struggle with maintaining servers and lack the technical skill to design an optimized solution. With the EPM cloud products, it’s very easy to roll out a Workforce or CapEx application in EPBCS by sending out the URL and paying the monthly subscription fees. The cloud also puts the business users in the driver’s seat by removing the need for IT resources to get them up and running.

As we know, there is a long way to go yet on the EPM Cloud roadmap to get all of these products working well together. For instance, how exactly do we get data from our EPBCS application into ESSCS for additional reporting? How about my BICS dashboards using data from my ESSCS departmental cube or my PBCS budget data? With Oracle’s growth in the cloud and continued development of additional features and functions in the cloud products, it’s clear that these drawbacks will be remedied in time.

This whole cloud thing is just a fad, it will pass, right?

Even Mark Hurd stated during his keynote on Monday that the cloud is no fad; it’s a generational shift that is here to stay. Oracle has stated publicly that they fully intend to continue to support and develop EPM on-premises solutions. Matt Bradley, SVP for EPM and BI Development at Oracle, has said that Oracle expects most companies will enter into a hybrid cloud implementation if and/or when they decide to move their investments into the cloud. They have developed tools in DRM and FDMEE to support these hybrid cloud implementations. The shift to cloud computing is happening, but it doesn’t signal the immediate end of the line for on-premises EPM. Even once the cloud products have fully matured, there may continue to be valid use cases for on-premises EPM products.

So, what is the future of my on-premises investment?

The market indicates that there is a healthy appetite for cloud solutions, and all indications are that Oracle expects even large customers to eventually move their EPM investments to the cloud. While the on-premises products are still being developed, the availability of new on-premises versions has slowed. Over the last several years, we were blessed with several major releases of EPM software, from 9.3.1 to 11.1.2.4. Oracle noted that customers adopt new software releases roughly every two to three years, so we expect new on-premises releases to fall more in line with those adoption cycles. We should expect to see some new features and functionality through Patch Set Updates to the latest code line in between major upgrades. Future on-premises releases will begin to showcase a simpler component architecture and a focus on usability.

What should we do with our on-premises EPM environments now?

If you haven’t already upgraded to version 11.1.2.4, it is highly recommended. The 11.1.2.4 code line has some great features like better support for Essbase Hybrid Aggregation, improvements in HFM consolidations, FDMEE data synchronization between EPM applications, and the new Financial Reporting Web Studio. I have been on several calls with customers who are still working on old releases and the Classic Essbase add-in. It is time to move on and update those environments. If you have upgraded to version 11.1.2.4, it’s highly recommended to keep up with the latest Patch Set Updates on at least a quarterly basis. Applying the latest patches can sometimes cause issues, so thorough testing of new patches is always recommended before implementing them in production.

Staying on the latest release also allows companies to bridge from on-premises to the cloud much more easily. For example, as mentioned earlier, FDMEE and DRM already support hybrid cloud implementations. Oracle doubled down at OOW 2016 on its assertion that cloud computing is the future. While on-premises EPM software isn’t going away any time soon, the cloud products are going to continue to mature rapidly. As the cloud products develop and the integrations between them become more defined, more and more companies are going to see the benefits of moving their EPM investments into the cloud.

FDMEE and Essbase ASO clears

FDMEE to Essbase

Last month we covered FDMEE and Essbase (BSO) calculations. This month, let’s take a look at FDMEE integration with ASO. With BSO, we set up calculation scripts to do a clear before the data load and an aggregation after the load. With ASO, there are no calculation scripts so we can’t use the same functionality.

Partial clears in ASO can be done using MaxL, the Essbase scripting language. As those of you familiar with Essbase already know, an aggregation is not needed after a data load to ASO because those cubes aggregate dynamically.

There are several ways to accomplish these clears, most of which revolve around using Event Scripts in FDMEE. Event Scripts are written in Jython or Visual Basic. Jython is an implementation of the Python programming language, designed to run on the Java platform. Jython is a more powerful language than VBA and it’s fairly easy to learn and work with, so that is what I use when writing Event Scripts.

We have many intervals in the FDMEE integration process where we can insert custom code into an Event Script. Each FDMEE process has a before and after script that can be customized. Since we want data to remain in our Essbase ASO application for as long as possible, we will use the BefLoad script to run our clear process.

It’s possible to call a batch file that will execute your MaxL script to run the clear, but I like to call MaxL directly from Jython. This method requires that the Essbase Client is installed on your FDMEE server so that the startMaxl.cmd script is available. Of course, we should be using encrypted MaxL scripts so that our passwords are not visible in either the Jython BefLoad.py script or in our MaxL script.

In this hypothetical situation, our Club Electronics stores in Delaware have submitted their ATM Sales. Let’s say that Club Electronics submits a new file each day to update our ASOSamp application with the month-to-date sales numbers. To make sure that we are loading the correct data each time, we need to clear the existing Delaware ATM sales for Club Electronics for the current month and current year.

This scenario could be accomplished by hard coding the Delaware values in MaxL, but we have other states that submit similar files using different locations in FDMEE. To make our clear script usable by multiple stores and entities, we can use Jython to pass variables to MaxL and dynamically clear portions of our ASO cube based on the location (or the data load rule, an integration option set in FDMEE, or many other variables).

So, let’s begin in FDMEE with our file integration using ASOSamp as our Target Application. I have already set up ASOSamp as a Target Application, created our Import Format for a comma delimited file, created our Location for Club Electronics ATM Sales (CE_ATM_Sales), and created the Data Load Rule to load this data.

MaxL Script

Our MaxL script accepts three parameters: Geography, Month, and Stores. We have our MaxL encrypted so that no passwords are stored in plain text. The trick to getting this to work, I have found, is using double quotes around the MDX expression in the MaxL statement. This allows MaxL to properly evaluate the variables. You can hard code some or all of the MDX tuple; I did a little of both here.

ASOSamp MaxL Clear.csv
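
In case the screenshot is hard to read, here is a rough reconstruction of such a script (not the exact code; the member names and encrypted key tokens are placeholders). $1, $2, and $3 are the Geography, Month, and Stores parameters, and the double quotes around the MDX region spec let MaxL expand them:

/* Rough sketch - Curr Year is hard coded, the rest comes in as parameters */
login $key 4A5C... $key 9F2B... on localhost;

alter database 'ASOsamp'.'Sample' clear data in region
  "CrossJoin(CrossJoin({[$1]},{[$2]}),CrossJoin({[Curr Year]},{[$3]}))" physical;

logout;
exit;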

Jython BefLoad.py Script

In the BefLoad script, we need to test which FDMEE Location is being loaded to ensure we are running the proper code. This test can also be done at the Load Rule level if you have multiple rules in the same location. Next, the script calls startMaxl.cmd, which is installed as part of the Essbase Client installation, and passes the variables to the MaxL script.

ASOSamp BefLoad

Passing Variables

The trick to getting all of this to work is the ability to pass variables: either dynamic values that come from FDMEE (Location name, POV month, etc.) or static values that we have coded into the Jython script. In the example above, I show how to pass a variable containing a space from Jython to MaxL. By escaping the double quote (") with a backslash (\), we are able to pass the variable from Jython to the Windows command prompt surrounded in double quotes. Without the escape character, the variable will not be passed correctly.
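
A rough sketch of the idea follows; the client path, script name, and hard-coded values are assumptions, not the exact script from the screenshot:

import subprocess

# LOCNAME and PERIODNAME are standard FDMEE context keys
location = fdmContext["LOCNAME"]
period = fdmContext["PERIODNAME"]

if location == "CE_ATM_Sales":
    fdmAPI.logInfo("Submitting MaxL to selectively clear ASOSamp")
    # Escaped double quotes (\") keep multi-word values such as
    # "Club Electronics" intact when passed through the Windows
    # command line into the MaxL positional variables.
    cmd = "D:\\Oracle\\Middleware\\EPMSystem11R1\\products\\Essbase\\EssbaseClient\\bin\\startMaxl.cmd "
    cmd += "D:\\Scripts\\ClearASOSamp.mxls "
    cmd += "\"Delaware\" \"" + period + "\" \"Club Electronics\""
    retcode = subprocess.call(cmd, shell=True)
    if retcode == 0:
        fdmAPI.logInfo("MaxL commands to ASOSamp were successful")
    else:
        fdmAPI.logInfo("MaxL clear failed with return code " + str(retcode))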

Logs

In our FDMEE process logs, we can see that the code is running properly thanks to the fdmAPI.logInfo lines we added to the BefLoad script:

2016-07-22 17:53:11,871 INFO [AIF]: Executing the following script: C:\FDMEE/data/scripts/event/BefLoad.py
2016-07-22 17:53:11,923 INFO [AIF]: ======================================================================
2016-07-22 17:53:11,923 INFO [AIF]: BefLoad Script: Begin
2016-07-22 17:53:11,923 INFO [AIF]: ======================================================================
2016-07-22 17:53:11,923 INFO [AIF]: ======================================================================
2016-07-22 17:53:11,923 INFO [AIF]: Submitting MaxL to selectively clear ASOSamp
2016-07-22 17:53:11,923 INFO [AIF]: ======================================================================
2016-07-22 17:53:13,141 INFO [AIF]: ======================================================================
2016-07-22 17:53:13,141 INFO [AIF]: MaxL commands to ASOSamp were successful
2016-07-22 17:53:13,141 INFO [AIF]: ======================================================================
2016-07-22 17:53:13,141 INFO [AIF]: ======================================================================
2016-07-22 17:53:13,141 INFO [AIF]: BefLoad Script: END
2016-07-22 17:53:13,141 INFO [AIF]: ======================================================================
2016-07-22 17:53:14,825 INFO [AIF]: EssbaseService.loadData - START

In Essbase, we can also verify that the MaxL code is executing properly by checking the ASOSamp application log:

[Fri Jul 22 17:53:12 2016]Local/ASOsamp///6600/Info(1013210)
User [admin@Native Directory] set active on database [Sample]

[Fri Jul 22 17:53:12 2016]Local/ASOsamp///6544/Info(1042059)
Connected from [::1]

[Fri Jul 22 17:53:12 2016]Local/ASOsamp/Sample/admin@Native Directory/6544/Info(1013091)
Received Command [AsoClearRegion] from user [admin@Native Directory]

[Fri Jul 22 17:53:13 2016]Local/ASOsamp/Sample/admin@Native Directory/6544/Info(1270602)
Removed [25] cells from input view. Partial Data Clear Elapsed Time [0.258334] sec

[Fri Jul 22 17:53:13 2016]Local/ASOsamp/Sample/admin@Native Directory/6544/Info(1013273)
Database ASOSamp.Sample altered

[Fri Jul 22 17:53:13 2016]Local/ASOsamp///7080/Info(1013214)
Clear Active on User [admin@Native Directory] Instance [1]

With the ability to call MaxL directly as part of FDMEE scripts, your integration is only limited by your imagination. To take this post another step further, you might update substitution variables in Essbase based on the FDMEE POV being loaded, or even build aggregate views using MaxL in the AftLoad.py script, without much additional effort.

FDMEE and Essbase calculations

Since January, I have been on a project using FDMEE against Essbase and Planning applications. There are certainly many more companies using FDMEE with HFM, so I wanted to share some things I have learned about FDMEE and Essbase.

The topic of this post is the process of running calculations in Essbase. Typically with Essbase data loads, especially repetitive loading during a close cycle, we want to clear out the old data first and then load in the new data. Of course, after the data loads to Essbase (BSO), we will likely need to run an aggregation.

Lucky for us, Oracle baked in the ability to run calculations on your Essbase Target Applications at certain points during the load process. In this post, we will cover how to set up a clear script before a data load; setting up an aggregation script after the load is the same process with a different script.

First, we need an Essbase calculation script to run a CLEARDATA command. For this occasion, I wrote a quick little CLEARDATA script to clear out Sales in California for whatever period I happen to be running in FDMEE. I have saved the script in the Sample.Basic application as “fdmClear”.

SET CALCTASKDIMS 4;
SET CALCPARALLEL 4;
SET UPDATECALC OFF;

SET RUNTIMESUBVARS
{
POVPer = "";
};

FIX(&POVPer, "Sales", "California")

CLEARDATA Actual;

ENDFIX

Notice the use of RUNTIMESUBVARS for my POVPer variable. By defining these in our calculation script, FDMEE will be able to pick up the variable and pass a value when it runs the calc script.

To set up these calculations, we need to be on the Setup tab of FDMEE and have the Target Applications window up. If we switch to the Calculation Scripts tab, we can add our scripts.

EsbCalcs1

We need to click the Add button to set up our clear script. I have entered the “fdmClear” name for the script and clicked the pencil icon to add my calc script parameter. In the “Set Calculation Script Parameters” window, I clicked the magnifying glass icon. This forces FDMEE to look back at the calc script and pick up the POVPer variable.

EsbCalcs2

Now we have a choice as to what value to assign to our POVPer variable.  In our simple use case, we will use the POV Period option and click OK.

EsbCalcs3

Next, we can select the appropriate scope for the calculation. For instance, if we have a clear for this application that only affects a particular slice of data, we would want to use a more granular scope like data rule or location. Script scopes with higher granularity override those with lower granularity. So, if we have multiple scripts with different scopes, the most granular one takes effect (data rule scope overrides location scope, location scope overrides category scope, and category overrides application).

EsbCalcs4

For any selections other than Application, a Scope Entity box will appear and allow you to specify what rule, location, or category to use. I chose Location and selected my Loc1 sample location.

The next choice is to decide when to execute the calculation: before the data load, after the data load, before the check process or after the check process.  In my case, I selected Before Data Load.

EsbCalcs5

Finally, if we end up with multiple calculations with the same script scope, scope entity, and event, we can specify a Sequence value to order the calculations appropriately. I only have one calculation, so this is left null.

Calculation Script Parameters

I was curious if a regular substitution variable would work, so I created a Market subvar (mkt) on the Sample.Basic application and a Measures subvar (acct) as a global variable to test.

EsbCalcs6

EsbCalcs7

No dice. When we attempt to add our substitution variables in the parameter names window, FDMEE is only able to retrieve the RUNTIMESUBVARS defined within the script.

EsbCalcs8

Getting Started with FDMEE (for on-premises and hybrid cloud implementations)

Ever since Hyperion Application Link (HAL) died, people have looked for ways to help them load data and metadata into their Oracle EPM applications. ODI became a favorite tool of several developers due to its flexibility and its ability to load to Essbase, Planning, and HFM. However, a complicated and in some cases very expensive licensing structure makes ODI less appealing to EPM practitioners today than it was in the past.

You may be thinking, “But I get to use ODI for free with my Planning Plus/Financial Management Plus licenses.” That is true; however, according to the EPM Licensing Guide, the Restricted Use License for ODI only allows the use of the embedded agent within the ODI Studio. It does not allow for the use of standalone agents. Standalone agents are the key to running the ODI scheduler, so you are effectively limited to running your ODI scenarios manually from ODI Studio.

As you may know, FDMEE was released in the 11.1.2.3 version and has really blossomed in the 11.1.2.4 release.  As Oracle has stated on their road map, FDMEE will become the favored solution for loading data into and synchronizing data between EPM applications. That transition is well on its way.

So, let’s pretend that you are a customer who has “seen the light” and purchased FDMEE licenses (and at least one adapter license) and are now ready to implement. Where do you begin after installation?

Generating the System Folders

Setting up the FDMEE file structure should be one of the first things that you do in a new implementation of FDMEE. This enables importing files into FDMEE and creates all of the folders needed for FDMEE to work properly.

From inside FDMEE, click the Setup tab.

FDMEESetup1

Next, click the System Settings link on the left side.

FDMEESetup2

Enter your application root folder directory. C:\FDMEE is my directory on this sandbox VM as all of my products are deployed to one server. You may use a UNC path for the application root folder instead of an absolute path. Click on Create Application Folders.

FDMEESetup3

A confirmation message is displayed:

FDMEESetup4

Period Mapping

Before we get too excited about setting up source systems and target applications, I like to make sure that my POV has all of the required elements first. It’s pretty easy to forget the period or category mappings when first setting up FDMEE, and then we get all sorts of strange messages when trying to run our first integration.

Before setting up your period mapping, we need to take a look at the applications that we are going to be integrating with. What is the most granular period of time for all of your applications? Is it Monthly, Weekly, Daily, or something else entirely? The most granular period level determines how your Global Period Map is set up.

One client of mine has a daily application, several monthly applications, and a yearly application. That forces us to create a period mapping for every day in the Global Period Map. We then use an application mapping to choose the first of each month as a valid period for the monthly applications. The yearly application similarly uses the first day of its fiscal calendar for each year in the application map.

In FDMEE 11.1.2.4, Oracle added the Excel Interface which allows us to download data from the FDMEE tables into Excel. This is a great way to fill out your Period Mapping data without painfully entering each day into the FDMEE interface. I will typically start by adding a couple of periods into the Period Map manually to seed it with some data before exporting it to Excel.

To seed the Period Map, select the Period Mapping link on the Setup tab in FDMEE.

FDMEESetup5

Click the Add button to add the period maps.

FDMEESetup6

Enter the appropriate data for the Target Period and Year Target columns. Click Save once complete.

FDMEESetup7

Notice the Period Name field is Month-Year. This is due to a poorly documented “feature”: Period Names must not include spaces if we are going to use the FDMEE batch script utilities to kick off data load rules.

To fill out the rest of FY16, let’s use the Excel Interface. Click on the Excel Interface link on the Setup tab of FDMEE.

FDMEESetup8

Open the Entity Type drop-down box and select Period Mapping.

FDMEESetup9

Enter a file name and click the Download button. FDMEE will append the .xls extension for you.

FDMEESetup10

The file will be saved in your FDMEE application folder location.

FDMEESetup11

Next, I will open the Period.xls file that I saved and drag the PERIODKEY column down to auto-fill the rest of FY16.

FDMEESetup12

I do the same for the other columns with data, but run into a problem with the text field PERIODDESC.

FDMEESetup13

To fix that, I write a quick formula using the TEXT() function to build the correct PERIODDESC based on the PERIODKEY field. These functions come in very handy if you are working with a daily Period Mapping table.
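
For example, assuming the PERIODKEY dates are in column A and the descriptions follow the Month-Year pattern used above, a formula along these lines rebuilds the description (the exact format string depends on your data):

=TEXT(A2,"mmm-yyyy")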

FDMEESetup14

Next, copy the formulas and paste the values over the incorrect PERIODDESC entries.

FDMEESetup15

The Excel Interface files have a named range in them that corresponds to the FDMEE repository table name. Any time that I have inserted records into a table like this, the range always seems to be thrown off. The last step is to fix the named range and make sure that it includes your newly added records.

In Excel, click on the FORMULAS ribbon and click the Name Manager button to edit the named range.

FDMEESetup16

Edit the range to include all of the data cells and the header records.

FDMEESetup17

Save your file back to the FDMEE application folder location. Now, we need to upload the file to FDMEE to insert our Period Mapping records.

On the Excel Interface page, under the “Upload From Excel” box, click the Select button to browse to our Period.xls file and click OK.

FDMEESetup18

Click the Upload button to upload the file. There is a log displayed on the Excel Interface screen that shows the status of your actions.

FDMEESetup19

Now, browsing back to the Period Mapping task, we can see that our periods were imported properly.

FDMEESetup20

Adding additional periods is an ongoing maintenance item that an FDMEE administrator will need to perform periodically. Unfortunately, the Excel Interface only allows adding new data to the TPOVPERIOD table; it does not allow you to update existing data.

Category Mapping

The final bit of setup for this post is the Category Mapping. The category maps are much less dynamic than the Period Mapping and usually will not require as much maintenance.

For the categories (or scenarios), I typically add them manually through the FDMEE user interface. Simply click on the Category Mapping task in the FDMEE Setup tab and add the categories (or scenarios) as needed. Here I created a Budget category that will map to the “Plan” dimension in my application.

FDMEESetup21

These are some of the first steps that I would typically take when getting started in FDMEE. For now, this is where I will leave you. There are many more things to do, such as setting up your source and target applications and creating import formats, locations, data load rules, and data maps. The FDMEE administration guide is an excellent resource for those tasks as well as for scripting in Jython. I recommend bookmarking the EPM Documentation site, as you will use it often.

The times they are a-changin’

Collaborate ’16 has come and gone. I was fortunate enough to be able to join in on Monday of the conference for the OAUG Hyperion SIG meeting. As part of the conference, I was able to join a couple of road map sessions and speak directly with Oracle EPM marketing folks; it is clear that we are in the midst of change.

To be clear, I do not mean to directly equate today’s technological shift with the political, social, and cultural upheavals of the early 1960s. To do so would be a slap in the face to the many heroes of the time, especially those that gave their lives for equality. Bob Dylan created an anthem of change and expressed his feelings in generalities that can fit many circumstances. That is what makes his song relevant for so many even today.

I was recently watching a webcast replay by Joe Aultman, a guy who I really respect and consider a friend, about Essbase and whether it belongs under the control of IT or the business. Joe gave this presentation live during Kscope15 for those lucky enough to be there. During the presentation, Joe went through an excellent history of Essbase. He spoke about how it started as a tool that was sold to Finance and sat on a server under someone’s desk at many companies. How exciting those early days were.

I have been working with Hyperion directly for more than ten years. Before that, I worked closely with the Hyperion Enterprise administrator and he showed me a few things so I could back him up. We ran Enterprise on a Citrix server, but we pretty much had full control of the box with very little IT intervention at all. It was fantastic! Thankfully we didn’t ever need a whole lot of help with that environment.

About a year later when I started with Essbase, a co-worker showed me how to install Essbase 6.5.4 on my desktop so that I could play with an outline to replace that old Enterprise application. Our installation in production was version 9.3.1, though. We had IT to manage our servers now. I had to practically beg to get a batch script scheduled on the server for nightly backups of our data.

With the complexities of the EPM System today it’s hard to imagine anyone implementing Essbase without IT support. With Oracle’s EPM and BI Cloud Services, what’s old is new again. Departmental applications can once again flourish as the Enterprise Planning Cloud and the upcoming Essbase Cloud Service offerings reduce the rigidity of IT and allow the business to control their applications.

There is no doubt that Oracle is making a strong push for customers to join the cloud revolution. That push is strong enough to make infrastructure consultants like me who are heavily reliant on the on-premises software model take notice. So, what is an infrastructure person to do? Diversify.

Luckily, I started in Hyperion with an application background. I learned Essbase inside and out at Wells Fargo Home Mortgage, then I went into consulting and learned Planning, FDM, ODI, infrastructure, OBIEE, enough to be dangerous in HFM, FDMEE, etc. I think the real sweet spot for developers like me is in data integration. The technology will keep changing, but we are always going to need to move data from one application into another.

As the underlying technology changes, this blog must also change to reflect the current trends and hopefully help anyone who stumbles across this site. Honestly, this blog was never about one particular thing. While I have primarily focused on infrastructure, there are a lot of areas in the EPM and BI world to be explored.

Come gather ’round people
Wherever you roam
And admit that the waters
Around you have grown
And accept it that soon
You’ll be drenched to the bone
If your time to you
Is worth savin’
Then you better start swimmin’
Or you’ll sink like a stone
For the times they are a-changin’.

-Bob Dylan, The Times They Are a-Changin’, 1964

RedCloud

Pi Day

Today is March 14, written as 3/14 in the United States, and also known as Pi Day for us Yanks. At interRel, it’s known as Dr. Troy’s Pi Day.

Dr. Troy Seguin, PhD – he’s not a real doctor (insert wink emoticon here) – is a good friend of mine and a colleague at interRel. Last year, on the “ultimate Pi Day” (3/14/15), interRel declared that we would forever name Pi Day “Dr. Troy’s Pi Day”.

interRel asked Troy what charity he would like to support with a donation in his honor and he chose the National Girls Collaborative Project (NGCP), an organization that encourages girls to pursue careers in science, technology, engineering, and mathematics (STEM). I found this to be very cool because Dr. Troy has three handsome and very mischievous sons.

Dr. Troy is a pretty cool guy, despite loving mathematics so much. We talked once about how he would love to make math more accessible for underprivileged kids. I just thought it was neat that a guy who is the father to boys would care so much about making sure that girls have an easier time getting into a STEM field.

Anyway, in honor of Dr. Troy and since there is no better day to promote a STEM opportunity to young ladies, I wanted to share a quick article that I saw last Friday. I have a daughter that is very STEM-minded and a huge fan of Marvel and the Captain America movies, so this article was very interesting for my family.

On Friday, USA Today announced that, “Marvel is partnering with the National Academy of Sciences’ Science & Entertainment Exchange for a program in which girls ages 15 to 18 and in grades 10 through 12 can submit projects they believe can change the world with a short video that demonstrates their idea and explains its far-reaching potential.”

Through this contest, one lucky young lady can win an internship at Marvel Studios. The bad news for my daughter is that she’s just a couple of years too young to enter.

Happy Pi Day to everyone celebrating and thank you to Dr. Troy; not only for putting up with my critical grading of your calc scripts but also for your love of mathematics and desire to share that love with others to make this world a better place.

CDF Series Part 3: Creating CDFs

If the prebuilt CDFs from Oracle that we discussed last time don’t meet your needs, it is also possible to create your own Custom Defined Function. Using a Java integrated development environment (IDE) is the simplest method for beginners, although it is possible to create a JAR using nothing more than a text editor and the Java command-line tools. There are several IDEs available for Java; however, I prefer Oracle’s JDeveloper as it makes this work very easy.

JDeveloper

JDeveloper makes Java development and deployment to JARs very easy. I am currently using JDeveloper 11.1.1.9 because it is compatible with both Java 1.6 and 1.7. EPM System 11.1.2.4 ships with Java 1.6, so using a version of Java 6 will prevent errors due to incompatible code.

The software can be downloaded from the Oracle Technology Network. For programming Essbase CDFs, select the Oracle JDeveloper Java Edition download. Unzip the downloaded files to the C:\ drive for simplicity; this will create a C:\jdeveloper folder. At the first startup, point JDeveloper to a JDK installation. The Oracle EPM JDK at C:\Oracle\Middleware\jdk160_35 can be used to ensure compatibility. Any Java 6 JDK could be used; later update releases are recommended due to security vulnerabilities in the earlier Java 6 releases.

Next, we need to create a generic Java project and add a new Java Class where we will enter our code. When writing a Java CDF that may be called using the RUNJAVA method, we need to add an additional parameter to the “main” method. Typically in Java programming, the main method only accepts a String array (String[]) called “args”. With Custom Defined Functions, we must also accept a calculator context parameter alongside our String[] args. The main method signature will appear similar to:

public static void main(com.hyperion.essbase.calculator.Context ctx, String[] args) {
   // Essbase passes its calculator context plus any arguments from the calc script
   doSomething();
}
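
Such a class can then be invoked from a calculation script with the RUNJAVA command (the class name and arguments here are hypothetical):

RUNJAVA com.mycompany.essbase.MyCdf "arg1" "arg2";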

By adding the calculator context to our Java Class, JDeveloper will show errors because it is unable to find the definition of the context object. In order to properly compile our Java Class into a JAR, we must first make JDeveloper aware of the context object, which is part of the Essbase server in a file called Essbase.jar. To add Essbase.jar to the classpath for our code, right-click the project in JDeveloper and select Project Properties… as shown below:

Jdev1

Next, navigate to the Libraries and Classpath option of the Project Properties and click the Add JAR Directory button.  Browse to the Essbase.jar file and click Select.  If done properly, your screen will be updated showing the Essbase.jar file as a Classpath entry as shown here:

Jdev2

To deploy our Class to a JAR, we must first create a deployment profile in JDeveloper.  To create a deployment profile, right-click the project, select Deploy >> New Deployment Profile… as shown below:

Jdev3

In the New Gallery dialog box, select Deployment Profiles then select JAR File in the right pane like this:

Jdev4

In the Create Deployment Profile – JAR File dialog box, enter a name for the Deployment Profile.  This will become the name of your JAR file.

Jdev5

In the JAR Options dialog box, select the checkbox to Include Manifest File (META-INF/MANIFEST.MF) and browse to select the main class.

Jdev6

Next, to deploy our Class to a JAR file, simply right-click on the project again and navigate to Deploy >> ProfileName, then click the Finish button to deploy the Class to a JAR file.

Jdev7

Now that we have our JAR file, the next steps are to copy the JAR to our Essbase server and register the function in EAS or by using MaxL.

RegisterCDF
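
In MaxL, the registration follows the same pattern shown in the screenshot; a hypothetical statement for a global function (the class, method, and function names are made up for illustration) might be:

create or replace function '@RANDOM' as 'com.mycompany.essbase.RandomCdf.random' spec '@RANDOM()' comment 'Returns a random double' with property runtime;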

After restarting Essbase, we should then be able to use our new code in an Essbase calculation script:

SET UPDATECALC OFF;
"Budget" = @CREATEBLOCK("New York"->@RELATIVE("Product",0));
FIX ("New York", "Sales", "Budget",@RELATIVE("Product",0))
"Jan"(
  @INT(@Random()*100)+1;
)
ENDFIX
AGG (Product, Market);


CDF Tips for Java Developers

Custom Defined Functions should be programmed using static methods in Java. These classes typically are not overridden and are used as utility classes, so using static methods is the best practice.

CDF methods can accept any number of parameters. These parameters may use any of the following data types:

  • boolean
  • byte
  • char
  • com.hyperion.essbase.calculator.CalcBoolean
  • float, double
  • java.lang.String
  • short, int, long
  • arrays of any of these types

The CalcBoolean object is defined in the Essbase.jar file; it is equivalent to the Essbase calculator’s Boolean data type and has values of TRUE, FALSE, or #Missing.

Return values to Essbase can be void or any of the above data types. Returned data is mapped to an Essbase-specific data type. String return values are mapped to a String. Boolean return values are mapped to the CalcBoolean data type. Any other return value is mapped to a double.

CDF names must start with the @ symbol; the rest of the name can contain letters, numbers, or the following symbols: @, #, $, and _. CDF names must not include spaces.

If multiple users run the same CDF at the same time, its variables can be shared and overwritten. For thread safety, I have used the synchronized keyword on methods in Java with success.
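
A minimal sketch of the idea (the class, method, and shared state are hypothetical):

public class CdfCounter {
    private static long counter = 0;

    // synchronized prevents concurrent calc threads from
    // interleaving updates to the shared counter
    public static synchronized double nextValue() {
        counter = counter + 1;
        return counter;
    }
}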

Any passwords or other properties that may need to be changed can be stored in a *.properties file in the %EPM_ORACLE_HOME%\products\Essbase\EssbaseServer\java\udf directory for global (server-wide) functions, or in the %EPM_ORACLE_INSTANCE%\EssbaseServer\essbaseserver1\app\ApplicationName directory for application-specific functions. Your methods can then read the *.properties file as a resource at runtime. This is a better approach than saving passwords directly in your Java code.
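
A minimal sketch of reading such a file from the classpath at runtime (the file and key names are illustrative); note the explicit close, which keeps the code Java 6-friendly:

import java.io.InputStream;
import java.util.Properties;

public class CdfConfig {
    public static String getProperty(String key) throws Exception {
        Properties props = new Properties();
        // resolves against the Essbase Java classpath, e.g. the java/udf directory
        InputStream in = CdfConfig.class.getResourceAsStream("/EssbaseCDF.properties");
        try {
            props.load(in);
        } finally {
            in.close();
        }
        return props.getProperty(key);
    }
}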

Custom Defined Functions open Essbase calculations up to a world of possibilities. There are several prebuilt options from Oracle that may meet needs outside the traditional Essbase calculator engine. If the existing prebuilt code does not solve your specific need, custom code can easily be implemented to solve almost any calculation need in Essbase.