
Thursday, 13 October 2011

PageFlowScope with Unbounded Task Flows: the magic sauce for multi-browser-tab support in JDeveloper ADF applications

Within JDev 11g+, experienced ADF programmers will be familiar with PageFlowScope beans used by task flows, in particular Bounded Task Flows (BTFs), where they provide the equivalent of session scope for variables for the life of the BTF for a specific user session. Indeed the Oracle documentation says the following about PageFlowScope beans:
Choose this scope if you want the managed bean to be accessible across the activities within a task flow. A managed bean that has a pageFlow scope shares state with pages from the task flow that access it. A managed bean that has a pageFlow scope exists for the life span of the task flow
Source: Oracle Fusion Dev Guide 11.2.1 Section 18.2.4 What You May Need to Know About Memory Scope for Task Flows

Given we know BTFs have a distinct beginning and end for each user session, a "life span" as such, and conversely Unbounded Task Flows (UTFs) live for the life of the application which is nearly forever, it would appear that PageFlowScope beans only apply to BTFs. However PageFlowScope beans provide some magic sauce with UTFs that shouldn't be ignored. Before we can have a look at this magic sauce we need to cover some background on modern browsers.

Multi-tab browsers and the challenge for web applications

Readers will be familiar that over the last several years browsers have increased in sophistication, providing users with more and more features. One such feature is that of tabs, more commonly referred to as multi-tab browsing. Back in the dim dark ages of the Internet (circa 2005?) if users wanted to surf more than one website at a time they needed to open multiple instances of their browser. Typically each browser instance took out a single connection and server-side session (assuming a stateful application) with whichever server they were visiting. If the user had multiple instances open to the same website this resulted in the same number of connections and sessions.

At some point in time web browsers introduced support for tabs, allowing the user to surf multiple websites in separate tabs within the one browser instance. I'll take a guess and say that when introducing this feature the browser authors had a careful think about how users visit websites, and realized that users searching for information might spawn several tabs all visiting the same website, with each tab viewing a different page within that single website.

So browsers introduced a feature set to give users the ability to search for information even faster, yet the browser vendors also recognized an issue. Potentially users were now relatively free to spawn lots of tabs (and if you're like some users you keep on spawning tabs, never closing any, until you shut down the browser). If the old regime of a connection per tab was followed at the client (browser) side, and a session per connection at the server (website) side, computing resources would be strained.

How to fix? Simple really: on the browser side, regardless of the number of tabs open to a website, the browser shares the connections across those tabs. Result? An instant resource saving on the client. In turn, on the server side, as there isn't any default way to identify the different tabs when their separate requests hit the server through the same connection, the server too need only store 1 session.

Today from the users' perspective (at least the tech savvy users) multi-tab browsing is an expected feature, and one that when not available for whatever reason causes frustration with either the browser or the website being used. This implies any web application we build really needs to support (or at least not hinder) this functionality.

What's this got to do with ADF?

The question is probably rhetorical for readers at this point, but what's this got to do with ADF?

Traditionally JavaServer Faces (JSF) programmers and ADF programmers have been taught if you want to maintain state (variables) for the life of a user's session you put it in a SessionScope bean. Examples of such variables include the time the user first accessed the system, the customer ID of the current customer that the user is working with on the phone, or the items in a shopping cart. Definitely these should go in SessionScope.

Or should they?

With modern browsers what should happen with these variables when the user spawns two tabs to our application sharing the same connection/session? Looking at our previous examples it's easy to agree the time the user first accessed the system would be one and the same across both tabs. But what about our other examples?

For our customer ID example, reasonably in some applications you might want two different tabs to show two different pages for information on the same customer, so SessionScope seems reasonable enough. But alternatively imagine you're a call centre operator, taking a call from one customer while finishing recording data about the previous customer. It would be mighty handy to have two browser tabs to do this. As a result the separate tabs really require their own SessionScope customer ID. How to do that?

The question gets murkier when considering shopping carts. For an Amazon user having two separate carts on two separate browser tabs would seem undesirable. But let's take another call centre example where an operator is taking a phone call from a customer wanting to place two separate orders. Things get tricky if the customer is moving items between the orders, so different tabs supporting different orders could be desirable. Again the vanilla SessionScope bean won't solve this.

So we can see that sometimes, depending on the needs of the application, we need to provision session state for separate browser tabs. At the moment JSF 1.X/2.0 has no inbuilt solution for this (there's been much discussion about including a new ConversationScope bean in the past), but as you can probably guess a PageFlowScope bean at the UTF level in ADF solves the problem. For every browser tab that opens a page in the UTF referencing a managed PageFlowScope bean, a separate instance of the bean will be created.
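To make the call centre customer ID example concrete, such a bean can be very simple. The following is a minimal sketch only; the class and property names (CustomerContextBean, customerId) are my own invention, and the bean would be registered as a managed bean in the UTF's adfc-config.xml with a memory scope of pageFlow, then referenced from pages via an EL expression such as #{pageFlowScope.customerContext.customerId}:

import java.io.Serializable;

// Hypothetical pageFlowScope managed bean for the call centre scenario.
// Registered in adfc-config.xml with a memory scope of "pageFlow", each
// browser tab visiting the UTF receives its own instance of this bean.
public class CustomerContextBean implements Serializable {

    private Long customerId;

    public Long getCustomerId() {
        return customerId;
    }

    public void setCustomerId(Long customerId) {
        this.customerId = customerId;
    }
}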

The hint of the secret life of the PageFlowScope is revealed in Section 5.7 Passing Values Between Pages of the JDev Web Guide 11.1.2 where it states:
The ADF Faces pageFlowScope scope makes it easier to pass values from one page to another, thus enabling you to develop master-detail pages more easily. Values added to the pageFlowScope scope automatically continue to be available as the user navigates from one page to another, even if you use a redirect directive. But unlike session scope, these values are visible only in the current page flow or process. If the user opens a new window and starts navigating, that series of windows will have its own process. Values stored in each window remain independent.
Specifically note the last 2 sentences.
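For those who prefer code to prose, the difference between the two scopes can also be seen programmatically. The following is a minimal sketch (not taken from the guide) using the ADFContext API; the key name is an arbitrary example. An entry written to the session scope map is shared by every tab of the user's session, whereas an entry written to the pageFlowScope map from a page in the UTF belongs to the current window/process only:

import oracle.adf.share.ADFContext;

// Minimal sketch contrasting the two scopes; "customerId" is an arbitrary key.
public class ScopeContrastDemo {

    public void rememberCustomer(Long customerId) {
        ADFContext ctx = ADFContext.getCurrent();

        // Shared across all browser tabs of this user's session.
        ctx.getSessionScope().put("customerId", customerId);

        // Private to the current browser window/process when used from a UTF page.
        ctx.getPageFlowScope().put("customerId", customerId);
    }
}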

How does ADF technically solve identifying the separate tabs?

Earlier on we mentioned that from the server's point of view, because browser tabs share the same connection with the server, by default the server has no inherent mechanism to differentiate the separate browser tabs and therefore no mechanism to know when to spawn separate PageFlowScope beans. So how does ADF actually, technically, identify the separate tabs?

Imagine you've created an ADF application to lodge infringements. Some infringements take seconds to fill out and complete, others take time to gather the data, requiring multiple tabs to enter more than one infringement at a time.

For such an ADF application we could serve the application via a link on our portal such as:

http://www.wewantyourmoney.com/infringements/ticket

In this case ticket represents a JSPX file in our UTF of our application.

On accessing the URL for the first time, regular ADF users will know that in responding with the required page the server will also replace the URL with something like the following:

http://www.wewantyourmoney.com/infringements/ticket?_adf.ctrl-state=p1zuym5lv_3

This parameter and its value are also buried in the HTML source to be used on the next request. Also inside the page source is a form parameter Adf-Window-Id with a unique value given by the server.

When the page is submitted, the hidden _adf.ctrl-state and Adf-Window-Id parameters gained from the previous server response give ADF a mechanism for tracking the current session and current page instance.
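If you want to watch these values go by yourself, a simple diagnostic servlet filter is one way to do it. The filter below is purely a hypothetical observation aid (the name and log format are my own) and uses only the standard Servlet API:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Hypothetical diagnostic filter: logs the two tab-tracking values ADF sends
// back with each request so the separate browser tabs can be observed.
public class TabTrackingLogFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        // _adf.ctrl-state arrives as a URL or form parameter,
        // Adf-Window-Id as a form parameter on postback requests.
        System.out.println("_adf.ctrl-state=" + httpRequest.getParameter("_adf.ctrl-state")
                           + " Adf-Window-Id=" + httpRequest.getParameter("Adf-Window-Id"));
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}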

(Side note: Behind the scenes the server is smart enough to check the session parameters against the previous known connection/session to stop intruders impersonating another user's session ... you can test this by intercepting the next request before it goes out and changing the _adf.ctrl-state parameter before it hits the server. ADF will complain displaying the following error message "ADFC-12000: State ID in request is invalid for the current session.")

If the server receives another request for the same "naked" URL from the same connection/session, ADF simply assumes a new tab instance, spawns a separate PageFlowScope bean instance, and returns separate _adf.ctrl-state and Adf-Window-Id values in the response. Any subsequent requests to the server from separate multiple tabs for the same user connection are therefore easily separated and the correct PageFlowScope bean instance used.

Alternatively if the user tries to trick the server by copying the URL from another tab with the _adf.ctrl-state variable in the URL, but obviously missing a payload containing the _adf.ctrl-state form parameter and the Adf-Window-Id parameter, again ADF is smart enough to detect this as a new browser tab and spawns a new PageFlowScope bean. This piece of logic is important because it's possible for users to bookmark the current page's URL with the URL parameter, and attempt to come back to it after spawning a new tab. As such we wouldn't want ADF to be tricked by this simple common use case into thinking the user is in the same original tab when in fact they're launching the application from an old bookmark.

Finally it should be noted that if the user's session times out or they log out, the PageFlowScope bean will fall out of scope, and any reference to it thereafter will result in a new instance of the bean being created.

Demonstration

The following link provides a small demo application from JDev 11.1.2. To set this application up you need to:

a) unzip it and open it in JDev
b) in your integrated WLS server create a user account which you will use to log in to the application

The application is comprised of 4 pages:

a) an unauthenticated Splash page to start the application
b) 2 separate authenticated pages named FirstPage and SecondPage that will demonstrate the bean scopes during an authenticated session
c) an unauthenticated ExitPage which will be called when the user logs out of SecondPage

From here run the Splash.jsf page. Your browser will eventually open with the following URL:

http://localhost:7101/MultiBrowserTabExample/faces/Splash

Select the Go First Page button, at the login enter your credentials, upon which you'll see First Page:

The two fields have default values gathered from two separate beans, one a SessionScope bean and the other a PageFlowScope bean. Note in the JDev log window you'll see log entries from the constructors of both beans, implying they were just instantiated to show the values on the page, and thus why the default values are shown.
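For reference, neither bean needs to be anything special. The sketch below shows roughly what the PageFlowScope bean might look like (the class name, property and default value are assumptions rather than a copy of the demo's source, and the SessionScope bean would be an equivalent class registered in session scope); the constructor log line is what produces the entries seen in the JDev log window:

import java.io.Serializable;

// Sketch of the demo's PageFlowScope-registered bean; an equivalent class is
// registered in session scope. The constructor log line is what appears in
// the JDev log window each time a new instance is spawned.
public class PageFlowScopeBean implements Serializable {

    private String value = "default"; // assumed default value shown on the page

    public PageFlowScopeBean() {
        System.out.println("PageFlowScopeBean constructed");
    }

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }
}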

Selecting the Go Second Page button takes us to the Second Page where we have the option to change the values:

As an example let's change the SessionScope value to Alpha and the PageFlowScope value to Beta:

On returning to the first page through the associated button we see that the Alpha and Beta values have been successfully carried across requests.

Now open a new browser tab to the original Splash page using the following URL:

http://localhost:7101/MultiBrowserTabExample/faces/Splash

To show this diagrammatically, in the following set of pictures I've put the original tab on the left of the image and the new tab on the right, so both tabs can be shown at the same time. As such we now have:

In the second tab if we then select the Go First Page button we bypass the login screen automatically, as the user is already logged in, and we arrive on the First Page as follows:

In the JDev log window we see a new instance of the PageFlowScope bean has been created, but not the SessionScope bean. This backs up what is shown in the new second tab because we can see that the SessionScope value is shared across both tabs, but the PageFlowScope value isn't and we've reverted to the default value (provided through the new PageFlowScope instance).

From here in the second tab we can navigate to the Second Page and update the values of the SessionScope and PageFlowScope beans to Charlie and Delta respectively:

In the second tab if we then return to the FirstPage we see that it maintains the Charlie and Delta values:

...and more importantly in the first tab if we then navigate to the Second Page we see:

In this picture we can see that the SessionScope Charlie value has been shared across both tabs, but the first tab has retained its original Beta value for its PageFlowScope bean, showing that the PageFlowScopes are indeed separate across browser tabs.

To conclude the examples, if in the first tab we select the Logout button we automatically move to the ExitPage and we see:

This shows both the SessionScope and PageFlowScope beans resetting back to their original values. The JDev log window also logs the constructor calls of both beans, verifying that the old beans have been discarded and 2 new instances created.

Finally if we return to the second tab that is still sitting on the First Page, any action results in ADF throwing an exception "java.lang.IllegalStateException: No window for windowId:w2". To be honest I expected an error here because the user behind the scenes has logged out, but this error message looks less than useful. Presumably we need to take care of this specific error in some manner, probably with a task flow Exception handler, but that is beyond the scope of this post.

Not all things are born equal - final note on SessionScope vs PageFlowScope

Don't get confused over the use of SessionScope and PageFlowScope beans. Not everything you would have traditionally put in SessionScope now needs to go in a UTF PageFlowScope bean. How many frequent flyer miles the user has left, the user's preferences and their birthday are good candidates for SessionScope. But if across tabs you will allow the user to test separate values for projecting sales figures, a counter for the number of steps completed in submitting separate expense reports, or even the tiles placed in a game of tic tac toe, PageFlowScope will be more appropriate.

Tuesday, 2 August 2011

Task flows: Sayonara auto AM nesting in 11.1.2.0.0. Hello, ah, let's call it Bruce.

In my post from last week I documented the changing behaviour of task flows and Application Module nesting between the 11.1.2.0.0 and 11.1.1.X.0 series of ADF & JDeveloper. In that post I detected a distinct change in the underlying behaviour of how ADF works with ADF BC Application Modules under certain task flow options, and I was concerned this would destroy the scalability of our applications. To understand those concerns and the rest of this post, you need to read that post first to comprehend where I was coming from.

One of the fortunate things about being a part of the Oracle ACE Director program is that behind the scenes we and Oracle staff are often chatting and helping each other out, which I'm incredibly appreciative of. In this case I must raise my hat to Steven Davelaar and John Stegeman for their out-of-hours assistance in one form or another.

Regarding my last post, John made a reasonable point that I was drawing my conclusions far too early, and that in fact I needed to test the complete task flow transaction life cycle to see if the behaviour of the task flows had changed in 11.1.2.0.0. In particular John's good memory led him back to this OTN forum post by Steve Muench that stated:
In fact, in a future release we will likely be changing that implementation detail so that the AMs are always used from their own AM pool, however they will share a transaction/connection.
Steve's point from late 2010, and the one John was re-affirming, is that even though the underlying implementation may change, from the Bounded Task Flow (BTF) programmer's point of view everything can still work the same. And this is what I needed to check: looking at what AM methods are called is not enough, I needed to check the actual database connections and transaction behaviour.

From my post I was concerned that without auto AM nesting, a page comprised of several BTFs with separate AMs would spawn as many database connections, compromising the scalability of the application. From the logs I thought this was the case, as I could see two root AMs created and a separate (i.e. two) call to prepareSession() for each. My assumption was that this meant two connections were being raised with the database under 11.1.2.0.0, whereas under 11.1.1.X.0 it was only one database connection thanks to the auto AM nesting feature.

However a query on the v$session view in the database while running the 11.1.2.0.0 solution:

SELECT * FROM v$session WHERE username = 'HR';

...showed only 1 connection. So regardless of the fact that there are 2 root AMs instantiated under 11.1.2.0.0, they share a connection (and therefore a transaction too). In other words, while the end result is the same, the underlying implementation has changed.

I don't have a snazzy name for this new implementation vs the older auto AM nesting, so I figure we should call it Bruce to keep it simple (with apologies to Monty Python).

The only discrepancy between implementations we can see is that prepareSession() and similar AM methods dealing with the connection or transaction (e.g. afterConnect(), afterCommit()) are now called on the secondary AM, as it's treated as a root AM rather than a nested AM. This was not the behaviour under 11.1.1.X.0, as nested AMs delegate that responsibility back to the root AM. This in turn may cause you a minor hiccup if you've overridden these methods in a framework extension of the AppModuleImpl, as they'll now be called across all your AMs, including those which used to be auto nested.
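If you do have such a framework extension class, the overrides in question look something like the sketch below (the class name and comments are mine); under 11.1.2.0.0 expect this code to run once per root AM, i.e. once per BTF AM, rather than only on the single auto-nested root:

import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

// Sketch of a framework extension AM class. Under 11.1.1.X.0 with auto AM
// nesting these callbacks fired only on the single root AM; under 11.1.2.0.0
// every AM is its own root, so they fire for each of them.
public class FrameworkExtAppModuleImpl extends ApplicationModuleImpl {

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        // e.g. set per-connection database session state here
    }

    @Override
    protected void afterConnect() {
        super.afterConnect();
        // e.g. connection-level auditing or logging here
    }
}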

In returning to Steve Muench's point:
In fact, in a future release we will likely be changing that implementation detail so that the AMs are always used from their own AM pool, however they will share a transaction/connection.
Via http://localhost:7101/dms/Spy I've verified this is the case with Bruce: where under 11.1.1.X.0 there used to be a single AM pool, now under 11.1.2.0.0 there are 2 AM pools & 1 defined connection. The end effect, and my primary concern from the previous blog post, is now moot; the scalability of database connections is maintained. Bruce is a winner.

The interesting change under 11.1.2.0.0 is the 1 AM pool vs many AM pools. Ignoring the case where you create nested AMs under the 1 root AM at design time, traditionally with the runtime auto AM nesting feature you'd still have 1 root AM pool. Now for each root AM you'll end up with an associated AM pool. If your app is made up of hundreds of AMs that were nested and used the AM pool of their parent, you'll now end up with hundreds of AM pools. Whether this is a problem is hard to tell without load testing an application with this setup, but Steve Muench does comment in a follow up to the OTN forum post "The AM pool in an of itself is not additional overhead, no."

So potentially the midtier is less scalable as it needs to maintain and process more pools, though to what degree we don't know. Yet, more importantly, we now have a more flexible solution in terms of tuning the AM pools. Previously it was an all or nothing affair with 1 AM pool under the auto AM nesting 11.1.1.X.0 approach (again ignoring design time nesting of AMs under the root AM), whereas now with Bruce we have lots of fine-grained AM pools to tune. As such each pool can be tuned individually, where an AM pool for a little-used BTF can be set to consume fewer resources than an AM pool for a BTF that is hit frequently.

So, besides a couple of minor implementation changes (and if you find more please post a comment), it looks like Bruce is a winner.

Again thanks must go to John Stegeman for his assistance in working through these issues, and Steven Davelaar for his offer of support.

Friday, 29 July 2011

Task Flows: Sayonara automated nesting of Application Modules JDev 11.1.2.0.0?

-- Post edit --

Any readers of this post should also read the following follow-up post.

-- End post edit --

In a previous blog post I discussed the concept of automated nesting of Application Modules (AMs) when using Bounded Task Flows (BTFs) with a combination of the transactional options Always Begin New Transaction, Always Use Existing Transaction and Use Existing Transaction if Possible. The automated nesting of AMs is a very important feature, as when you have a page made up of disparate regions containing task flows, and those regions have their own AMs, without the auto-nesting feature you end up with the page creating as many connections as there are independent region-AMs. Thus your application is less scalable, and architecturally your application must be built in a different manner to avoid this issue in the first place.

This automated nesting of AMs is exhibited in the 11.1.1.X.0 series of JDeveloper and the ADF framework including JDev 11.1.1.4.0 & 11.1.1.5.0. Unfortunately, either by error or design this feature is gone in 11.1.2.0.0. Having checked the JDev 11.1.2.0.0 release notes and what's new notes, I can't see any mention of this change.

In turn I don't believe (please correct me if I'm wrong) there to be a section in the Fusion Guide that specifically talks about the interactions of the task flow transaction options and Application Module creation. The documentation talks about one or the other, not both in combination. This is somewhat of a frustrating documentation omission to me because it means Oracle can change the behaviour without being held accountable to any documentation stating how it was meant to work in the first place. All I have is a number of separate posts and discussions with Oracle Product Managers describing the behaviour which cannot be considered official.

In the rest of this post I'll demonstrate the changing behaviour between versions. If any readers find errors in the code, or even factual errors, it would be appreciated if you could follow up with a comment on this blog. I'm always wary of misleading others; I write my blog to inform and educate, not to lead people down the garden path.

4 test applications

For this post you can download a single zip containing 4 different test applications to demonstrate the changing behaviour:

a) ByeByeAutoAMNestingJSPX111140
b) ByeByeAutoAMNestingJSPX111150
c) ByeByeAutoAMNestingJSPX111200
d) ByeByeAutoAMNestingFacelets111200

Why so many versions? My current client site is using 11.1.1.4.0, not 11.1.1.5.0, so I wanted to check there was consistent behaviour in the pre-11.1.2.0.0 releases. For this blog post I'll talk about the 11.1.1.5.0 version, but exactly the same behaviour is demonstrated under 11.1.1.4.0.

In addition I know that in the 11.1.2.0.0 release, because of the support for both JSPX & Facelets, the controllers have different implementations, so it is necessary to see if the issue differs between the two VDL implementations.

Besides targeting the different versions of JDev, and under 11.1.2.0.0 the different VDLs, each application is constructed in exactly the same fashion, using a near-identical Model and ViewController setup. The following sections describe what has been set up in both these projects across all the example applications.

The Model project

Each application has exactly the same ADF BC setup connecting to Oracle's standard HR schema. The Model project in each application includes EOs and VOs that map to the employees and locations tables in the HR schema. The tables and the data they store are not consequential to this post; we simply need some Entity Objects (EOs) and View Objects (VOs) to expose through our Application Modules (AMs) to describe the AM nesting behaviour.

In the diagram above you can see the EOs and VOs. In addition I've defined 2 root level AMs EmployeesAppModule and LocationsAppModule. The EmployeesAppModule exposes the EmployeesView VO and the LocationsAppModule exposes the LocationsView.

To be clear, note I've defined these as separate root level AMs. So at the ADF BC level there is no nesting of the AMs defined. What we'll attempt to do is show the automatic nesting of AMs at the task flow level, or not, as the case might be.

In order to comprehend if the automated AM nesting is working at runtime, it's useful to add some logging to the ADF Business Components to show us such things as:

1) When our Application Modules are being created
2) If the AMs are created as root AMs or nested AMs

As such in each of the AMs we'll include the following logging code. The following example shows the EmployeesAppModuleImpl changes. Exactly the same would be written into the LocationsAppModuleImpl, with the exception of changing the log messages:
import oracle.adf.share.logging.ADFLogger;
import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

public class EmployeesAppModuleImpl extends ApplicationModuleImpl {

    // Other generated methods

    public static ADFLogger logger = ADFLogger.createADFLogger(EmployeesAppModuleImpl.class);

    @Override
    protected void create() {
        super.create();
        if (isRoot())
            logger.info("EmployeesAppModuleImpl created as ROOT AM");
        else
            logger.info("EmployeesAppModuleImpl created as NESTED AM under " +
                        this.getRootApplicationModule().getName());
    }

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        if (isRoot())
            logger.info("EmployeesAppModuleImpl prepareSession() called as ROOT AM");
        else
            logger.info("EmployeesAppModuleImpl prepareSession() called as NESTED AM under " +
                        this.getRootApplicationModule().getName());
    }
}
View Controller project

Each application has a near identical ViewController project with the same combination of task flows, pages & fragments. The only exception being the 11.1.2.0.0 applications, where the Facelets application doesn't use JSPX pages or fragments, but rather Facelets equivalents. This section describes the commonalities across all applications.

Each application essentially is made up of 3 parts:

a) A Start page
b) A Bounded Task Flow (BTF) named ParentTaskFlow comprised of a single page ParentPage
c) A BTF named ChildTaskFlow comprised of a single fragment ChildFragment

Start Page

1) The start page is designed to call the ParentTaskFlow through a task flow call.

ParentTaskFlow

1) The ParentTaskFlow is set to Always Begin New Transaction and Isolated data control scope.

2) The ParentTaskFlow page contains an af:table showing data from the EmployeesView of the EmployeesAppModuleDataControl.

3) The ParentTaskFlow page also contains a region that embeds the ChildTaskFlow

ChildTaskFlow

1) The ChildTaskFlow is set to Use Existing Transaction if Possible and Shared data control scope

2) The ChildFragment fragment contains an af:table showing data from the LocationsView of the LocationsAppModuleDataControl

The behaviour under 11.1.1.5.0

When we run our 11.1.1.5.0 application, and navigate from the Start Page to the ParentTaskFlow BTF showing the ParentPage, in the browser we see a page showing data from both the Employees VO and Locations VO. Of more interest this is what we see in the logs:
<EmployeesAppModuleImpl> <create> EmployeesAppModuleImpl created as ROOT AM
<EmployeesAppModuleImpl> <prepareSession> EmployeesAppModuleImpl prepareSession() called as ROOT AM
<EmployeesAppModuleImpl> <create> EmployeesAppModuleImpl created as NESTED AM under EmployeesAppModule
<LocationsAppModuleImpl> <create> LocationsAppModuleImpl created as NESTED AM under EmployeesAppModule
Based on my previous blog post on investigating and explaining the automated Application Module nesting feature in the 11.1.1.X.0 JDeveloper series, this is what I believe is occurring.

As the ParentTaskFlow is designed to start a new transaction, when the first binding in the page exercises the EmployeesAppModule via the associated Data Control and View Object embedded in the table, ADF instantiates the AM as the root AM and attaches it to the Data Control Frame.

The Data Control Frame exists for chained BTFs that are joining transactions. So in this example the Employees AM is the first AM to join the Data Control Frame and it becomes the root AM. A little oddly we see the EmployeesAppModuleImpl then created again and nested under a root instance of itself. I'm not really sure why this occurs, but it might just be some sort of algorithmic consistency required for the Data Control Frame. Maybe readers might have something to share on this point?

It's worth noting the significance of a root AM: unlike a nested AM, only the root AM connects to the database and manages the transaction through commits and rollbacks. Nested AMs delegate these responsibilities back to the root AM. This is why we can see the prepareSession() call for the EmployeesAppModule.
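One way to convince yourself of this delegation is to compare the transaction object an AM hands back with the one its root AM hands back; for a nested AM they are expected to be one and the same. The following is a diagnostic sketch only (the class and method names are mine), written in the same style as the AM classes shown earlier:

import oracle.adf.share.logging.ADFLogger;
import oracle.jbo.server.ApplicationModuleImpl;

// Diagnostic sketch only: a method like this, added to an ApplicationModuleImpl
// subclass, logs whether the AM exposes its own transaction or its root's.
public class TransactionOwnershipDemoAppModuleImpl extends ApplicationModuleImpl {

    private static final ADFLogger logger =
        ADFLogger.createADFLogger(TransactionOwnershipDemoAppModuleImpl.class);

    public void logTransactionOwnership() {
        // For a nested AM the transaction it returns is its root AM's transaction,
        // which is why commits, rollbacks and connections are handled by the root.
        boolean sharesRootTransaction =
            getTransaction() == getRootApplicationModule().getTransaction();
        logger.info(getName() + " isRoot=" + isRoot() +
                    " sharesRootTransaction=" + sharesRootTransaction);
    }
}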

When the page processing gets to the bindings associated with the ChildTaskFlow, within the fragment of the ChildTaskFlow it discovers the LocationsAppModule via the associated Data Control and View Object embedded in the table. Now we must remember that the ChildTaskFlow has the Use Existing Transaction if Possible and Shared data control scope options set. From the Fusion Guide this transaction option says:

"Use Existing Transaction if possible - When called, the bounded task flow either participates in an existing transaction if one exists, or starts a new transaction upon entry of the bounded task flow if one doesn't exist."

In order for the BTF to be part of the same transaction, it must use the same database connection too. As such, regardless of the fact that in ADF Business Components we defined the two separate Application Modules as root AMs (which by definition implies they have separate transactions & database connections), it's expected that the task flow transaction option overrides this and forces the second AM to nest under the first AM, as it wants to "participate in the existing transaction if it exists."

So to be clear, this is the exact behaviour we see in the logs of our 11.1.1.X.0 JDeveloper series of applications. The end result is our application takes 1 connection out with the database rather than 2.

The behaviour under 11.1.2.0.0

Under JDeveloper 11.1.2.0.0, regardless of whether we run the JSPX or Facelets application, with exactly the same combination of task flow elements and transaction options, this is what we see in the log:
<EmployeesAppModuleImpl> <create> EmployeesAppModuleImpl created as ROOT AM
<EmployeesAppModuleImpl> <prepareSession> EmployeesAppModuleImpl prepareSession() called as ROOT AM
<LocationsAppModuleImpl> <create> LocationsAppModuleImpl created as ROOT AM
<LocationsAppModuleImpl> <prepareSession> LocationsAppModuleImpl prepareSession() called as ROOT AM
As such, regardless of the fact that the Oracle task flow documentation for the latest release says the second task flow with the Use Existing Transaction if Possible option should join the transaction of the calling BTF, it doesn't. As we can see from the logs, both AMs are now treated as root and are preparing their own session/connection with the database.

The effect of this is our application now uses 2 connections rather than 1, and in turn the BTF transaction options don't appear to be working as prescribed.

Is it just a case of philosophy?

Maybe this is just a case of philosophy? Prior to JDev 11.1.2.0.0 the ADFc controller (which implements the task flows) was the winner in how the underlying ADF BC Application Modules were created and nested. Maybe in 11.1.2.0.0 Oracle has decided that no, in fact the ADFm model layer should control its own destiny?

Who knows – I don't see any documentation in the release notes or what's new notes to tell me this has changed. The task flow transaction option documentation is all I have. As such if, like us, you've relied on this feature and built your architecture on it, you're now in a position where you can't upgrade your application to 11.1.2.0.0 without major rework.

To get clarification from Oracle I'll lodge an SR and will keep the blog up to date on any information discovered.

-- Post edit --

Any readers of this post should also read the following follow-up post.

-- End post edit --

Tuesday, 17 May 2011

JDev 11g, Task Flows & ADF BC – one root Application Module to rule them all?

JDev 11.1.1.5.0

In my previous blog post I discussed the power of the ADF task flow functionality, and the devil in the detail for uninitiated developers using the transaction and data control scope options. This post will extend the discussion on task flows and the ADF Controller's interaction with ADF Business Components, in order to show another scenario where programmers must understand the underlying behaviour of the framework.

Developers who have worked with the ADF framework for some time, especially from the JDeveloper 10g edition and earlier, will likely have stumbled across the concepts of root and nested Application Modules (AM) in the ADF Business Component (ADF BC) layer. The Fusion Guide has the following to say on the two types of AMs:
Application modules support the ability to create software components that mimic the modularity of your use cases, for which your higher-level functions might reuse a "subfunction" that is common to several business work flows. You can implement this modularity by defining composite application modules that you assemble using instances of other application modules. This task is referred to as "application module nesting". That is, an application module can contain (logically) one or more other application modules, as well as view objects. The outermost containing application module is referred to as the "root application module".
At runtime, your application works with a "main" — or what's known as a "root" — application module. Any application module can be used as a root application module; however, in practice the application modules that are used as root application modules are the ones that map to more complex end-user use cases, assuming you're not just building a straightforward CRUD application. When a root application module contains other nested application modules, they all participate in the root application module's transaction and share the same database connection and a single set of entity caches. This sharing is handled for you automatically by the root application module and its "Transaction" object.
The inference is that if for a single user you want to support more than one transaction at a time, you must create 2 or more root AMs. However the implication of this is that if you do so, as a transaction and database connection have a 1-to-1 relationship, a user exercising more than one root AM in the ADF BC layer during their session will take out the same number of connections with the database. This further affects scalability, as the more connections a user takes out from the database, the more connections in our midtier pool will be tied up by each user too.

Task Flow transaction options

The transaction and data control scope behavioural options available to bounded task flows provide a sophisticated set of functionality for spawning and managing one or more transactions during an ADF user's session, an extension of the facilities provided by the ADF BC Application Module. Straight from the Fusion Developer's Guide the task flow transaction options are:

• <No Controller Transaction>: The called bounded task flow does not participate in any transaction management.

• Always Use Existing Transaction: When called, the bounded task flow participates in an existing transaction already in progress.

• Use Existing Transaction If Possible: When called, the bounded task flow either participates in an existing transaction if one exists, or starts a new transaction upon entry of the bounded task flow if one doesn't exist.

• Always Begin New Transaction: A new transaction starts when the bounded task flow is entered, regardless of whether or not a transaction is in progress. The new transaction completes when the bounded task flow exits.

Ignoring the "No Controller Transaction" option, which defaults back to letting the ADF BC layer manage its own transactions, the other task flow transaction options allow the developer to create and reuse transactions at a higher level of abstraction for relating web pages, page fragments and other task flow activities. As such the root AM doesn't constrain when the application establishes a new transaction and connection to the database.

Yet if we now have the option of spawning more transactions in our application, what's the implication for scalability and the number of connections taken out in the midtier and database for each user? Thanks to a previous OTN forum post and the kind assistance of Steve Muench, this post will demonstrate how the ADF framework as a whole attempts to minimize this issue.

The "No Controller Transaction" option

Before investigating how the ADF framework addresses the scalability issue, it's useful to run through an example of using the "No Controller Transaction" bounded task flow option. This will demonstrate how the framework establishes database connections when defaulting back to the ADF BC layer's own transaction and database connection functionality.

As explained by Frank Nimphius in the following OTN Forum post, the No Controller Transaction option implies that task flows take no control of the transactions established during the user's session when the task flow is called; all such functionality is delegated back to the underlying business services. In the context of this blog post the business service layer is ADF Business Components.

As seen in the following picture, overall the ADF BC layer for our example application is a fairly simple one:


As can be seen the application is comprised of two ADF BC Application Modules (AM), named Root1AppModule & Root2AppModule. Both expose the same View Object (VO) OrganisationsView as two separate usages. In the following picture you see Root1AppModule exposes OrganisationsView as OrganisationsView1:


...and in the following picture Root2AppModule exposes OrganisationsView2 off the same OrganisationsView VO:


For what it's worth to the discussion, OrganisationsView is based on the same Organisations EO:


However as there are 2 root AMs at play in our example application, each AM will instantiate its own OrganisationsView (namely OrganisationsView1 and OrganisationsView2 respectively) and its own Organisations EO cache, implying the record sets in the midtier are distinctly different even though we're using the same underlying design time constructs.

As explained in the previous blog post, how do we know when an AM actually creates a connection? Without knowing this, in our trials with the transaction options supported by Bounded Task Flows, unless ADFc explicitly throws an error we'll have trouble discerning what the ADF BC layer is actually doing underneath the task flow transaction options.

While external tools like Fusion Middleware Control will give you a good insight into this, the easiest mechanism is to extend each root Application Module's ApplicationModuleImpl class with our own AppModuleImpl and override the create() and prepareSession() methods. The following code shows an example for the Root1AppModuleImpl:
import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

public class Root1AppModuleImpl extends ApplicationModuleImpl {

    // Other generated methods

    @Override
    protected void create() {
        super.create();
        if (isRoot())
            System.out.println("########Root1AppModuleImpl.create() called. AM isRoot() = true");
        else
            System.out.println("########Root1AppModuleImpl.create() called. AM isRoot() = false");
    }

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        if (isRoot())
            System.out.println("########Root1AppModuleImpl.prepareSession() called. AM isRoot() = true");
        else
            System.out.println("########Root1AppModuleImpl.prepareSession() called. AM isRoot() = false");
    }
}
Pretty much the code for the Root2AppModuleImpl is the same, except the class name changes in the System.out.println calls.

Overriding the create() method allows us to see when the Application Module is not just instantiated, but ready to be used. This doesn't tell us when a transaction and connection is established with the database, but it is useful in identifying situations where the framework creates a root or nested AM.

The prepareSession() method is a chokepoint method the framework uses to set database session state when a connection is established with the database. As such overriding this method allows us to see when the AM does establish a new connection and transaction.
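As an aside, a typical (hypothetical) use of prepareSession() is to set per-connection database session state as soon as the connection is prepared. The sketch below tags the connection via DBMS_APPLICATION_INFO so its v$session row is easy to spot; the class name and module string are assumptions, and it's shown only to illustrate why the framework funnels connection setup through this method:

import java.sql.PreparedStatement;
import java.sql.SQLException;
import oracle.jbo.JboException;
import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

// Hypothetical example of the kind of work prepareSession() is intended for:
// setting database session state when the AM's connection is established.
public class SessionTaggingAppModuleImpl extends ApplicationModuleImpl {

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        PreparedStatement stmt = null;
        try {
            stmt = getDBTransaction().createPreparedStatement(
                "BEGIN DBMS_APPLICATION_INFO.SET_MODULE(?, NULL); END;", 0);
            stmt.setString(1, "TaskFlowDemo"); // assumed module name
            stmt.execute();
        } catch (SQLException e) {
            throw new JboException("Failed to set session state: " + e.getMessage());
        } finally {
            if (stmt != null) {
                try {
                    stmt.close();
                } catch (SQLException e) {
                    // ignore close failures in this sketch
                }
            }
        }
    }
}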

Once we've set up our ADF BC Model project, we'll now create two Bounded Task Flows (BTFs) to allow the user to work with the underlying AMs and VOs.

Both our task flows are comprised of only 2 activities: a page to view the underlying Organisations data and an Exit Task Flow Activity. Root1AppModuleTaskFlow contains the following activities:


...and Root2AppModuleTaskFlow is virtually identical:


The OrganisationsTable1.jspx and OrganisationsTable2.jspx pages in each task flow show an editable table of Organisations data; visually there's no difference between them:


While the pages don't differ visually, underneath their pageDef files are completely different as they source their data from different root Application Modules and View Objects. In the following picture we can see the OrganisationsTable1PageDef.xml file makes use of the ADF BC OrganisationsView1 VO mapping to the Root1AppModuleDataControl:


And in the following picture we can see the OrganisationsTable2PageDef.xml file uses the ADF BC OrganisationsView2 VO mapping to the Root2AppModuleDataControl:


Finally we can see the No Controller Transaction option set for the first task flow:


..and the second:


Note I've deliberately set the data control scope to Shared to highlight a point in the future.

At this point we'll include a Start.jspx page in our Unbounded Task Flow (UTF) adfc-config.xml file, with navigation rules to call each BTF, and navigation rules to return when the Exit Task Flow Return activity is called inside each BTF:


On running our application starting with the Start.jspx page we see:


At this point inspecting the console output for the application in the JDeveloper log window, we don't see the System.out.println messages:


Returning to the Start page and selecting the Go Root1AppModuleTaskFlow button we then see:


Note in the above picture I've deliberately selected the 5th record and changed the Organisation's name to uppercase. Checking the log window we now see:


As such we can see that the Root1AppModule in the ADF BC layer has been instantiated, and has established a connection with the database, also establishing a transaction.

If we now return via the Exit button to the Start.jspx page in the UTF, then select the Go Root2AppModuleTaskFlow button we see:


Note that the 5th record hasn't been updated to capitals, and indeed the 1st record is the default selected record, not the 5th, implying that we're not seeing the same cache of data from the midtier, essentially 2 separate transactions with 2 separate Organisations EO caches, and in turn two separate VOs with their own current row record indicators. This last point shows that the Shared data control scope we set in the task flows has no use when the No Controller Transaction option is used.

Returning to the JDev log window we see that the second Root2AppModule was instantiated and has separately established its own connection and therefore transaction:


For completeness if we select the Exit button and return to the Start.jspx page, then return to the first task flow we can see the 5th record is still selected and updated:


The key point from this example is the fact that each Application Module is instantiated as a root AM and also creates its own connection.

A chained "No Controller Transaction" task flow example

With the previous example in hand, and building up to our final example, in this section I'd like to rearrange our example application such that rather than calling the two task flows separately, we'll instead chain them together, the 1st calling the 2nd.

The substantial change to the setup is that of the Root1AppModuleTaskFlow, where we now include a Task Flow Call to the Root2AppModuleTaskFlow, and a small modification to the OrganisationsTable1.jspx page to include a button to navigate to the Root2AppModuleTaskFlow:


In running the application, we first land in the Start.jspx page:


At this stage the log window shows no root AMs have been created:


On selecting the Go Root1AppModuleTaskFlow button we arrive at the OrganisationsTable1.jspx page:


The log window shows the Root1AppModule has been instantiated and a new connection/transaction created:


This time we'll select and change the 6th record to all caps:


Now we'll select the Go Root2AppModuleTaskFlow button that's located in the OrganisationsTable1.jspx page. On navigating to the new page we see:


As can be seen, the current row indicator lies on the first row and the 6th row hasn't been modified, exactly the same as the last example. In addition in the JDev log window we can see the establishment of the new root AM and connection/transaction:


The conclusion at this point of the blog post is that chaining task flows when the No Controller Transaction option is used has no effect on the root AMs and the establishment of connections/transactions.

Always Begin New Transaction and Always Use Existing transaction example

With the previous 2 examples, we can see that with the No Controller Transaction option in use for our bounded task flows, the design of our root Application Modules definitely has an effect on the number of transactions and therefore connections taken out from the database.

With the following third and final example, we'll introduce BTF transaction options that hand control of the transactional behaviour of the underlying ADF BC business services to the task flows themselves.

Taking our original Root1AppModuleTaskFlow, we'll now change its transaction options to use Always Begin New Transaction and an Isolated data control scope:


For our second task flow Root2AppModuleTaskFlow, which we intend to call from Root1AppModuleTaskFlow, we'll change its transaction options to use Always Use Existing Transaction. The IDE in this case enforces a Shared data control scope:


From the task flow transaction options described at the beginning of this post via the Fusion Dev Guide, the options we've selected here should mean the Root1AppModuleTaskFlow establishes the transaction and therefore connection, and the second Root2AppModuleTaskFlow should borrow/attach itself to the same transaction and connection. However from our previous examples with the No Controller Transaction option, surely the root Application Modules will enforce they both create their own transaction/connections?

Running this application we first hit our Start.jspx page:


And our log window shows no AMs have been created or established connections at this stage:


On navigating to Root1AppModuleTaskFlow via the associated button in the Start.jspx page, we see something very similar to previously:


Yet our log window shows something interesting:


We can see that the Root1AppModule has been created as a root AM, and has then established a connection via the prepareSession() method. Oddly however we can see a further Root1AppModule has been created? Even more oddly this secondary AM is not a root AM, but a nested AM? By deduction, as there are no other log entries, this second instance of Root1AppModule must be nested under the root Root1AppModule? Interesting, but let's continue with the example.

Now that we've entered the Root1AppModuleTaskFlow, let's modify the 7th record:


Followed by selecting the button to navigate to the Root2AppModuleTaskFlow, we then see on visiting the OrganisationsTable2.jspx page of the second task flow:


Note the 7th record's data, and where the current row indicator is located! Before drawing conclusions, let's look at the log window's output:


From the log window we can see a 3rd AM has been instantiated, this time a Root2AppModule as a nested AM.

What's going on?

Without a key bit of information from Steve Muench's reply to my previous OTN Forums post, it may be hard to determine the BTF behaviour in the last scenario. In the context of ADF BC and BTFs, given the right combination of transaction and data control scope options, the framework will automatically nest your AMs regardless of whether they're defined as root AMs.

So when we're running our application, on calling the first task flow, we see:

a) A single root AM created based on Root1AppModule, as the framework needs at least a single root AM to drive the transactions and connections. Let's refer to this as Root1AppModule1.

b) A second instance of Root1AppModule as a nested AM under its own root instance. Let's refer to this as Root1AppModule2. This is a little odd, but as the framework automatically nests the AMs of the called BTF under a root AM instance, it's using the first AM encountered for dual purposes, essentially nesting Root1AppModule2 under Root1AppModule1.

c) By the time we hit our second BTF, only one instance of Root2AppModule is created, nested under Root1AppModule1. Let's refer to this as Root2AppModule1.

To summarize at this point, our AM structure at runtime is:

(Root) Root1AppModule1
(Nested Level 1) Root1AppModule2
(Nested Level 1) Root2AppModule1
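A quick (hypothetical) way to verify the structure above at runtime is to ask the root AM which AMs sit nested beneath it. Something along the lines of the following sketch, added to any of the AM impl classes (the class and method names are mine), would print the nesting for you:

import oracle.jbo.ApplicationModule;
import oracle.jbo.server.ApplicationModuleImpl;

// Diagnostic sketch: walks up to the root AM and prints the names of the
// application modules nested directly beneath it.
public class NestingReportAppModuleImpl extends ApplicationModuleImpl {

    public void logNestedStructure() {
        ApplicationModule root = getRootApplicationModule();
        System.out.println("Root AM: " + root.getName());
        for (String nestedName : root.getApplicationModuleNames()) {
            System.out.println("  Nested AM: " + nestedName);
        }
    }
}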

Given that we now know nesting is occurring, it does explain why we're seeing the changes to the 7th record in both BTF OrganisationsTable pages. Only if separate root AMs were used would there be separate EO caches for the Organisations table. However as the BTFs have nested the AMs, they're essentially sharing the same EO cache. So a change to data in one BTF will be reflected in the second, as long as they share transactions and are ultimately based on the same EO.

The final mystery is why, if the data changes are reflected in each BTF, the current row indicators of the underlying VOs are out of sync? This is simply explained by the fact that the two different VOs defined in each AM, namely OrganisationsView1 & OrganisationsView2, are essentially instantiated separately at runtime, even though they ultimately exist under the same Root1AppModule1 and share the same EO cache. As such both VOs maintain their own separate row currency, and other state carried by ADF BC View Objects.

Changing AM connection configurations

At this point we may start to ask how the framework is deciding to nest the AMs under the one parent AM. Is the underlying framework being smart enough to look at the database connections for the separately defined root AMs, and on determining they're one and the same, nesting the AMs because it doesn't matter when they both connect to the same database?

From my testing it appears to make no difference if you change the connections. It's essentially the first AM whose connection details are used, and it's just assumed the second AM uses the same connection details. The further inference of this is it's just assumed that all the database objects required by the second root AM database connection are available to the first.

This has two knock on effects:

1) If both AMs do connect to the same database schema, but the first connection is missing privileges on objects required by the second AM's objects, you'll hit database privilege errors at runtime when those second AM objects interact with the database (e.g. at query and DML time).

2) If you actually use your root AMs to connect to entirely different data sources, such as different databases, this automatic nesting will cause your application to fail.

The first issue is easily solved with some database privilege fixes. For the second, this implies you should move back to the No Controller Transaction option to return to an application where the root AMs use their own connections.

Some caveats

If the automatic nesting functionality proves useful to you, be warned of one caveat. From the OTN Forum replies from Steve Muench, even though he describes the fact the framework will automatically nest your AMs for the purposes of connections/transactions, don't assume this means other AM features are joined or will always display the same behaviour. To quote:
In fact, in a future release we will likely be changing that implementation detail so that the AMs are always used from their own AM pool, however they will share a transaction/connection.
A second caveat is around the mixing of BTF transaction options. Notably the No Controller Transaction option, because of its inherently different use of root AMs compared to the other transaction options, implies that mixing No Controller Transaction BTFs with BTFs not using this option could lead to some disturbing and unpredictable behaviour around transactions and connections. Either use No Controller Transaction exclusively, or not at all.

A third and final caveat is that this and the previous post describe the transaction behaviours in the context of the interaction with ADF Business Components. Readers should not assume that the same transaction behaviour will be exhibited by different underlying business services such as EJBs, POJOs or Web Services. As an example, Web Services don't have the concept of transactions, so we can probably guess there's no point using anything but the No Controller Transaction option... however again you need to experiment with these alternatives yourself; don't base your conclusions on this post.

The implication for ADF Libraries

In Andrejus Baranovski's blog post he shows a technique for mashing the Model projects of several ADF Libraries containing BTFs and their own AMs into a single Model project, such that a single root AM is defined across all BTFs at design time, to be reused by all ADF Libraries. With this blog post we see that while Andrejus's technique is fine for those relying on the No Controller Transaction option of BTFs, such an extended build of ADF Libraries and their AM Model projects is not essential to minimize the number of connections of our application.

Feedback

This blog post and the previous one have taken a lot of research and testing to arrive at their conclusions. If future readers find anything wrong, or find alternative scenarios where what's written here doesn't pan out, it would be greatly appreciated if you could leave a comment to that effect, such that this post doesn't mislead future readers.

JDev 11g, Task Flows & ADF BC – the Always use Existing Transaction option – it's not what it seems

JDev 11.1.1.5.0

Oracle's JDeveloper 11g introduces the powerful concept of task flows to the Application Development Framework (ADF). Task flows enable "Service Oriented Development" (akin to "Service Oriented Architecture") allowing developers to align web application development closely to the concept of business processes, rather than a disparate set of web pages strung loosely together by URLs.

Yet as the old saying goes, "with great power comes great responsibility", or alternatively, "the devil is in the detail". Developers need to have a good grasp of the task flow capabilities and options in order to not paint themselves into a corner. This is particularly true of the transaction and data control scope behavioural options provided by "bounded" task flows.

The transaction and data control scope behavioural options available to bounded task flows provide a sophisticated set of functionality for spawning and managing one or more transactions during an ADF user's session. Straight from the Fusion Developer's Guide the transaction options are:

• <No Controller Transaction>: The called bounded task flow does not participate in any transaction management.

• Always Use Existing Transaction: When called, the bounded task flow participates in an existing transaction already in progress.

• Use Existing Transaction If Possible: When called, the bounded task flow either participates in an existing transaction if one exists, or starts a new transaction upon entry of the bounded task flow if one doesn't exist.

• Always Begin New Transaction: A new transaction starts when the bounded task flow is entered, regardless of whether or not a transaction is in progress. The new transaction completes when the bounded task flow exits.

In recently discussing the task flow transaction options on the OTN Forums (with the kind assistance of Frank Nimphius) it's become apparent that the transaction options described in the Fusion Guide are written from the limited perspective of the ADF controller (ADFc). Why a limited perspective? Because the documentation doesn't consider how these transaction options are dealt with by the underlying business services layer – the controller makes no assumptions about the underlying layers; it is deliberately an abstraction that sits on top. As such, if we consider ADF Business Components (ADF BC), ADFc can interpret the task flow transaction options as it sees fit. The inference being, ADF BC can introduce subtle nuances in how the transaction options work as called by the controller.

The vanilla "Always Use Existing Transaction" option

The Fusion Guide is clear in the use of the task flow "Always Use Existing Transaction" option:

• Always Use Existing Transaction: When called, the bounded task flow participates in an existing transaction already in progress.

The inference here is that the task flow won't create its own transaction, but rather will attach itself to an existing transaction established by its calling task flow (let's refer to this as the "parent" task flow), or a "grandparent" task flow somewhere up the task flow call stack.

To test this let's demonstrate how ADFc enforces this option.

In our first example application we have an extremely simple ADF BC model of a single Entity Object (EO), single View Object (VO) and Application Module (AM), serving data from a table of Organisations in my local database:



From the ViewController side we have a single Bounded Task Flow (BTF) OrgTaskFlow1 comprised of a single page:


....where the single page displays a table of Organisations via the underlying ADF Business Components:


...and the transaction options of the BTF are set to Always Use Existing Transaction. By default the framework enforces that the data control scope must be Shared:


In order to call the BTF, from our Unbounded Task Flow (UTF) configured in the adfc-config.xml file, we have a simple Start.jspx page, which via a button invokes a Task Flow Call to the BTF OrgTaskFlow1:


On starting the application, running the Start page and selecting the button to navigate to the Task Flow Call, we immediately hit the following error:
oracle.adf.controller.activity.ActivityLogicException: ADFC-00006: Existing transaction is required when calling task flow '/WEB-INF/OrgTaskFlow1.xml#OrgTaskFlow1'.

Via this error we can see ADFc enforcing at runtime that the OrgTaskFlow1 BTF is unable to run, as it requires its parent or grandparent task flow to have established a transaction on its behalf. With this enforcement we could (incorrectly?) conclude that Oracle's controller will never allow the BTF to run if a transaction hasn't already been established. However, as you can probably guess, this post will demonstrate this isn't always the case.

A side note on transactions

Before showing how to create a transaction with the Always Use Existing Transaction option, a discussion on how we can identify transactions created via ADF BC is required.

Readers familiar with ADF Business Components will know that root Application Modules (AMs) are responsible for establishing connections and transactional processing with the database. Ultimately the concept of transactions in the context of the ADF Controller is that of the underlying business services, and by inference, when ADF Business Components are used it's the root Application Modules that provide this functionality.

It should also be noted that, by inference, the concept of a transaction and a connection are one and the same, in the sense that a connection with the database is what supports a transaction, and if you have multiple transactions you therefore have multiple connections. Put simply, you can't have one without the other.
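As a quick aside, and purely as a minimal sketch of my own rather than anything from the demonstration application, the ADF BC API lets you ask a root AM's transaction whether a live database connection has actually been established yet; the helper class below is hypothetical and simply illustrates the tie between the two:

import oracle.jbo.ApplicationModule;
import oracle.jbo.Transaction;

public class ConnectionCheck {

    // Hedged sketch: given any AM instance (root or nested), walk up to the root AM
    // and ask its transaction whether a JDBC connection has actually been established.
    public static boolean hasLiveConnection(ApplicationModule am) {
        ApplicationModule root = am.getRootApplicationModule();
        Transaction txn = root.getTransaction();
        // isConnected() only returns true once a database connection exists
        return txn.isConnected();
    }
}

Checking on demand like this is one thing, but for the demonstrations that follow what we really want is to see the moment a connection is created as we navigate our task flows, which is where the overridden AM methods below come in.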

Yet given it's the Application Module that creates connections and transactions, how do we know when an AM actually establishes a connection? Without knowing this, in our trials with the transaction options supported by Bounded Task Flows, unless ADFc explicitly throws an error we'll have trouble discerning what the ADF BC layer is actually doing underneath the task flow transaction options.

While external tools like Fusion Middleware Control will give you good insight into this, the easiest mechanism is to extend the framework's ApplicationModuleImpl class with our own AppModuleImpl and override the create() and prepareSession() methods:
import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

public class AppModuleImpl extends ApplicationModuleImpl {

    // Other generated methods

    @Override
    protected void create() {
        super.create();
        if (isRoot())
            System.out.println("######## AppModuleImpl.create() called. AM isRoot() = true");
        else
            System.out.println("######## AppModuleImpl.create() called. AM isRoot() = false");
    }

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        if (isRoot())
            System.out.println("######## AppModuleImpl.prepareSession() called. AM isRoot() = true");
        else
            System.out.println("######## AppModuleImpl.prepareSession() called. AM isRoot() = false");
    }
}
Overriding the create() method allows us to see when the Application Module is not just instantiated, but ready to be used. This doesn't tell us when a transaction and connection are established with the database, but it is useful in identifying situations where the framework creates a nested AM (relevant to another discussion about task flows; stay tuned for a later blog post).

The prepareSession() method is a chokepoint method the framework uses to set database session state when a connection is established with the database. As such overriding this method allows us to see when the AM does establish a new connection and transaction.
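To take this one step further, and again purely as a hedged sketch of my own (it assumes an Oracle database since it relies on SYS_CONTEXT, and the class name is made up for illustration), prepareSession() is also a convenient place to log which database session the new connection maps to, making separate connections easier to tell apart in the JDev log window:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

public class SidLoggingAppModuleImpl extends ApplicationModuleImpl {

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        if (isRoot()) {
            try {
                // createStatement(0) hands us a raw JDBC statement on the AM's own connection
                Statement stmt = getDBTransaction().createStatement(0);
                try {
                    ResultSet rs = stmt.executeQuery(
                        "SELECT SYS_CONTEXT('USERENV','SID') FROM dual");
                    if (rs.next()) {
                        System.out.println("######## Root AM connected. DB session SID = " + rs.getString(1));
                    }
                    rs.close();
                } finally {
                    stmt.close();
                }
            } catch (SQLException e) {
                // fail loudly; this is diagnostic code only
                throw new RuntimeException(e);
            }
        }
    }
}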

Bending the "Always Use Existing Transaction" option to create a transaction

Now that we have a mechanism for seeing when transactions are established, let's show a scenario where the Always Use Existing Transaction option does create a new transaction.

In our previous example our Unbounded Task Flow called our OrgTaskFlow1 Bounded Task Flow directly. This time let's introduce an intermediate Bounded Task Flow called the PregnantTaskFlow. As such our UTF Start page now calls the PregnantTaskFlow:


The PregnantTaskFlow will set its transaction option to Always Begin New Transaction and an Isolated data control scope:


By doing this we are setting up a scenario where the parent task flow will establish a transaction, which will be used by the OrgTaskFlow1 later on. Next within the PregnantTaskFlow we include a single page to land on called Pregnant.jspx, which includes a simple button to then navigate to the OrgTaskFlow1 task flow via a Task Flow Call in the PregnantTaskFlow itself:


The Pregnant.jspx page is only necessary as it gives a useful landing page when the task flow is called, to see what the task flow has done with transactions before we call the OrgTaskFlow1 BTF.

The transaction options of the OrgTaskFlow1 remain the same, Always Use Existing Transaction and a Shared data control scope:


With the moving parts of our application established, if we now run our application starting with the Start page:


...clicking on the button we arrive on the Pregnant.jspx page within the PregnantTaskFlow BTF:


(Oops, looks like this picture has been lost... I'll attempt to restore this picture soon)

Remembering that our PregnantTaskFlow is responsible for establishing the transaction, we should therefore see our Application Module's create() and prepareSession() methods write out their System.out.println messages to the console in the JDev log window:


Hmmm, interesting: the log window is bare, no sign of our messages? So our PregnantTaskFlow was set to create a new transaction, but no such transaction, nor connection with the database for that matter, was established?

Here's the interesting point of our demonstration. If we then select the button on the Pregnant.jspx page, which navigates to the OrgTaskFlow1 task flow call activity in the PregnantTaskFlow, we firstly see our OrgList.jspx page in the browser:


From our previous test at the beginning of this post we might have expected the ADFC-00006 error "Existing transaction is required", but instead the page has rendered?

In addition if we look at our log window:


...we now see our System.out.println messages in the console, showing that the AM create() method was called and that a new connection was established with the database, evidenced by the prepareSession() method being called too.

(Why are there 2 calls to create() for AppModuleImpl? The following blog post on root AM interaction with task flows explains all.)

The contradictory result here is that even though we set the Always Use Existing Transaction option for the OrgTaskFlow1 BTF, and expected the ADFC-00006 error, OrgTaskFlow1 did in fact establish a new transaction.

What's going on?

An easy but incorrect conclusion to make is that this is an ADF bug. However, if you think through how the ADF framework works with bindings to the underlying services layer, in our context ADF BC, this actually makes sense.

From the point of view of a task flow, there is no inherent, directly configured relationship between the task flow and the business services layer/ADF BC. As an example, there is no option in the task flow properties to say which Data Control, mapping to an ADF BC Application Module, the task flow will use. The only point in the framework where the ADF view and controller layers touch the ADF BC side is through the pageDef binding files, which are used by distinct task flow activities (including pages and page fragments) within the task flow as we navigate through it (i.e. not by the task flow itself). As such, until the task flow hits an activity whose bindings indirectly call the ADF BC Application Module via a Data Control, the task flow has no way of actually establishing the transaction, as the sketch below illustrates.
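To make this concrete, the following is a rough sketch of my own (not code from the demo application) of the kind of lookup a pageDef's bindings effectively perform on our behalf; the data control name "AppModuleDataControl" is simply JDeveloper's default generated name and is assumed here. The point is that it's this resolution through the binding layer, not the task flow definition itself, that instantiates the root AM and with it the transaction:

import oracle.adf.model.BindingContext;
import oracle.adf.model.binding.DCDataControl;
import oracle.jbo.ApplicationModule;

public class DataControlProbe {

    // Hedged sketch: resolving the data control by name is roughly what a pageDef's
    // iterator bindings do on our behalf. Until some activity's pageDef triggers this,
    // the task flow has no Application Module and therefore no transaction.
    public static ApplicationModule resolveRootAM() {
        BindingContext bc = BindingContext.getCurrent();
        DCDataControl dc = bc.findDataControl("AppModuleDataControl"); // assumed default name
        return (ApplicationModule) dc.getDataProvider();
    }
}

In other words, the task flow's transaction options only get teeth once some activity performs the equivalent of this lookup via its pageDef.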

That's why in the demonstrations above I referred to the intermediate task flow as the "pregnant" task flow. This task flow knows it wants to establish a transaction with the underlying business service's Application Module through a binding layer Data Control; it's effectively pregnant, waiting for that event, but it can't do so until one of its child activities exercises a pageDef file with a call to the business service (to take the analogy too far, you're in labour expecting your first child, you've rushed to the hospital, but you're told you'll have to wait as the midwife hasn't arrived yet ... you know at this point you're going to have this damned kid, but you've got to desperately wait until the midwife arrives ;-)

By chance in our example, the first activity in the PregnantTaskFlow that does have a pageDef file is the OrgList.jspx page, which resides in the OrgTaskFlow1 task flow called via a task flow call in the PregnantTaskFlow. So in a sense, even though OrgTaskFlow1 says it won't create a transaction, it in fact does.

Why does this matter?

At this point you might think this is all a very interesting discussion, but rather an academic exercise too. Logically there's still only one transaction established for the combination of the PregnantTaskFlow and OrgTaskFlow1, regardless of where the transaction is actually established. So why does it matter?

Recently on the ADF Enterprise Methodology Group I started a discussion on building task flows for reuse. Of specific interest, I asked what the most flexible data control scope and transaction options are to pick, such that we don't limit the reusability of our task flows. If we set the wrong options, such as Always Use Existing Transaction, errors like ADFC-00006 may make the task flow unreusable, or at least limit its reuse to specific scenarios.

The initial conclusion from the ADF EMG post was that only the Use Existing Transaction If Possible and Shared data control scope options should be used, as this combination will reuse an existing transaction if one is available from the calling task flow, or establish a new transaction if one isn't.

However, from the conclusion of this post we can see the Always Use Existing Transaction option is in fact more flexible than first thought, as long as at some point we wrap it in a task flow that starts a transaction, giving us another option when building reusable task flows.

Some caveats

A caveat, also shared by the next blog post on task flow transactions, is that both posts describe the transaction behaviours in the context of interaction with ADF Business Components. Readers should not assume that the same transaction behaviour will be exhibited by different underlying business services such as EJBs, POJOs or Web Services. As an example, Web Services don't have the concept of transactions, so we can probably guess that there's no point using anything but the No Controller Transaction option .... however again you need to experiment with these alternatives yourself; don't base your conclusions on this post.

Further reading

If you've got this far, I highly recommend you follow up this post by reading my next blog post on root Application Modules and how the transaction options of task flows change their behaviour.