Thursday, 8 December 2011

Apache Ivy and JDeveloper integration

As software applications grow, a common technique to reduce the complexity is to break the overall solution into separately built and deployed modules. This allows each component to be worked on independently without being overwhelmed with detail, though the cost of reassembling and building the application is the trade-off for the added flexibility. When modules become reusable across applications the reassembly and build problem is exacerbated and it becomes essential to track which version of each module is required by each application. Such problems can be reduced by the introduction of dependency management tools.

In the Java world there are a few well known tools for dependency management including Apache Ivy and Apache Maven. Strictly speaking Ivy is just a dependency management tool which integrates with Apache Ant, while Maven is a broader set of tools where dependency management is just one of its specialties.

In the ADF world, thanks to the inclusion of ADF Libraries (a.k.a. modules) that can be shared across applications, dependency management is also a relevant problem. Recently I went through the exercise of including Apache Ivy into our JDeveloper 11g and Hudson mix for an existing set of applications. This blog post attempts to describe the configuration of Apache Ivy in the context of our JDeveloper setup in order to assist others setting up a similar installation. The post introduces a simplistic application (downloadable from here) with one dependency to demonstrate the Ivy features, in very much an A-B-C style to assist the reader's learning.

Readers should be careful to note this post doesn't attempt to explain all the ins and outs of using Apache Ivy, just a successful configuration on our part. Readers are encouraged to seek out further resources to assist their learning of Apache Ivy.

Assumptions

This blog post assumes you understand the following concepts:

ADF Libraries
Resource palette
Apache Ant
ojdeploy

In the beginning there was... ah... ApplicationA

To start out with, our overall application contains one JDeveloper application workspace known as ApplicationA, installed under C:/JDeveloper/mywork as follows:

ApplicationA initially has no dependencies and can be built and run standalone.

Within the application we create a separate project entitled "Build" with an Ant build script entitled "pre-ivy.build.xml" to build our application using ojdeploy as follows:
<?xml version="1.0" encoding="UTF-8" ?>
<project xmlns="antlib:org.apache.tools.ant" name="Build" basedir=".">
  <property name="jdev.ojdeploy.path" value="C:\java\jdeveloper\JDev11gBuild6081\jdeveloper\jdev\bin\ojdeploy.exe"/>
  <property name="jdev.ant.library" value="C:\java\jdeveloper\JDev11gBuild6081\jdeveloper\jdev\lib\ant-jdeveloper.jar"/>
  <target name="Build">
    <taskdef name="ojdeploy" classname="oracle.jdeveloper.deploy.ant.OJDeployAntTask" uri="oraclelib:OJDeployAntTask"
             classpath="${jdev.ant.library}"/>
    <ora:ojdeploy xmlns:ora="oraclelib:OJDeployAntTask" executable="${jdev.ojdeploy.path}"
                  ora:buildscript="C:\Temp\build.log" ora:statuslog="C:\Temp\status.log">
      <ora:deploy>
        <ora:parameter name="workspace" value="C:\JDeveloper\mywork\ApplicationA\ApplicationA.jws"/>
        <ora:parameter name="profile" value="ApplicationA"/>
        <ora:parameter name="outputfile" value="C:\JDeveloper\mywork\ApplicationA\deploy\ApplicationA"/>
      </ora:deploy>
    </ora:ojdeploy>
  </target>
</project>
(Note the jdev.ojdeploy.path & jdev.ant.library properties that map to your JDeveloper installation. You will need to change these to suit your environment. This will need to be done for both ApplicationA and the following ADFLibrary1)
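As an aside, one way to avoid hand-editing these properties on every developer machine (a sketch only; the downloadable application keeps the properties inline as shown above) is to load them from a per-developer properties file at the top of each build script:

<!-- build.properties (per developer machine) would contain entries such as
     jdev.ojdeploy.path=C:/java/jdeveloper/JDev11gBuild6081/jdeveloper/jdev/bin/ojdeploy.exe -->
<property file="${basedir}/build.properties"/>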

And then ApplicationA begat ADFLibrary1

Now we'll create a new task flow in a separate application workspace known as ADFLibrary1 which ApplicationA is dependent on:

We add an ADF Library JAR deployment profile to ADFLibrary1's ViewController project to generate ADFLibrary1.jar to:

C:\JDeveloper\mywork\ADFLibrary1\ViewController\deploy\ADFLibrary1.jar

Similar to ApplicationA we add a Build project to our application workspace and a pre-ivy.build.xml Ant build script using ojdeploy:
<?xml version="1.0" encoding="UTF-8" ?>
<project xmlns="antlib:org.apache.tools.ant" name="Build" basedir=".">
  <property name="jdev.ojdeploy.path" value="C:\java\jdeveloper\JDev11gBuild6081\jdeveloper\jdev\bin\ojdeploy.exe"/>
  <property name="jdev.ant.library" value="C:\java\jdeveloper\JDev11gBuild6081\jdeveloper\jdev\lib\ant-jdeveloper.jar"/>
  <target name="Build">
    <taskdef name="ojdeploy" classname="oracle.jdeveloper.deploy.ant.OJDeployAntTask" uri="oraclelib:OJDeployAntTask"
             classpath="${jdev.ant.library}"/>
    <ora:ojdeploy xmlns:ora="oraclelib:OJDeployAntTask" executable="${jdev.ojdeploy.path}"
                  ora:buildscript="C:\Temp\build.log" ora:statuslog="C:\Temp\status.log">
      <ora:deploy>
        <ora:parameter name="workspace" value="C:\JDeveloper\mywork\ADFLibrary1\ADFLibrary1.jws"/>
        <ora:parameter name="project" value="ViewController"/>
        <ora:parameter name="profile" value="ADFLibrary1"/>
        <ora:parameter name="outputfile" value="C:\JDeveloper\mywork\ADFLibrary1\ViewController\deploy\ADFLibrary1"/>
      </ora:deploy>
    </ora:ojdeploy>
  </target>
</project>
From here we want to attach ADFLibrary1.jar to ApplicationA's ViewController project. Over time we might have many JARs we want to attach, so rather than mapping to several different deploy directories under each ADF Library application workspace, we'll assume the libraries are instead available under a central "lib" directory as follows:

Experienced readers will know to set up a Resource Palette "File Connection" mapping to C:\JDeveloper\mywork\lib and then simply add the JARs from the palette.

Adding Apache Ivy

At this point we have a rudimentary form of dependency management set up, where a logical version 1 of ApplicationA consumes a logical version 1 of ADFLibrary1 through the ADF Library JAR attached to ApplicationA's ViewController project. Note the word "rudimentary". Currently there is no way to track versions. If we have separate versions of ApplicationA dependent on separate versions of ADFLibrary1, developers have to be very careful to check out and build the correct versions, and there's nothing inherently obvious in the generated JAR file names to give us an idea of what versions are being used.

Let's introduce Apache Ivy into the mix with this simplistic dependency model as a basis for learning, to see how Ivy solves the versioning dependency issue.

Adding ivy.xml to each module

Ivy requires that each module have an ivy.xml. The ivy.xml file among other things describes for each module:

a) The module name
b) The version of the module
c) The artefacts the module publishes
d) The module's dependencies, including the version of each dependency

For our existing ADFLibrary1 we'll add an ivy.xml file to our Build project containing the following details:
<?xml version="1.0" encoding="UTF-8"?>
<ivy-module version="2.0">
  <info organisation="sage" module="ADFLibrary1" revision="1"/>
  <configurations>
    <conf name="jar" description="Java archive"/>
    <conf name="ear" description="Enterprise archive"/>
  </configurations>
  <publications>
    <artifact name="ADFLibrary1" conf="jar" ext="jar"/>
  </publications>
  <!-- <dependencies> There are no dependencies for this module </dependencies> -->
</ivy-module>
Of note:

a) The module name in the <info> tag
b) The revision/version number in the <info> tag
c) The publication of an ADF Library jar in the <publications> tag
d) And that this module is not dependent on any other modules through the commented out <dependencies> tag

(You might also note the <configurations> tag. Configurations define the types of artefacts we can possibly generate for the module. In this case we're creating an ADF Library "JAR", but alternatively for example we could produce a WAR or EAR file or some other sort of artefact. For purposes of this blog post we'll keep this relatively simple and just stick to JARs and EARs).

For our existing ApplicationA its ivy.xml file under the Build project will look as follows:
<?xml version="1.0" encoding="UTF-8"?>
<ivy-module version="2.0">
  <info organisation="sage" module="ApplicationA" revision="1"/>
  <configurations>
    <conf name="jar" description="Java archive"/>
    <conf name="ear" description="Enterprise archive"/>
  </configurations>
  <publications>
    <artifact name="ApplicationA" conf="ear" ext="ear"/>
  </publications>
  <dependencies>
    <dependency org="sage" name="ADFLibrary1" rev="1">
      <artifact name="ADFLibrary1" ext="jar"/>
    </dependency>
  </dependencies>
</ivy-module>
Of note:

a) The module name ApplicationA in the <info> tag
b) The revision/version number 1 in the <info> tag
c) The publication of an EAR in the <publications> tag
d) And most importantly, a dependency on ADFLibrary1, specifically release/version 1.

It's this last point that is most important as not only does it track the dependencies between modules (which truthfully JDev was already doing for us) but the ivy.xml file also tracks the version dependency, namely ApplicationA release/version 1 is dependent on version/release 1 of ADFLibrary1.

Apache Ivy Repository

In the previously described application configuration we were assuming the build of ApplicationA and ADFLibrary1 occurred on the same developer machine. It's relatively simple for one developer to copy the JARs to the correct location to satisfy the dependencies. Yet in a typical development environment there will be multiple developers working on different modules across different developer machines. Moving JARs between developer PCs becomes problematic. We really need some sort of developer repository to share the module archives.

At this point we introduce an Apache Ivy repository into our solution. Simplistically, the Ivy repository is a location developers can publish JARs to, and from which other developers, when building an application with dependencies, can download those dependencies.

Ivy supports different types of repositories which are documented under Resolvers in the Ivy documentation. For purposes of this blog post we'll use the simplest repository type of "FileSystem".

In order to make use of the FileSystem Ivy repository all developers must have access to a file typically called ivysettings.xml. This file defines, among other settings, where the repository resides. How you distribute this file to developers is up to you; maybe it's located on a shared network location, maybe a copy is checked out to a common local location. For purposes of this blog post we'll assume it resides on every developer's machine under C:\JDeveloper\ivy:

The following reveals the contents of a possible ivysettings.xml file:
<ivysettings>
  <property name="ivy.repo.dir" value="C:\JDeveloper\ivy\repositories\development"/>
  <resolvers>
    <chain>
      <filesystem name="repository">
        <ivy pattern="${ivy.repo.dir}/[module]/ivy[module]_[revision].xml"/>
        <artifact pattern="${ivy.repo.dir}/[module]/[type][artifact]_[revision].[ext]"/>
      </filesystem>
    </chain>
  </resolvers>
</ivysettings>
Points to consider:

1) Note the ivy.repo.dir property. Typically this would point to your own //SomeServer/YourRepositoryLocation which all developers can access on your local network. For the purposes of this blog post, in order to give readers a single zip file that they can download and use, I've changed this setting to instead locate the repository at C:\JDeveloper\ivy\repositories\development. This certainly *isn't* a good location for a shared repository, but one that is workable for demonstration purposes.

2) The <resolvers> <chain> defines the list of repositories for Ivy to publish to or download dependencies from. In this case we've only configured one repository, but there's nothing stopping you having a series of repositories.

3) The <ivy> subtag of the <filesystem> tag defines how Ivy will store and search for its own metadata files in the repository; these store information such as the module name, versions and more, essentially copied from your ivy.xml files.

4) The <artifact> tag defines how Ivy will store and search for the actual artefacts you generate (such as the ADF Library JARs) in the repository.

With regard to the last two points it's best to leave the patterns at their defaults, as in the end the repository can be treated as a black box: you don't really care how it works, as long as Ivy allows you to publish and retrieve files from the repository.
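To illustrate how the two patterns expand, publishing revision 1 of ADFLibrary1 against the settings above produces files at paths like the following under the repository (we'll see exactly these paths in the publish output later):

${ivy.repo.dir}/ADFLibrary1/ivyADFLibrary1_1.xml
${ivy.repo.dir}/ADFLibrary1/jarADFLibrary1_1.jar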

Configuring Ant to *understand* Ivy

With the ivy.xml and ivysettings.xml files in place, we now need to configure our Ant build scripts to interpret the settings and work with our repository during builds.

First we download Apache Ivy and install it into a location each developer's machine can access. This blog post assumes Ivy v2.2.0 and that the associated ivy-2.2.0.jar has been unzipped to C:\JDeveloper\ivy\apache-ivy-2.2.0:

Next we modify our existing build scripts for each module. In the build.xml file for *both* ADFLibrary1 and ApplicationA we insert the following code:

(Note in the downloadable application this code resides in build.xml, not pre-ivy.build.xml which was documented earlier in this blog post).
<property name="ivy.default.ivy.user.dir" value="C:\JDeveloper\ivy"/>
<property name="ivy.default.ivy.lib.dir" value="C:\JDeveloper\lib"/>
<path id="ivy.lib.path">
  <fileset dir="C:\JDeveloper\ivy\apache-ivy-2.2.0" includes="*.jar"/>
</path>
<taskdef resource="org/apache/ivy/ant/antlib.xml" uri="antlib:org.apache.ivy.ant" classpathref="ivy.lib.path"/>
<ivy:configure file="C:\JDeveloper\ivy\ivysettings.xml"/>
<ivy:info file="./ivy.xml"/>
Items to note:

1) Setting the property ivy.default.ivy.user.dir changes the default location under which Ivy stores local copies of the data downloaded from the repository.

2) Setting the property ivy.default.ivy.lib.dir defines the location where the JAR files should be ultimately delivered for dependent modules to make use of.

3) The <ivy:configure> tag tells Ivy where the ivysettings.xml file is located which includes the configuration information about the repositories.

4) The <ivy:info> tag tells Ivy where the current module's ivy.xml file is located.

Configuring Ant to *use* Ivy

With the previous Ivy setup we're now ready to start building using Ivy via Ant.

Let's consider our goals. What we want to first do is build and then publish ADFLibrary1 to the Ivy repository. Then subsequently for ApplicationA we want to download ADFLibrary1 from the Ivy repository, then build ApplicationA.

To achieve the first goal, we already have a Build Ant target in the ADFLibrary1 build.xml. So we just need to add another target "Publish" which will take the artefacts generated from the Build target as follows:
<target name="Publish">
<ivy:publish resolver="repository" overwrite="true" pubrevision="${ivy.revision}" update="true">
<ivy:artifacts pattern="../ViewController/deploy/[artifact].[ext]"/>
</ivy:publish>
</target>
Items to note:

1) The <ivy:publish> tag, which says which resolver (ie. which repository) to publish to, what to do if the exact file and revision already exists in the repository, and what revision/version to publish the file as. With regard to ${ivy.revision}, this variable is derived from ADFLibrary1's ivy.xml file.

2) The <ivy:artifacts> tag, which tells the publish command where to find the artifact to publish.

3) Because of the <ivy:artifacts> tag there's an implicit dependency that the module has already been built. This could easily be catered for in the overall build script by making an <antcall> to the Build target before the <ivy:publish> task runs (see the sketch below), but for purposes of simplicity this change hasn't been made for this blog post.
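As a sketch of that idea (not included in the downloadable application), the Publish target could call the Build target first via an <antcall>, or equivalently via the target's depends attribute:

<target name="Publish">
  <antcall target="Build"/>
  <ivy:publish resolver="repository" overwrite="true" pubrevision="${ivy.revision}" update="true">
    <ivy:artifacts pattern="../ViewController/deploy/[artifact].[ext]"/>
  </ivy:publish>
</target>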

At this point let's see what outputs we see if we run the Build and Publish scripts. First when we run the Build target the JDeveloper log window reports:
Buildfile: C:\JDeveloper\mywork\ADFLibrary1\Build\build.xml
[ivy:configure] :: Ivy 2.2.0 - 20100923230623 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = C:\JDeveloper\ivy\ivysettings.xml

Build:
[ora:ojdeploy] ----build file----
[ora:ojdeploy] <?xml version = '1.0' standalone = 'yes'?>
[ora:ojdeploy] <ojdeploy-build>
[ora:ojdeploy] <deploy>
[ora:ojdeploy] <parameter name="workspace" value="C:\JDeveloper\mywork\ADFLibrary1\ADFLibrary1.jws"/>
[ora:ojdeploy] <parameter name="project" value="ViewController"/>
[ora:ojdeploy] <parameter name="profile" value="ADFLibrary1"/>
[ora:ojdeploy] <parameter name="outputfile" value="C:\JDeveloper\mywork\ADFLibrary1\ViewController\deploy\ADFLibrary1"/>
[ora:ojdeploy] </deploy>
[ora:ojdeploy] <defaults>
[ora:ojdeploy] <parameter name="statuslogfile" value="C:\Temp\status.log"/>
[ora:ojdeploy] </defaults>
[ora:ojdeploy] </ojdeploy-build>
[ora:ojdeploy] ------------------
[ora:ojdeploy] 07/12/2011 1:31:42 PM oracle.security.jps.util.JpsUtil disableAudit
[ora:ojdeploy] INFO: JpsUtil: isAuditDisabled set to true
[ora:ojdeploy] 07/12/2011 1:31:43 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer prepareImpl
[ora:ojdeploy] INFO: ---- Deployment started. ----
[ora:ojdeploy] 07/12/2011 1:31:43 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer printTargetPlatform
[ora:ojdeploy] INFO: Target platform is Standard Java EE.
[ora:ojdeploy] 07/12/2011 1:31:43 PM oracle.jdevimpl.deploy.common.ProfileDependencyAnalyzer deployImpl
[ora:ojdeploy] INFO: Running dependency analysis...
[ora:ojdeploy] 07/12/2011 1:31:43 PM oracle.jdeveloper.deploy.common.BuildDeployer build
[ora:ojdeploy] INFO: Building...
[ora:ojdeploy] Compiling...
[ora:ojdeploy] [1:31:45 PM] Successful compilation: 0 errors, 0 warnings.
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.jdevimpl.deploy.common.ModulePackagerImpl deployProfiles
[ora:ojdeploy] INFO: Deploying profile...
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.adfdt.controller.adfc.source.deploy.AdfcConfigDeployer deployerPrepared
[ora:ojdeploy] INFO: Moving WEB-INF/adfc-config.xml to META-INF/adfc-config.xml
[ora:ojdeploy]
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.jdeveloper.deploy.jar.ArchiveDeployer logFileWritten
[ora:ojdeploy] INFO: Wrote Archive Module to file:/C:/JDeveloper/mywork/ADFLibrary1/ViewController/deploy/ADFLibrary1.jar
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer finishImpl
[ora:ojdeploy] INFO: Elapsed time for deployment: 3 seconds
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer finishImpl
[ora:ojdeploy] INFO: ---- Deployment finished. ----
[ora:ojdeploy] Status summary written to /C:/Temp/status.log
At the beginning of the output you can see Ivy being initialized but at the moment it's mostly not used. From the output you can see the JAR being built by ojdeploy and placed under C:/JDeveloper/mywork/ADFLibrary1/ViewController/deploy.

Next when we run the Publish task the following output is produced:
Buildfile: C:\JDeveloper\mywork\ADFLibrary1\Build\build.xml
[ivy:configure] :: Ivy 2.2.0 - 20100923230623 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = C:\JDeveloper\ivy\ivysettings.xml

Publish:
[ivy:publish] :: publishing :: sage#ADFLibrary1
[ivy:publish] published ADFLibrary1 to C:\JDeveloper\ivy\repositories\development/ADFLibrary1/jarADFLibrary1_1.jar
[ivy:publish] published ivy to C:\JDeveloper\ivy\repositories\development/ADFLibrary1/ivyADFLibrary1_1.xml
Beyond the initial Ivy setup, importantly we can see the call to <ivy:publish> pushing the JAR produced by the previous Build step into the repository. If we look at our C: drive where the repository is located we can indeed see files now sitting in the repository:

The individual files are beyond the scope of this discussion; suffice to say this is the structure Ivy has put in place.

At this point we've achieved our first goal of building and publishing ADFLibrary1 to the Ivy repository. Let's move on to our second goal for ApplicationA, where we want to download ADFLibrary1 from the Ivy repository, then build ApplicationA.

In order to do this we'll add a new target "Download_dependencies" to the ApplicationA build.xml as follows:
<target name="Download_dependencies">
<ivy:cleancache/>
<ivy:resolve/>
<ivy:retrieve pattern="${ivy.default.ivy.lib.dir}/[artifact].[ext]" type="jar"/>
</target>
Of note:

1) The <ivy:cleancache> tag clears the ${ivy.default.ivy.user.dir}\Cache of previously downloaded dependencies. This is only really necessary if, when you're uploading dependencies, you're not creating new versions but rather overwriting an existing release. In this latter case Ivy will in preference use the cached copy of the JAR rather than retrieving the updated JAR from the repository. Flushing the cache solves this issue as the JARs are then downloaded each time.

2) The <ivy:resolve> tag which loads the dependency metadata for the current module from the associated ivy.xml file, determines which artefacts to obtain from the repository and downloads them to the ${ivy.default.ivy.user.dir}\Cache directory on the local PC.

3) The <ivy:retrieve> tag then searches the Cache directory for the required JAR files and places them in the location where the application expects to find them, namely C:\JDeveloper\lib

If we run this task we see in the logs:
Buildfile: C:\JDeveloper\mywork\ApplicationA\Build\build.xml
[ivy:configure] :: Ivy 2.2.0 - 20100923230623 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = C:\JDeveloper\ivy\ivysettings.xml

Download_dependencies:
[ivy:resolve] :: resolving dependencies :: sage#ApplicationA;1
[ivy:resolve] confs: [jar, ear]
[ivy:resolve] found sage#ADFLibrary1;1 in repository
[ivy:resolve] downloading C:\JDeveloper\ivy\repositories\development\ADFLibrary1\jarADFLibrary1_1.jar ...
[ivy:resolve] .. (6kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] [SUCCESSFUL ] sage#ADFLibrary1;1!ADFLibrary1.jar (0ms)
[ivy:resolve] :: resolution report :: resolve 109ms :: artifacts dl 0ms
---------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------
| jar | 1 | 1 | 1 | 0 || 1 | 1 |
| ear | 1 | 1 | 1 | 0 || 1 | 1 |
---------------------------------------------------------
[ivy:retrieve] :: retrieving :: sage#ApplicationA
[ivy:retrieve] confs: [jar, ear]
[ivy:retrieve] 1 artifacts copied, 0 already retrieved (6kB/0ms)
Beyond the initial configuration of Ivy, in the output you can see the <ivy:resolve> task resolving the dependency of ApplicationA on ADFLibrary1 version 1, then downloading the file to the cache. Finally the <ivy:retrieve> task retrieves the file from the cache and copies it to the local lib directory (though this isn't that obvious from the logs).

If you now Build ApplicationA you will see it compiles correctly. To check it doesn't build when the ADFLibrary1.jar is not sitting in the C:\JDeveloper\lib directory, delete the JAR and rebuild ApplicationA.

Making and building with new revisions

Over time your modules will include new revisions. You will of course be checking these changes in and out of your version control system such as Subversion. How do you cater for the new versions with regard to Ivy?

Imagine the scenario where ApplicationA release 3 is now dependent on ADFLibrary1 revision 6. This requires two simple changes.

Firstly, in the ADFLibrary1 ivy.xml, change the revision number in the <info> tag to 6, then build and publish.

Second, in ApplicationA's ivy.xml, update its revision number to 3, then in the <dependencies> tag update the ADFLibrary1 dependency's revision number to 6. Thereafter, when you download the dependencies for ApplicationA revision 3, Ivy will download revision 6 of ADFLibrary1 from the repository.
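As a sketch, the changed lines in the two ivy.xml files would then look like this:

<!-- ADFLibrary1 ivy.xml -->
<info organisation="sage" module="ADFLibrary1" revision="6"/>

<!-- ApplicationA ivy.xml -->
<info organisation="sage" module="ApplicationA" revision="3"/>
...
<dependencies>
  <dependency org="sage" name="ADFLibrary1" rev="6">
    <artifact name="ADFLibrary1" ext="jar"/>
  </dependency>
</dependencies>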

Conclusion

At this point we have all the moving parts to make use of Ivy with JDeveloper and ADF to implement a dependency management solution. While the example is contrived, it shows the use of:

1) The ivy.xml file to define each module, what it publishes and what it depends on
2) The ivysettings.xml file to define the location of the shared repository
3) The Ivy Ant tasks for publishing modules to the repository, as well as downloading modules from the repository

If I have time I will write a different blog post to show how transitive dependencies work in Ivy. The nice thing about Ivy is it handles these automagically, so there's not much to configure beyond explaining a working example.

Beyond this there really isn't that much else to explain. Working out the nuances of Ivy takes around a week, and retrofitting it into your environment takes longer, but beyond that Ivy is pretty simple in that it does one thing and it does it well.

Finally, note I'm not advocating Apache Ivy over Apache Maven with this post; ultimately this post simply documents how to use Ivy with JDeveloper, and readers need to make their own choice about which tool, if any, to use. Future versions of JDeveloper (see the Maven integration section in the following blog post) are scheduled to have improved Maven integration, so readers should take care not to discount Maven as an option.

Errata

This was tested against JDev 11.1.2.1.0 and 11.1.1.4.0, but in essence should run against any JDev version with Ant support.

Tuesday, 22 November 2011

ADF bug: missing af:column borders in af:table for IE7

There's a rather obscure JDeveloper bug that only affects IE7, for af:columns in af:tables that show af:outputText fields based on dates that are null (phew, try and say that with a mouth full of wheaties). It occurs in 11.1.1.4.0 and 11.1.2.0.0 (and all versions in between, it's assumed).

In the previous picture from IE7 if you look closely, you’ll notice that the HireDate2 column has lost its border for the null entries. Note the other columns even when they are null, still have a border.

If we look under IE8 (or any other browser for that matter) we see the problem doesn’t occur at all:

The problem is being caused by 2 separate issues:

1) IE7 does not render borders for HTML table cells (i.e. the <td> tag) if the cell contains no data. This can be fixed if the cell contains a &nbsp;.

2) ADF Faces RC includes a &nbsp; for empty table cells, except for null date af:outputText fields that also have child tags other than converter and validator tags.

To demonstrate the bug the MissingTableBorders application includes a simple test case. The application contains a View Object “EmployeesView” with a query based on the Oracle HR sample schema:
SELECT emp.EMPLOYEE_ID, 
(CASE WHEN employee_id < 105 THEN first_name ELSE null END) AS FIRST_NAME1,
(CASE WHEN employee_id < 105 THEN first_name ELSE null END) AS FIRST_NAME2,
(CASE WHEN employee_id < 105 THEN hire_date ELSE null END) AS HIRE_DATE1,
(CASE WHEN employee_id < 105 THEN hire_date ELSE null END) AS HIRE_DATE2
FROM EMPLOYEES emp
WHERE emp.EMPLOYEE_ID BETWEEN 100 AND 110
ORDER BY emp.EMPLOYEE_ID
The query is designed to return two String columns that will have a mix of null and non null values, and two date columns that will also have a mix of null and non null values. If we run the Business Components Browser the data appears as follows:

Next is the code for our JSPX page "Employees.jspx" containing an af:table based on the VO from above. I've deliberately cut out the surrounding tags to focus on the tags that matter:
<af:table value="#{bindings.EmployeesView1.collectionModel}"
          var="row"
          rows="#{bindings.EmployeesView1.rangeSize}"
          emptyText="#{bindings.EmployeesView1.viewable ? 'No data.' : 'Access Denied.'}"
          fetchSize="#{bindings.EmployeesView1.rangeSize}"
          rowBandingInterval="0"
          selectedRowKeys="#{bindings.EmployeesView1.collectionModel.selectedRow}"
          selectionListener="#{bindings.EmployeesView1.collectionModel.makeCurrent}"
          rowSelection="single"
          id="t1">
  <af:column sortProperty="EmployeeId" sortable="false"
             headerText="#{bindings.EmployeesView1.hints.EmployeeId.label}" id="c5">
    <af:outputText value="#{row.EmployeeId}" id="ot4">
      <af:convertNumber groupingUsed="false" pattern="#{bindings.EmployeesView1.hints.EmployeeId.format}"/>
    </af:outputText>
  </af:column>
  <af:column sortProperty="FirstName1" sortable="false"
             headerText="#{bindings.EmployeesView1.hints.FirstName1.label}" id="c4">
    <af:outputText value="#{row.FirstName1}" id="ot5">
    </af:outputText>
  </af:column>
  <af:column sortProperty="FirstName2" sortable="false"
             headerText="#{bindings.EmployeesView1.hints.FirstName2.label}" id="c3">
    <af:outputText value="#{row.FirstName2}" id="ot2">
      <af:clientAttribute name="ItemValue" value="#{row.FirstName2}"/>
    </af:outputText>
  </af:column>
  <af:column sortProperty="HireDate1" sortable="false"
             headerText="#{bindings.EmployeesView1.hints.HireDate1.label}" id="c1">
    <af:outputText value="#{row.HireDate1}" id="ot3">
      <af:convertDateTime pattern="#{bindings.EmployeesView1.hints.HireDate1.format}"/>
    </af:outputText>
  </af:column>
  <af:column sortProperty="HireDate2" sortable="false"
             headerText="#{bindings.EmployeesView1.hints.HireDate2.label}" id="c2">
    <af:outputText value="#{row.HireDate2}" id="ot1">
      <af:convertDateTime pattern="#{bindings.EmployeesView1.hints.HireDate2.format}"/>
      <af:clientAttribute name="ItemValue" value="#{row.HireDate2}"/>
    </af:outputText>
  </af:column>
</af:table>
The code was created by JDeveloper by dragging and dropping the VO from the data control palette, with the following changes:

a) There are two columns to display data from the first_name column. The only difference between them is the first_name2 column includes an additional af:clientAttribute tag.

b) There are two columns to display data from the hire_date column. Similar to the first_name columns, they only differ in the fact that hire_date2 includes an af:clientAttribute tag.

When this page renders in the browser the generated HTML content for the rows of the table is as follows (note the formatting and the comments were added by me to make it easier to read):
<tbody>
<!-- ---------- Record 100 ---------- -->
<tr _afrrk="0" class="xxy ">
<td style="width:100px;" nowrap="" class="xxv"><nobr>100</nobr></td>
<td style="width:100px;" nowrap="" class="xxv"><nobr>Steven</nobr></td>
<td style="width:100px;" nowrap="" class="xxv"><nobr><span id="t1:0:ot2">Steven</span></nobr></td>
<td style="width:100px;" nowrap="" class="xxv"><nobr>17/06/1987</nobr></td>
<td style="width:100px;" nowrap="" class="xxv"><nobr><span id="t1:0:ot1">17/06/1987</span></nobr></td>
</tr>
<!-- ---------- Record 101 ---------- -->
<tr _afrrk="1" class="xxy">
<td nowrap="" class="xxv"><nobr>101</nobr></td>
<td nowrap="" class="xxv"><nobr>Neena</nobr></td>
<td nowrap="" class="xxv"><nobr><span id="t1:1:ot2">Neena</span></nobr></td>
<td nowrap="" class="xxv"><nobr>21/09/1989</nobr></td>
<td nowrap="" class="xxv"><nobr><span id="t1:1:ot1">21/09/1989</span></nobr></td>
</tr>
<!-- ---------- Record 102 ---------- -->
<tr _afrrk="2" class="xxy">
<td nowrap="" class="xxv"><nobr>102</nobr></td><td nowrap="" class="xxv"><nobr>Lex</nobr></td>
<td nowrap="" class="xxv"><nobr><span id="t1:2:ot2">Lex</span></nobr></td>
<td nowrap="" class="xxv"><nobr>13/01/1993</nobr></td>
<td nowrap="" class="xxv"><nobr><span id="t1:2:ot1">13/01/1993</span></nobr></td>
</tr>
<!-- ---------- Record 103 ---------- -->
<tr _afrrk="3" class="xxy">
<td nowrap="" class="xxv"><nobr>103</nobr></td>
<td nowrap="" class="xxv"><nobr>Alexander</nobr></td>
<td nowrap="" class="xxv"><nobr><span id="t1:3:ot2">Alexander</span></nobr></td>
<td nowrap="" class="xxv"><nobr>3/01/1990</nobr></td>
<td nowrap="" class="xxv"><nobr><span id="t1:3:ot1">3/01/1990</span></nobr></td>
</tr>
<!-- ---------- Record 104 ---------- -->
<tr _afrrk="4" class="xxy">
<td nowrap="" class="xxv"><nobr>104</nobr></td>
<td nowrap="" class="xxv"><nobr>Bruce</nobr></td>
<td nowrap="" class="xxv"><nobr><span id="t1:4:ot2" class="">Bruce</span></nobr></td>
<td nowrap="" class="xxv"><nobr>21/05/1991</nobr></td>
<td nowrap="" class="xxv"><nobr><span id="t1:4:ot1">21/05/1991</span></nobr></td>
</tr>
<!-- ---------- Record 105 ---------- -->
<tr _afrrk="5" class="xxy"><td nowrap="" class="xxv"><nobr>105</nobr></td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:5:ot2"></span></nobr> </td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:5:ot1"></span></nobr></td>
</tr>
<!-- ---------- Record 106 ---------- -->
<tr _afrrk="6" class="p_AFSelected p_AFFocused xxy">
<td nowrap="" class="xxv"><nobr>106</nobr></td><td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:6:ot2"></span></nobr> </td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:6:ot1"></span></nobr></td>
</tr>
<!-- ---------- Record 107 ---------- -->
<tr _afrrk="7" class="xxy"><td nowrap="" class="xxv"><nobr>107</nobr></td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:7:ot2"></span></nobr> </td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:7:ot1"></span></nobr></td>
</tr>
<!-- ---------- Record 108 ---------- -->
<tr _afrrk="8" class="xxy">
<td nowrap="" class="xxv"><nobr>108</nobr></td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:8:ot2"></span></nobr> </td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:8:ot1"></span></nobr></td>
</tr>
<!-- ---------- Record 109 ---------- -->
<tr _afrrk="9" class="xxy">
<td nowrap="" class="xxv"><nobr>109</nobr></td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:9:ot2"></span></nobr> </td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:9:ot1"></span></nobr></td>
</tr>
<!-- ---------- Record 110 ---------- -->
<tr _afrrk="10" class="xxy">
<td nowrap="" class="xxv"><nobr>110</nobr></td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:10:ot2"></span></nobr> </td>
<td nowrap="" class="xxv"><nobr></nobr> </td>
<td nowrap="" class="xxv"><nobr><span id="t1:10:ot1"></span></nobr></td>
</tr>
</tbody>
If you look at records 100 to 104 all columns include data.

If we look at records 105 to 110 note:

a) The FirstName1 column when null includes a &nbsp; to forcefully place a blank entry into the cell.

b) The FirstName2 column does exactly the same, remembering the FirstName2 column includes an additional af:clientAttribute tag.

c) For the HireDate1 column it also includes a &nbsp;. Remember the HireDate1 column *does not* include an af:clientAttribute tag.

d) For the HireDate2 column it *does not* include a &nbsp;, even though the HireDate2 values are null. Remember the HireDate2 column *does include* an af:clientAttribute tag.

At this point we only see differing behaviour with af:outputText values in af:columns where they show Dates *and* the af:outputText includes an af:clientAttribute tag.

From my testing, converter and validator tags added to the af:outputText don't exhibit the same behaviour. However any other child tag, not just an af:clientAttribute but even an af:clientListener for example, will result in the missing &nbsp;.

This in itself isn't an issue until we consider IE7. If you render this page in IE8 the null date columns with an af:clientAttribute tag still show the cell borders:

Yet in IE7 we get this:

While the issue is particular to IE7, the issue could be fixed by ADF Faces RC consistently generating the &nbsp; entry as described in the HTML generated above.

In discussing this bug (12942411) with Oracle staff it turns out there is a broader base bug 9682969 where this issue occurs for more than just date columns. Unfortunately the problem is not easily fixable by Oracle as it requires the af:table and af:column components to know if the child component (in this example an af:outputText) will be null before it and the data it refers to is accessed and rendered.

The simple workaround as proposed by Oracle is to not render the child component at all if the data value is null, simply by including code similar to the following:
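(The original snippet hasn't survived in this copy of the post; the following is a sketch of the sort of check intended, using the rendered property on the HireDate2 af:outputText from the earlier listing so the component isn't rendered at all when its value is null.)

<af:outputText value="#{row.HireDate2}" id="ot1"
               rendered="#{row.HireDate2 != null}">
  <af:convertDateTime pattern="#{bindings.EmployeesView1.hints.HireDate2.format}"/>
  <af:clientAttribute name="ItemValue" value="#{row.HireDate2}"/>
</af:outputText>
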
My thanks to the Oracle staff who assisted in investigating and resolving this issue.

A sample application can be downloaded from here.

Wednesday, 9 November 2011

ADF Faces - a logic bomb in the order of bean instantiations

One of my talented colleagues discovered an interesting ADF logic bomb which I thought I'd share here. The issue is with the instantiation order of ADF Faces scoped beans in JDev 11g when using Bounded Task Flows embedded as regions in another page.

Regular readers would be familiar that Oracle's ADF solution is built on top of JavaServer Faces (JSF). ADF supports bean scopes such as ViewScope, PageFlowScope and BackingBeanScope on top of the regular JSF ApplicationScope, SessionScope and RequestScope beans. Readers should also be familiar that the beans have a defined life (ie. scope) as detailed in the JDev Web Guide:

(Source: Oracle JDeveloper Web Guide 11.1.2.1 section 5.6 figure 5-11)

In specifically focusing on the life cycle of ADF PageFlowScope and BackingBeanScope beans, the guide states (to paraphrase):

1) A PageFlowScope bean for a Task Flow (either bounded or unbounded) survives for the life of the task flow.

2) A BackingBeanScope bean for a page fragment will survive from when receiving a request from the client and sending a response back.

The implication of this is when we're using Bounded Task Flows (BTFs) based on page fragments, it's typical to have a PageFlowScope bean to accept and carry the parameters of the BTF, and one or more BackingBeanScope beans for each fragment within the BTF.

Sample Application

With this in mind let's explore a simple application that shows this sort of usage in play. You can download the sample application from here.

The application's Unbounded Task Flow (UTF) includes a single SessionScope bean MySessionBean carrying one attribute mySessionBeanString as follows:
public class MySessionBean {

    private String mySessionBeanString = "mySessionBeanValue";

    public MySessionBean() {
        System.out.println("MySessionBean initialized");
    }

    public void setMySessionBeanString(String mySessionBeanString) {
        this.mySessionBeanString = mySessionBeanString;
    }

    public String getMySessionBeanString() {
        return mySessionBeanString;
    }
}
Note the System.out.println on the constructor to tell us when the bean is instantiated.

The UTF also includes a single page MyPage.jspx containing the following code:
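(The original listing hasn't survived in this copy of the post; the following is a minimal sketch of the relevant content of MyPage.jspx based on the description below, with illustrative component ids and an assumed region binding name.)

<af:form id="f1">
  <af:inputText label="MySessionBean String" value="#{mySessionBean.mySessionBeanString}" id="it1"/>
  <af:commandButton text="Page Submit" id="cb1"/>
  <af:region value="#{bindings.FragmentBTF1.regionModel}" id="r1"/>
</af:form>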

In MyPage.jspx note the inputText control to output the SessionScope String from MySessionBean, and a commandButton to submit any changes. Second, note the region containing a call to a BTF entitled FragmentBTF. However let's put the region aside for a moment and talk about the inputText and SessionScope bean.

When this page is first rendered the inputText makes reference to the SessionScope variable. JSF doesn't pre-initialize managed beans; only on first access do they get instantiated. As such, as soon as the inputText is rendered we should see the message from the MySessionBean constructor when it accesses mySessionBeanString via the EL expression:

MySessionBean initialized

If we were to comment out the region, run this page and press the commandButton, we would only see the initialized message once, as the session bean lives for the life of the user's session.

Now let's move on to considering the region and the embedded Bounded Task Flow (BTF) called FragmentBTF.xml. Points of note for the BTF are:

a) The Task Flow binding has its Refresh property = ifNeeded

b) And the BTF expects a parameter entitled btfParameterString, which takes its value from our SessionScope bean's variable:

In terms of the FragmentBTF (as separate from the region/task flow binding) it has the following characteristics:

a) The BTF has its initializer and finalizer set to call a "none" scope TaskFlowUtilsBean to simply print a message when the task flow is initialized and finalized. This will help us to understand when the BTF restarts and terminates.

b) The BTF has one incoming parameter btfParameterString. To store this value the BTF has its own PageFlowScope bean called MyPageFlowBean, with a variable myPageFlowBeanString to carry the parameter value.
public class MyPageFlowBean {

    private String myPageFlowBeanString;

    public MyPageFlowBean() {
        System.out.println("MyPageFlowBean initialized");
    }

    public void setMyPageFlowBeanString(String myPageFlowBeanString) {
        this.myPageFlowBeanString = myPageFlowBeanString;
    }

    public String getMyPageFlowBeanString() {
        return myPageFlowBeanString;
    }
}
Again note the System.out.println to help us understand when the PageFlowScope bean is initialized.

c) The BTF contains a single fragment MyFragment.jsff with the following code:
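(Again the original listing hasn't survived; the following is a minimal sketch of the relevant content of MyFragment.jsff based on the description below, with illustrative ids and an assumed managed bean name for the backing bean.)

<af:panelFormLayout id="pfl1">
  <af:inputText label="MyPageFlowBean String" value="#{pageFlowScope.myPageFlowBean.myPageFlowBeanString}" id="it2"/>
  <af:inputText label="MyBackingBean String" value="#{backingBeanScope.myBackingBean.myBackingBeanString}" id="it3"/>
</af:panelFormLayout>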

Inside the fragment are:

c.1) An inputText to output the current value of MyPageFlowBean.myPageFlowBeanString. Remember this value is indirectly derived from the btfParameterString of the BTF.

c.2) A second inputText to output the value from another bean, this time a BackingScopeBean which we consider next.

d) The BackingBeanScope bean is where we'll see some interesting behaviour. Let's explain its characteristics first:

d.1) The BackingBeanScope bean entitled MyBackingBean is managed by the BTF.

d.2) It is only referenced by the MyFragment.jsff within the BTF, via the inputText above in c.2.

d.3) The BackingBeanScope bean has the following code which includes the usual System.out.println on the constructor:
import javax.el.ELContext;
import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import javax.faces.application.Application;
import javax.faces.context.FacesContext;

public class MyBackingBean {

    private String myBackingBeanString;
    private MyPageFlowBean myPageFlowBean;

    public MyBackingBean() {
        System.out.println("MyBackingBean initialized");

        myPageFlowBean = (MyPageFlowBean)resolveELExpression("#{pageFlowScope.myPageFlowBean}");

        myBackingBeanString = myPageFlowBean.getMyPageFlowBeanString();
    }

    public static Object resolveELExpression(String expression) {
        FacesContext fctx = FacesContext.getCurrentInstance();
        Application app = fctx.getApplication();
        ExpressionFactory elFactory = app.getExpressionFactory();
        ELContext elContext = fctx.getELContext();
        ValueExpression valueExp = elFactory.createValueExpression(elContext, expression, Object.class);
        return valueExp.getValue(elContext);
    }

    public void setMyBackingBeanString(String myBackingBeanString) {
        this.myBackingBeanString = myBackingBeanString;
    }

    public String getMyBackingBeanString() {
        return myBackingBeanString;
    }
}
d.4) It contains a variable myBackingBeanString which is referenced via an EL expression by the inputText explained in c.2.

d.5) However note that the constructor of the bean grabs a reference to the PageFlowScope bean and uses that to access the myPageFlowBeanString value, writing the value to the myBackingBeanString.

While this example is abstract for the purposes of this blog post, it's not uncommon in the context of a BTF for a backing bean to want to access state from the BTF's page flow scope bean. As such the technique is to use the JSF classes to evaluate an EL expression to return the page flow scope bean. This is what the resolveELExpression method in the backing bean does, called via the constructor, with the result held in a backing bean instance variable for its life/scope.

At this point we have all the moving parts of our very small application.

Scenario One - Initialization

Let's run through the sequence of events we expect to occur when this page renders for the first time again, this time including the BTF-region processing as well as the parent page's processing:

1) From earlier we know that when the page first renders we'll see the SessionScope bean's constructor logged as the inputText in MyPage.jspx accesses mySessionBeanString.

MySessionBean initialized

2) Next as the region in MyPage.jspx is rendered, the FragmentBTF is called for the first time and we can see two log messages produced:

MyPageFlowBean initialized
Task Flow initialized


It's interesting we see the PageFlowScope bean instantiated before the Task Flow but this makes sense as the MySessionBean.mySessionBeanString needs to be passed as a parameter to the BTF before the BTF actually starts.

3) As the MyFragment.jsff renders for the first time, we then see the MyBackingBean initialized for the first time:

MyBackingBean initialized

So to summarize at this point by just running the page and doing nothing else, the following log entries will have been shown:

MySessionBean initialized
MyPageFlowBean initialized
Task Flow initialized
MyBackingBean initialized


In turn the page looks like this; note the value from the MySessionBean pushing itself to the MyPageFlowBean and the MyBackingBean:

The order the beans are instantiated and the values down the page all makes logical sense when we've an understanding of how the page is rendered.

Scenario Two - The logic bomb

With the page running we'll now investigate how there's a bomb in our logic.

Up to now, if we've been developing an application based on this structure, we've probably run this page a huge number of times and seen the exact same order as above. One of the problems for developers is that we start and stop our application so many times, we don't use the system like a real user does, where the application is up and running for a long time. This can hide issues from developers.

With our page running, say we want to change the SessionScope bean's value. Easy enough to do, we simply change the value in the mySessionBeanString inputText:

When we press the "Page Submit" commandButton embedded in MyPage.jspx, given our understanding so far we'd expect the following behaviour:

1) As the session scope bean lives for the life of the user's session, we don't expect to see the bean newly instantiated.

2) Because the region's task flow binding refresh property is set to ifNeeded, and the source value of btfParameterString has been updated, we expect the BTF to restart. As the contents of the region are executed, based on our previous understanding logically we should see the following log entries:

Task Flow finalized
MyPageFlowBean initialized
Task Flow initialized
MyBackingBean initialized


(Note the Task Flow finalized message first. This is separate from this discussion, but given the BTF is restarting, the previous BTF instance needs to be finalized first).

Yet the actual log entries we see are:

MyBackingBean initialized
Task Flow finalized
MyPageFlowBean initialized
Task Flow initialized


And the resulting page looks like this:

Something definitely fishy is going on here. In the logs we see the BackingBean is now initialized before the previous Task Flow is finalized and before the PageFlowScope bean is instantiated. In turn on the running page we can see the "fishy" value has made it to the PageFlowScope bean but not the BackingBean. What's going on?

The explanation is simple enough based on 2 rules we've established in this post:

1) Firstly we know beans are only instantiated on access.

2) In returning to the Oracle documentation the scope of a BackingBean is:

"A BackingBeanScope bean for a page fragment will survive from when receiving a request from the client and sending a response back."

With these two rules in mind, when we consider the first scenario, the reason the BackingBean is instantiated after the PageFlowScope bean is because the contents of the BTF fragment are rendered after the BTF is started. As such the access to the BackingBean is *after* the PageFlowScope bean as the fragment hasn't been rendered yet.

With the second scenario, as the fragment is already rendered on the screen, the reason the BackingBean is instantiated before the PageFlowScope bean (and even before the termination and restart of the BTF) is that the incoming request from the user references the BackingBean (possibly writing values back to the bean), causing it to initialize at the beginning of the request as per rule 2 above. Officially "Erk!". Then, as the BackingBean in its constructor references the PageFlowScope bean, it gets a handle on the previous BTF instance's PageFlowScope bean, because the new BTF instance has yet to start and create a new PageFlowScope instance with the new value passed from the caller. That's why we see the old value in the page for myBackingBeanString.

The specific coding mistake in the example above is the retrieval of the PageFlowScope bean in the backing bean's constructor. The solution is that any method of the backing bean that requires a value from the PageFlowScope bean should resolve the PageFlowScope bean each time the value is required, not once in the constructor. If you consider the blog post References to UIComponents in Session-Scope beans by Blake Sullivan you'll see a number of techniques for solving this issue.
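As a minimal sketch of that approach (reusing the resolveELExpression helper shown earlier), the backing bean would drop the lookup from its constructor and instead resolve the page flow scope bean on each access:

public String getMyBackingBeanString() {
    // Resolve the PageFlowScope bean every time the value is needed, so we always
    // see the bean belonging to the current BTF instance rather than a stale
    // reference captured when the backing bean was constructed
    MyPageFlowBean pageFlowBean =
        (MyPageFlowBean)resolveELExpression("#{pageFlowScope.myPageFlowBean}");
    return pageFlowBean.getMyPageFlowBeanString();
}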

Conclusion

Understanding the scope of beans is definitely important for JSF and ADF programming. But the scope of a bean doesn't imply the order of instantiation, and the order of instantiation is not guaranteed so we need to be careful our understanding doesn't make assumptions about when beans will be in/out of scope.

Readers who are familiar with JSF2.0 know that CDI and Annotations will work around these issues. However for ADF programmers CDI for the ADF scoped beans is currently not a choice (though this may change). See the Annotations section of the ADF-JSF2.0 whitepaper from Oracle.

Errata

This blog post was written against JDev 11.1.1.4.0.

Thursday, 13 October 2011

PageFlowScope with Unbounded Task Flows: the magic sauce for multi-browser-tab support in JDeveloper ADF applications

Within JDev 11g+ experienced ADF programmers will be familiar with PageFlowScope beans used by task flows, in particular Bounded Task Flows (BTFs), where they provide the equivalent of session scope for variables for the life of the BTF for a specific user session. Indeed the Oracle documentation says the following about PageFlowScope beans:
Choose this scope if you want the managed bean to be accessible across the activities within a task flow. A managed bean that has a pageFlow scope shares state with pages from the task flow that access it. A managed bean that has a pageFlow scope exists for the life span of the task flow
Source: Oracle Fusion Dev Guide 11.2.1 Section 18.2.4 What You May Need to Know About Memory Scope for Task Flows

Given we know BTFs have a distinct beginning and end for each user session, a "life span" as such, and conversely Unbounded Task Flows (UTFs) live for the life of the application which is nearly forever, it would appear that PageFlowScope beans only apply to BTFs. However PageFlowScope beans provide some magic sauce with UTFs that shouldn't be ignored. Before we can have a look at this magic sauce we need to cover some background on modern browsers.

Multi-tab browsers and the challenge for web applications

Readers will be familiar that over the last several years browsers have increased in sophistication, providing users with more and more features. One such feature is that of tabs, more commonly referred to as multi-tab browsing. Back in the dim dark ages of the Internet (circa 2005?) if users wanted to surf more than one website at a time they needed to open multiple instances of their browser. Typically each browser instance took out a single connection and server-side session (assuming a stateful application) with whichever server they were visiting. If the user had multiple instances open to the same website this resulted in the same amount of connections and sessions.

At some point in time web browsers introduced the support for tabs, allowing within the one browser instance the user to surf multiple websites in separate tabs. I'll take a guess and say the browser authors when introducing this feature had a careful think about how users visit websites, and they realized users when searching for information on websites might spawn several tabs all visiting the same website but each tab viewing different pages within that single website.

So browsers introduced a feature set to give users the ability to search for information even faster, yet the browser vendors also recognized an issue. Potentially users were now relatively free to spawn lots of tabs (and if you're like some users you keep on spawning tabs, never closing any, until you shut down the browser). If the old regime of a connection per tab was followed at the client (browser) side, and a session per connection at the server (website) side, computing resources would be strained.

How to fix? Simple really: per website on the browser side, regardless of the number of tabs open to that website, the browser should share the connections across the tabs. Result? Instant resource saving on the client. In turn on the server side, as there isn't any default way to identify the different tabs when their separate requests hit the server through the same connection, the server too needs to store only one session.

Today from the users' perspective (at least the tech-savvy users) multi-tab browsing is an expected feature, and one that, when not available for whatever reason, causes frustration either with the browser or the website they are using. This implies any web application we build really needs to support (or at least not hinder) this functionality.

What's this got to do with ADF?

The question is probably rhetorical for readers at this point, but what's this got to with ADF?

Traditionally JavaServer Faces (JSF) programmers and ADF programmers have been taught if you want to maintain state (variables) for the life of a user's session you put it in a SessionScope bean. Examples of such variables include the time the user first accessed the system, the customer ID of the current customer that the user is working with on the phone, or the items in a shopping cart. Definitely these should go in SessionScope.

Or should they?

With modern browsers what should happen with these variables when the user spawns two tabs to our application sharing the same connection/session? Looking at our previous examples it's easy to agree the time the user first accessed the system would be one and the same across both tabs. But what about our other examples?

For our customer ID example, in some applications you might reasonably want two different tabs to show two different pages of information for the same customer, so SessionScope seems reasonable enough. But alternatively imagine you're a call centre operator, taking a call from one customer while finishing recording data about the previous customer. It would be mighty handy to have two browser tabs to do this. As a result the separate tabs really require their own SessionScope customer ID. How to do that?

The question gets murkier when considering shopping carts. For an Amazon user having two separate carts on two separate browser tabs would seem undesirable. But let's take another call centre example where an operator is taking a phone call from a customer wanting to place two separate orders. Things get tricky if the customer is moving items between the orders, so different tabs supporting different orders could be desirable. Again the vanilla SessionScope bean won't solve this.

So we can see that sometimes, depending on the needs of the application, we need to provision session state for separate browser tabs. At the moment JSF 1.x/2.0 has no inbuilt solution for this problem (there's been much discussion about including a new ConversationScope bean in the past), but as you can probably guess a PageFlowScope at the UTF level in ADF solves this problem. For every browser tab opening a page in the UTF that references a managed PageFlowScope bean, a separate instance of the bean will be created.

The hint of the secret life of the PageFlowScope is revealed in Section 5.7 Passing Values Between Pages of the JDev Web Guide 11.1.2 where it states:
The ADF Faces pageFlowScope scope makes it easier to pass values from one page to another, thus enabling you to develop master-detail pages more easily. Values added to the pageFlowScope scope automatically continue to be available as the user navigates from one page to another, even if you use a redirect directive. But unlike session scope, these values are visible only in the current page flow or process. If the user opens a new window and starts navigating, that series of windows will have its own process. Values stored in each window remain independent.
Specifically note the last two sentences.
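To make this concrete, a managed PageFlowScope bean at the UTF level is simply registered in the application's adfc-config.xml; a minimal sketch (the bean name and class here are illustrative only) looks like this:

<managed-bean>
  <managed-bean-name>customerBean</managed-bean-name>
  <managed-bean-class>view.CustomerBean</managed-bean-class>
  <managed-bean-scope>pageFlow</managed-bean-scope>
</managed-bean>

Each browser tab that opens a page in the UTF and references #{pageFlowScope.customerBean} then gets its own instance of the bean.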

How does ADF technically solve identifying the separate tabs?

Earlier on we mentioned that from the server's point of view, because multi-browser tabs share the same connection with the server, by default the server has no inherent mechanism to differentiate the separate browser tabs and therefore no mechanism to know when to spawn separate PageFlowScope beans. So how does ADF actually technically solve identifying the separate tabs?

Imagine you've created an ADF application to lodge infringements. Some infringements take seconds to fill out and complete, while others take time to gather the data, requiring multiple tabs so more than one infringement can be entered at a time.

For such an ADF application we could publish the service via a link on our portal such as:

http://www.wewantyourmoney.com/infringements/ticket

In this case ticket represents a JSPX page in the UTF of our application.

On accessing the URL for the first time, regular ADF users will know that the server, in responding with the required page, also replaces the URL with something like the following:

http://www.wewantyourmoney.com/infringements/ticket?_adf.ctrl-state=p1zuym5lv_3

This parameter and its value are also buried in the HTML source to be used on the next request. Also inside the page source is a form parameter Adf-Window-Id with a unique value assigned by the server.

When the page is submitted, the hidden _adf.ctrl-state and Adf-Window-Id parameters gained from the previous server response are sent along with it, giving ADF a mechanism for tracking the current session and the current browser window (tab) instance.
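If you want to watch these values for yourself, one simple approach is a servlet filter that logs them for every incoming request. The following is only an illustrative sketch (the class and package names are my own invention, and the filter would still need to be declared and mapped in the application's web.xml):

package view; // hypothetical package

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class WindowTrackingLogFilter implements Filter {

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Log the two values ADF uses to correlate the session state and the browser window
        String ctrlState = request.getParameter("_adf.ctrl-state");
        String windowId = request.getParameter("Adf-Window-Id");
        System.out.println("_adf.ctrl-state=" + ctrlState + " Adf-Window-Id=" + windowId);
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}

With two tabs open against the same application you would expect each tab's requests to log different values, which is exactly the differentiation ADF relies on.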

(Side note: Behind the scenes the server is smart enough to check the session parameters against the previous known connection/session to stop intruders impersonating another user's session ... you can test this by intercepting the next request before it goes out and changing the _adf.ctrl-state parameter before it hits the server. ADF will complain displaying the following error message "ADFC-12000: State ID in request is invalid for the current session.")

If the server receives another request for the same "naked" URL from the same connection/session, ADF simply assumes a new tab instance, spawns a separate PageFlowScope bean instance, and returns separate _adf.ctrl-state and Adf-Window-Id values in the response. Any subsequent requests to the server from separate multiple tabs for the same user connection are therefore easily separated and the correct PageFlowScope bean instance used.

Alternatively, if the user tries to trick the server by copying a URL containing the _adf.ctrl-state variable from another tab, but obviously without a payload containing the _adf.ctrl-state form parameter and the Adf-Window-Id parameter, again ADF is smart enough to detect this as a new browser tab and spawns a new PageFlowScope bean. This piece of logic is important because it's possible for users to bookmark the current page's URL, parameter and all, and attempt to come back to it later in a new tab. We wouldn't want ADF to be tricked by this simple and common use case into thinking the user is in the same original tab when in fact they're launching the application from an old bookmark.

Finally it should be noted that if the user's session times out or the user logs out, the PageFlowScope bean falls out of scope, and any reference to the PageFlowScope thereafter will result in a new instance of the bean being created.

Demonstration

The following link provides a small demo application from JDev 11.1.2. To set this application up you need to:

a) unzip it and open it in JDev
b) in your integrated WLS server create a user account which you will use to log in to the application

The application is comprised of 4 pages:

a) an unauthenticated Splash page to start the application
b) 2 separate authenticated pages named FirstPage and SecondPage that will demonstrate the bean scopes during an authenticated session
c) an unauthenticated ExitPage which will be called when the user logs out of SecondPage

From here run the Splash.jsf page. Your browser will eventually open with the following URL:

http://localhost:7101/MultiBrowserTabExample/faces/Splash

Select the Go First Page button and enter your credentials at the login screen, upon which you'll see First Page:

The two fields have default values gathered from two separate beans, one a SessionScope bean and the other a PageFlowScope bean. Note in the JDev log window you'll see log entries from the constructors of both beans, implying they were just instantiated to show the values on the page, and thus why the default values are shown.

Selecting the Go Second Page button takes us to the Second Page where we have the option to change the values:

As an example let's change the SessionScope value to Alpha and the PageFlowScope value to Beta:

On returning to the first page through the associated button we see that the Alpha and Beta values have been successfully carried across requests.

Now open a new browser tab to the original Splash page using the following URL:

http://localhost:7101/MultiBrowserTabExample/faces/Splash

To show this diagrammatically, in the following set of pictures I've put the original tab on the left of the image and the new tab on the right, so both tabs can be shown at the same time. As such we now have:

In the second tab if we then select the Go First Page button we bypass the login screen automatically, as the user is already logged in, and we arrive on the First Page as follows:

In the JDev log window we see a new instance of the PageFlowScope bean has been created, but not the SessionScope bean. This backs up what is shown in the new second tab because we can see that the SessionScope value is shared across both tabs, but the PageFlowScope value isn't and we've reverted to the default value (provided through the new PageFlowScope instance).

From here in the second tab we can navigate to the Second Page and update the values of the SessionScope and PageFlowScope beans to Charlie and Delta respectively:

In the second tab if we then return to the FirstPage we see that it maintains the Charlie and Delta values:

...and more importantly in the first tab if we then navigate to the Second Page we see:

In this picture we can see that the SessionScope Charlie value has been shared across both tabs, but the first tab has retained its original Beta value for its PageFlowScope bean, showing that the PageFlowScopes are indeed separate across browser tabs.

To conclude the examples, if in the first tab we select the Logout button we automatically move to the ExitPage and we see:

This shows both the SessionScope and PageFlowScope beans resetting back to their original default values. The JDev log window also logs the constructor calls of both beans, verifying that the old instances have been discarded and 2 new instances created.

Finally if we return to the second tab, which is still sitting on the First Page, any action results in ADF throwing an exception "java.lang.IllegalStateException: No window for windowId:w2". To be honest I expected an error here because behind the scenes the user has logged out, but this error message looks less than useful. Presumably we need to take care of this specific error in some manner, probably with a task flow Exception handler, but that is beyond the scope of this post.

Not all things are born equal - final note on SessionScope vs PageFlowScope

Don't get confused about when to use SessionScope and PageFlowScope beans. Not everything you would traditionally have put in SessionScope now needs to go in a UTF PageFlowScope bean. How many frequent flyer miles the user has left, the user's preferences and their birthday are good candidates for SessionScope. But if across tabs you will allow the user to test separate values for projecting sales figures, keep a counter of the number of steps completed in submitting separate expense reports, or even track the tiles placed in a game of tic tac toe, PageFlowScope will be more appropriate.

Tuesday, 11 October 2011

ADF take aways from Oracle Open World 2011

With the huge number of sessions at Oracle Open World, it's often hard to find the little gems of information amongst all the marketing. This is true of ADF like all other technologies at the conference; there's simply a lot of information to digest and filter. Luckily Oracle publishes the presentation PPTs afterwards, and it's possible to find a jewel or two in all the content with some careful searching.

For the ADF developers among us, this blog post attempts to summarize some of the main ADF takeaways from Oracle Open World 2011. Please remember this is my summary, not Oracle’s (I am not an Oracle employee), and Oracle publishes all of this content under the Safe Harbor statement which means they cannot be held to anything they published.

The links in this post are not guaranteed to stay up forever, as Oracle may remove them in the near future. If you're interested in reading the presentations, I suggest you download them now.

Finally, I apologize for some of the clunky grammar and phrasing in this post; I wrote it on the plane back to Australia with the usual jetlag that fogs the brain.

ADF Mobile

Of the large announcements at Oracle Open World 2011, the soon-to-be-released (2012) Mobile edition of ADF was the most significant in the ADF space. Some key points of the new platform are that it supports both iOS and Android, runs on the device with a mini JVM, and uses PhoneGap to allow the native app to access the device's native facilities.

For me the most telling part was the architecture diagram from the Develop Mobile Apps for iOS, Android, and More: Converging Web and Native Applications presentation by Oracle Corporation’s Joe Huang, Denis Tyrell, and Srini India:

Data Visualization Controls

Katarina Obradovic-Sarkic, Dana Singleterry and Jairam Ramanathan from Oracle included screenshots of upcoming DVT components in their presentation Building Visually Appealing Web 2.0 Data DashBoards. First we see a new Network Diagrammer:

As can be seen, the component demonstrates the relationships between disparate nodes. This is incredibly useful for visualizing relationships in data. Another screenshot shows a different data relationship structure:

In terms of graphs Oracle is looking at a Treemap graph:

…and a Sunburst graph:

...both useful for showing hierarchical data visually. Of all the DVT controls the Timeline graph excites me most, something I’ve asked for in the past:

However I must clearly stress to readers that these DVT controls are not in the current 11.1.2.1.0 release, and under Oracle's safe harbor statement Oracle is not guaranteeing they will ever be released (but fingers crossed anyway, huh?).

Maven integration

As the ADF EMG moderator I'm involved in a lot of discussions in the community about the IDE and the framework. One hot topic is JDeveloper's Maven support. 11.1.2.0.0 introduced the first cut of Maven support for the IDE, as discussed in Oracle's Susan Duncan's presentation Team Productivity with Maven, Hudson and Team Productivity Center. This first slide shows the current Maven support:

Of more interest are the planned Maven features for 12c, which not only tell me Oracle is committed to Maven support, but also that there are definite limitations in the current implementation:

Most important here for me are the first 2 bullet points, which mean I won't recommend customers work with Maven until Oracle makes these features available. Don't get me wrong though; a couple of years back there was no Maven support at all, and it's great Oracle is working to fill that gap completely.

What can Fusion Applications teach us about ADF?

Unlike OOW10, this year at Oracle Open World there were considerably more Fusion Applications demonstrations and presentations. This has been a boon, as previously we'd seen a lot of demos of dashboard-like screens that, while pretty, don't show us where the real work occurs for users. Fatema Madraswala from PwC and Rob Watson from Oracle included screenshots of the Fusion Applications Talent Management system (the very first Fusion go-live case study):

It’s curious to me that while Oracle has put a lot of effort into communicating the User Experience design effort put into Fusion Applications, then we see a screen that looks Oracle-Forms like, especially with it’s tabbed interface. In turn the worksheet at the bottom looks cluttered with buttons and fields. Yet with respect designing user interfaces for complex business systems is surely not easy.

I recommend ADF developers search out as many Fusion Applications screenshots as possible, as they reveal insights into how to build the UI and what is and isn't possible.

What about E-Business Suite?

EBS customers might feel the whole ADF/SOA bandwagon is passing them by, what with the focus on Fusion Applications. Yet this year saw presentations tailored to cover integration points with EBS. I must admit I can't really comment on the quality of the solutions as I have no direct experience with EBS, so I'll leave experienced readers to make their own assessment. Check out the presentation entitled Extending Oracle E-Business Suite with Oracle ADF and Oracle SOA Suite from Oracle's Veshaal Singh, Mark Nelson and Tanya Williams.

MetaData Services

As an extension to the Fusion Applications demos, I'm detecting more down-and-dirty technical presentations on MetaData Services (MDS), where the framework can support personalizations and customizations. Gangadhar Konduri and a fellow Oracle colleague discussed the theory and demonstrated customizing a Fusion Applications module, with a focus on what technical people need to know. I must admit in the past I've been a little skeptical of MDS and the like, not for its implementation but for the lack of information around on how to maintain and work with it from a developer/administrator point of view. However I'll need to step back and reassess that opinion. You can read more in Gangadhar's Managing Customizations and Personalization in Oracle ADF MetaData Services.

For ADF Experts

For the ADF experts who feel many of the presentations aren't aimed at them, it's well worth catching one of Steven Davelaar's presentations. Steven, who is the JHeadstart Product Manager at Oracle, extends and pushes the ADF framework to its limits. His presentations often include large amounts of code where I discover new properties and techniques way beyond my current level of expertise. This year Steven presented Building Highly Reusable ADF Task Flows and Empowering Multitasking with an Oracle ADF UI Powerhouse for the ADF EMG (great title Steven ;-).

ADF Tuning

From my own perspective one of the most important presentations I attended was Oracle's Duncan Mills's ADF – Real World Performance Tuning presentation. As I now have several clients with production-level ADF applications, my focus has moved away from the basics of creating ADF applications to architecture and performance. Duncan's presentation aggregated a wide range of tuning hints into an easily digestible guide; highly valuable.

FMW Roadmaps

In a separate presentation entitled Certified Configurations of Oracle ExaLogic, Oracle Fusion Middleware, BI and Oracle Fusion Apps by Pavana Jain and Deborah Thompson from Oracle Corp, the future roadmap for FMW releases was revealed. Readers are reminded the safe harbor statement means Oracle doesn’t have to stick to what they present, so take the slides as guidelines only.

The first slide shows the approximate dates of each version:

The second slide reveals which 11g FMW products will be included in each release:

Some readers might find it curious that the 11g 11.1.1.X.0 series continues to at least 11.1.1.7.0 while there is already an 11.1.2.0.0 release of JDev. My understanding is this is occurring because Fusion Apps will continue on the 11.1.1.X.0 series for some time yet, thus extending the life of that branch.

Finally the third slide shows the same for the 12c FMW products:

Oh and the ADF EMG had a great event too

The ADF EMG also had a "super" Super User Group Sunday, but people are probably a little sick of me talking about it, so I'll just push you to a link instead.