
Tuesday, 23 June 2009

Stress & load testing web applications (even ADF & Apex) using Apache JMeter

A couple of years ago I presented Take a load off! Load testing your Oracle Apex or JDeveloper web applications at OOW and AUSOUG. I can't recommend enough the importance of stress testing your web applications, it's saved my bacon a number of times. Frequently as developers, we develop under a single user (developer) model where concurrency issues are easily avoided. When our programs hit production, with just 1 more user, suddenly our programs grind to a halt or fall over in bizarre places. Result, pie on developers' faces, users' faith in new technologies destroyed, and general gnashing of teeth all round. Some simple stress and load tests can head off problems way before they hit production.

(For the remainder of this post I'll treat "stress testing" and "load testing" as the same thing, though strictly speaking one tests whether your application falls over, and the other how fast it responds under load.)
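
Before firing up any tool, it's worth seeing how small the core idea is. The following Python sketch is purely illustrative (all names are mine, and it spins up a throwaway local server as a stand-in target); JMeter does the same job with far more fidelity - cookies, think times, assertions, graphs and so on:

```python
# A toy illustration of the core of any load test: fire concurrent HTTP
# requests and measure the responses.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Throwaway local server so the sketch is self-contained.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_port

def hit(target):
    """One simulated user request: return (HTTP status, elapsed seconds)."""
    start = time.perf_counter()
    with urllib.request.urlopen(target) as resp:
        return resp.status, time.perf_counter() - start

# 5 "concurrent users" issuing 20 requests between them.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(hit, [url] * 20))
server.shutdown()

failures = sum(1 for status, _ in results if status != 200)
print("requests: %d, failures: %d, worst response: %.3fs"
      % (len(results), failures, max(t for _, t in results)))
```

Even this toy version shows the two numbers you care about: did anything fail, and how slow did the worst response get under concurrency.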

So how to go about stress testing a web application?

There are numerous tools available to stress test web applications, both paid and free. This post will look at the setup and use of Apache JMeter, my tool of choice (mainly because it's free!), to undertake a very simple stress test. Apache JMeter is available here, version 2.3.3 at time of writing.

On starting JMeter (<jmeter-home>/bin/jmeter.bat on Windows) you'll see the following:


Creating a Thread Group

From here what we want to do is set up a Thread Group that simulates a number of users (concurrent sessions), done by right clicking the Test Plan node and selecting the Add -> Thread Group option. This results in:


As you can see the Thread Group allows us to set a number of threads to simulate concurrent users/sessions, loop through tests and more.

Creating HTTP Requests

From here we can create a number of HTTP Requests (Test Plan node right click -> Add -> Sampler -> HTTP Request) to simulate each HTTP request operation (GET, POST etc), HTTP headers, payloads and more. However in a standard user session between server and browser there can be a huge array of these requests, and configuring each HTTP request by hand within JMeter would be a major pain.

Configuring the HTTP Proxy Server


However there's an easier way. Apache JMeter can work as a proxy between your browser and server and record a user's HTTP session, namely the individual HTTP requests, that can be re-played in a JMeter Thread Group later.

To set this up instead right click the Workbench node, Add -> Non-Test Elements -> HTTP Proxy Server:


To configure the HTTP Proxy Server do the following:

* Port – set to a number that won't clash with an existing HTTP server on your PC (say 8085)
* Target Controller – set to "Test Plan > Thread Group". When the proxy server records the HTTP session between your browser and server, this setting implies the HTTP requests will be recorded against the Thread Group you created earlier, so we can reuse them later
* URL Patterns to include – a regular-expression-based string that tells the proxy server which URLs to record and which to ignore. To capture everything set it to .* (dot star). Be warned, however, that during recording, if you use your browser for anything other than accessing the server you wish to stress test, JMeter will capture that traffic too. This includes periodic refreshes by web applications such as Gmail or Google Docs that you don't even initiate. I'm pretty sure that when replaying your stress test, Google would prefer you not to stress test their infrastructure for them; stick to your own for now ;-)
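
To get a feel for the URL patterns field, the filtering the proxy applies is conceptually just regular-expression matching against each requested URL. A rough Python sketch of the idea (the patterns and URLs below are invented for illustration, and JMeter's actual matching rules have their own subtleties):

```python
import re

# Hypothetical include patterns: record only our own application's traffic,
# not stray background requests the browser makes (Gmail polling etc.).
include_patterns = [r".*/myapp/.*", r".*\.jsp.*"]

def should_record(url):
    """Record a request only if its URL matches one of the include patterns."""
    return any(re.match(pattern, url) for pattern in include_patterns)

print(should_record("http://myserver:8888/myapp/login.jsp"))  # our app: recorded
print(should_record("http://mail.google.com/mail/channel"))   # background noise: ignored
```

Patterns tighter than .* are well worth the effort; they keep the recorded Thread Group free of traffic you never meant to replay.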

The end HTTP Proxy Server setting will look something like this:


You'll note the HTTP Proxy Server has a Start button. We can't use this just yet.

Configuring your Browser

In order for the JMeter HTTP Proxy Server to capture the traffic between your server and browser, you need to make some changes to your browser's configuration. I'm assuming you're using Firefox 3 in the following example, but approximately the same steps are needed for Internet Explorer.

Under Firefox open the Tools -> Options menu, then Advanced icon, Network tab, Settings button which will open the Connection Settings dialog.

In the Connection Settings dialog set the following:

* Select the Manual proxy configuration radio button
* HTTP Proxy – localhost
* Port – 8085 as per the JMeter HTTP Proxy Server option we set earlier
* No Proxy for – ensure that localhost and 127.0.0.1 aren't in the exclusion list


The above setup assumes that the server you want to access is accessible without a further external proxy.

Recording your HTTP session

Once the browser's proxy is set up, to record a session between the browser and server do the following:

1) In Apache JMeter hit the Start button on the HTTP Proxy Server page
2) In your browser enter the URL of the first page in the application you want to stress test

Thereafter as you navigate your web application, enter data and so on, JMeter will faithfully record each HTTP request between the browser and server against your Thread Group. This may not be immediately obvious, but expand the Thread Group and you'll see each HTTP request made from the browser to the server:


As can be seen, even visiting one web page can generate a huge amount of traffic. Make sure you stop recording the HTTP session by selecting the Stop button on the JMeter HTTP Proxy Server page.

Configuring the Thread Group for replay

Once you've recorded the session in the Thread Group there are a couple of extra things we need to achieve.

For web applications that use cookies and session IDs to track each unique user session (JDeveloper's ADF uses a JSESSIONID cookie for this), we cannot simply replay the exact recorded HTTP request sequence against the server through JMeter, as the recorded session ID is pegged to the recorded session, not the upcoming stress test sessions.

To solve this in JMeter right click the Thread Group -> Add -> Config Element -> HTTP Cookie Manager. This will be added as the last element to the Thread Group. I usually move it to the top of the tree:


Next we need to configure the Thread Group to show us the results of the stress test. There are a number of different ways to do this, from graphing the responses to showing the raw HTTP responses. In this post we'll take the latter option.

Right click the Thread Group -> Add -> Listener -> View Results in Tree, which will add a View Results in Tree node to the end of the Thread Group:


Finally save the Thread Group by selecting it in the node tree, then File -> Save.

Running the Thread Group

To commence your first stress test run, it's best to leave the number of spawned sessions at 1, just to see that the overall test works in its most basic form. The default Thread Group number of threads is 1, so no change is needed.

To run the test, simply select the Run menu -> Start. On running the Thread Group, you'll see a little box at the top right of JMeter that tells you if it's still running, and the number of tests remaining vs the total number of tests:


Once the tests are complete, this indicator will grey out.

We can now visit the View Results Tree:


This shows the HTTP requests that were sent out, and on selecting an individual request you see the raw HTTP request and the actual response. You'll note the small green triangles showing a successful HTTP 200 result. If different HTTP errors occur the triangles show different colours. Also remember that sometimes application errors don't percolate up to the HTTP layer in your web application, so you should check your application's logs too (in the case of a JEE application, this will be your container's internal logs).

Running a Stress Test

The obvious step from here is to change the Thread Group number of threads to a higher number.

From here take time out to explore the other features in JMeter. It includes a wide range of features that in particular make it useful for regression testing.

Caveats

Firstly, remember when doing this you're not only stress testing your application, you're stress testing a server, and potentially stress testing databases, your networks and so on. Therefore you can have an effect on anybody sharing those resources. "Hard core" stress tests should be run on separate infrastructure, after hours, aiming for as little impact on those around you as possible!

Also keep in mind, besides seeing whether your application falls over at 2 users, 10 users or 100 users, which is an important test, try to be realistic about your stress tests. Stress testing your brand-new application to 1 million concurrent users is probably not realistic. How many concurrent user requests do you really expect, and what response times do you need? Normally when I ask managers this question they'll answer, "oh we have 1000 concurrent users, the application must support that many at any one time". However what they really mean is the application has 1000 users, potentially all logged into the application (ie. with sessions) at the same time, but not necessarily all hitting the server with HTTP requests at any one time.
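
A back-of-the-envelope calculation makes that distinction concrete. With some hypothetical numbers (1000 logged-in sessions, each user pausing around 30 seconds between requests, each request taking about half a second to serve), the average number of requests actually in flight is far smaller than 1000:

```python
# Rough sizing with hypothetical numbers: of "1000 concurrent users",
# how many are actually hitting the server at any one instant?
logged_in_sessions = 1000
think_time = 30.0    # seconds a user spends reading/typing between requests
service_time = 0.5   # seconds the server takes to answer one request

# Each session occupies the server for service_time out of every
# (think_time + service_time) seconds, so on average:
in_flight = logged_in_sessions * service_time / (think_time + service_time)
print("average concurrent requests: %.1f" % in_flight)  # roughly 16, not 1000
```

So a realistic Thread Group for those "1000 concurrent users" may only need a few dozen threads; the peak is higher, but nowhere near the full session count.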

(Later note: for readers interested in specifically testing JDeveloper's ADF, see this more recent post).

Thursday, 19 June 2008

JDev: why is Subversion preferred over CVS?

Thanks to Oracle's Susan Duncan at the ODTUG conference, I was reminded today why Subversion is the file version control of choice over CVS for JDeveloper. One of the main advantages is atomic commits.

As many JDeveloper developers will be aware, on making changes to something like an ADF Entity Object (EO) property via a wizard within an ADF project, a change may in fact result in modification of several files behind the scenes. When the developer decides to commit their changes to the file version control repository, the tool must submit all the files that changed for the ADF Entity Object to the repository for other developers to use.

However imagine the scenario where during the commit/submit of say 4 files representing the one ADF EO, the operation is successful for the first 2 files, but fails for the 3rd and 4th. We now have code within our repository that is inconsistent, some representing the earlier version of the EO, and other files representing our new version. This is going to cause major headaches for other developers who download the latest code changes from the repository, and get a mix of new and old files representing the one EO. In particular JDeveloper will probably not be able to handle this situation as there will be inconsistencies between the files. I must note this scenario is not particular to just JDeveloper, it can be an issue for all sorts of environments and is a common file version control system issue.

Unfortunately this issue is a real problem for CVS, but CVS's successor Subversion solves this through the use of atomic commits.

An atomic commit ensures that on a commit/submit, either all the files are committed on success, or on any failure all the changes are rolled back. With this feature we can guarantee the complete success of the overall action, or its complete failure, not some combination thereof.
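
The all-or-nothing behaviour can be sketched in a few lines. The class, file names and failure simulation below are all invented for illustration; this is the concept, not Subversion's implementation:

```python
# Toy illustration of an atomic commit: either every file version is
# recorded in the repository, or none are.
class Repository:
    def __init__(self):
        self.files = {}          # filename -> committed content

    def commit(self, changes):
        """Apply all changes, or none: stage first, then swap in atomically."""
        staged = dict(self.files)
        for name, content in changes.items():
            if content is None:                       # simulate a failure mid-commit
                raise IOError("failed writing %s" % name)
            staged[name] = content
        self.files = staged      # only reached if every file succeeded

repo = Repository()
repo.commit({"Employee.xml": "v2", "EmployeeImpl.java": "v2"})      # succeeds
try:
    repo.commit({"Employee.xml": "v3", "EmployeeImpl.java": None})  # 2nd file fails...
except IOError:
    pass
print(repo.files)  # ...so Employee.xml is still at v2, never a half-committed v3
```

In the non-atomic (CVS-style) failure mode, the repository would be left holding Employee.xml at v3 alongside EmployeeImpl.java at v2 - exactly the inconsistent-EO scenario described above.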

Thus this is why Subversion is a common recommendation over CVS for JDeveloper sites, and JDeveloper adopters should strongly consider picking Subversion for this reason.

To be clear, readers should not discount *other* file version control systems besides CVS and Subversion. You should in fact check if your tool of choice supports atomic commits among other features. From a quick Google I discovered there is a great site Better SCM that describes and compares the features of many file version control systems available, including atomic commits.

Monday, 7 April 2008

JDev ADF Code Reuse Facilities

Good programmers reuse code. Good frameworks promote code reuse. Oracle's ADF promotes code reuse through many different mechanisms, both in business services and the developed user interfaces.

This is a brief post, mostly a brainstorm for a current client, on the code reuse facilities available within JDev's ADF:

ADF Business Components
  • Custom Business Components Framework
  • Entity Objects - centralised validation, security, default create logic etc
  • View Objects - centralised SQL queries shared among pages
  • (11g+) View Object View Criteria - reusable VO SQL query predicates
  • Domains - reusable data types with own validation
ADF Faces/RC
  • Validators and Converters
  • (11g+) Task Flows
  • (11g+) Declarative Components
  • (11g+) Page Fragments
  • (11g+) Page Templates
  • Skins
Readers are encouraged to obtain the JDev 11g ADF Guides and read sections "31 Reusing Applications Components" and "35.7 Working with Libraries of Reusable Business Components".

Monday, 25 February 2008

Web Service Development Practicalities

With a current client we've attempted to capture larger issues and practical development ideas beyond just the technical ABCs on implementing web services. We've published it here to hopefully be useful to the casual reader.

The following contains:

1. A brain dump of issues experienced in the past on web service projects that can blow out project delivery, affect the quality of the technical solution, and just plain frustrate users, developers and project managers alike. As web service projects bring in a 3rd party to supply part of the service, there appears to be an "error multiplier" (much like the military's concept of a "force multiplier") meaning that there is a greater chance of problems occurring.

2. A discussion on small ideas to help reduce the pain and build into your project plan and development beyond just building programs to consume/publish web services.

As this is a brain dump, the following is certainly not a definitive guide, nor an overly well structured or articulated article. In other words your mileage may vary.

Note that the discussion bends towards development from the Oracle database, with mention of transactions, triggers, PL/SQL routines and the like. The discussion is also more considerate of the consumer, rather than the publisher, of web services - though both parties can take value in reading the points below to consider the other party's issues in using web services.

If you have anything to add, or agree or disagree with any of the following points, we'd appreciate hearing your experiences and ideas.

Development Issues

Documentation - 3rd parties publishing web services will often supply documentation on the web services to the new consumer, including descriptions of the WSDL files, their URL locations, the SOAP XML payloads and the business processes that result. New development teams should take care to confirm that the documentation supplied matches the actual services published; if they differ, it's an early indicator that the web service and the external organisation are off the rails.

Network Connectivity - if you've ever sat at an organisation where you're frequently yelling at the network administrator that "I can't get to Google" or the XYZ sub-domain isn't accessible, chances are you're sitting on an unreliable network. Such network issues will become exasperating in a web services development project as you try and work out what's gone wrong this time, or are waiting once again for the network to come up.

Server Connectivity - as an extension of network connectivity, server connectivity and stability of the Application Server that publishes the web services is essential. If the web services reside on an App Server, Operating System or hardware box that goes up and down on a daily basis, it's pretty much guaranteed to go down during development.

Test Environment Verification - a good test web service needs to provide facilities beyond that of the production web service for development purposes, such that you can verify your transactions. Without this you may need to manually verify the transactions, or god-forbid actually phone somebody and check the results.

Test Environment Stability - it's common to provide a test web service environment in addition to the production web service servers. However there is a difference between providing a test environment for your own testing and development, and providing a test environment for other parties to test with; these should not be the same server. If you're currently using a published test web service from an organisation that is using that test service to do their own testing and development on, chances are you'll see that server go up and down, slow to a crawl, time out, change its functionality, report garbage data, report no data, or the web service APIs will change.

Firewalls - typically at each end a firewall will exist between the consumer and publisher, and the firewalls will need to be configured to let web service traffic through. This may be both IP and Port blocking. As soon as you find out what needs to be configured in the development and test environments, request that the changes be made for the production environment so this isn't a roadblock for your production install. Because of red-tape and security constraints at your organisation and the web service publishers, this may be a long fight.

Solution Considerations and Practical Development

Error Handling - web service transmissions can fail at a number of different points and consideration on how to handle the error conditions needs to be applied. This may require flagging the unsent data and attempting to "resend" at a later time.

After a number of failed attempts for 1 or more messages, the system should log the error in such a fashion that a human will be notified (via email, Oracle OEM logs etc). Once failed, the system should stop attempting to resend and not automatically restart; instead it must be manually restarted. This requires appropriate procedures for operations staff.
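
The resend policy above can be sketched as follows. The function names, retry limit and failure simulation are all invented for illustration; in the Oracle case this logic would more likely live in a PL/SQL scheduled job:

```python
# Sketch of the resend policy: retry a failed message a few times,
# then stop and flag it for a human rather than retrying forever.
MAX_ATTEMPTS = 3

def deliver(message, send, notify_operator):
    """Try to send; after MAX_ATTEMPTS failures, notify a human and give up
    until an operator manually restarts the queue."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            send(message)
            return True
        except IOError:
            pass                 # transient failure; try again
    notify_operator("message %r failed %d times" % (message, MAX_ATTEMPTS))
    return False

notifications = []
def flaky_send(msg):
    raise IOError("service down")          # simulate a dead external service

ok = deliver("invoice-42", flaky_send, notifications.append)
print(ok, notifications)
```

The key design point is the hard stop: a queue that silently retries forever hides the outage from the people who need to act on it.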

If the publishing server has been down for considerable time such that a number of waiting messages have accumulated, on the external service becoming available consideration should be given to sending only a batch of the waiting messages at a time, so as not to flood the external server and network (and bring it down again via an accidental DoS attack).

If your system fails after transmitting its payload, the transaction needs to be written in such a manner that it doesn't roll back to a point where it thinks the message was not sent and erroneously sends it again. Considered use of PL/SQL autonomous transactions will take care of this.

Soap Failures - the SOAP web service protocol has an error reporting "fault" mechanism. Any custom code you deliver needs to be able to detect what is an expected response and what is an unexpected one, handle known and unknown faults, and log them appropriately with as much detail as possible for debugging purposes. Though the SOAP protocol defines a number of different fault responses and mechanisms to deal with these, as web services may be hand-crafted solutions you may see totally arbitrary error handling capabilities.
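
Detecting a SOAP 1.1 fault in a response is straightforward, even with just a standard XML parser. A minimal Python sketch (the envelope below is a typical hand-written fault response, not from any particular service):

```python
# Minimal SOAP 1.1 fault detection using only the standard library.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

response = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Server</faultcode>
      <faultstring>Order service unavailable</faultstring>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>"""

def extract_fault(xml_text):
    """Return (faultcode, faultstring) if the body contains a Fault, else None."""
    root = ET.fromstring(xml_text)
    fault = root.find(".//{%s}Fault" % SOAP_NS)
    if fault is None:
        return None
    # In SOAP 1.1 the faultcode/faultstring children are unqualified.
    return (fault.findtext("faultcode"), fault.findtext("faultstring"))

print(extract_fault(response))
```

Whatever your consuming language, the pattern is the same: check for a Fault element before assuming the body is a business response, and log the full code and string when one appears.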

Network and Server Latency - the publishing server's response time can vary depending on network latency and server load. As such, any solution should carefully avoid communicating with the external server within the same transaction that a human is part of; the wait time can become infuriating for the user. Instead, write the message to be sent to a separate table/data structure and commit, with a separate independent process periodically searching for new records to send.

In particular be mindful of having database table triggers that call the web service routines to send messages. If a user undertakes DML on the table the operation may hang until the web service call is complete.
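
The decoupling described above can be sketched like so. In the Oracle case the outbox would be a table and the poller a scheduled job; this toy uses a list, and every name is invented for illustration:

```python
# Sketch of decoupling the user's transaction from the web service call:
# user transactions only enqueue; a separate poller does the slow send.
outbox = []                      # stands in for the queue table

def place_order(order):
    """Fast path the user waits on: record the work, don't call the service."""
    outbox.append({"order": order, "sent": False})
    # ...commit happens here; the user gets control back immediately.

def poll_and_send(send):
    """Slow path, run periodically by an independent process."""
    for msg in outbox:
        if not msg["sent"]:
            send(msg["order"])
            msg["sent"] = True

delivered = []
place_order("widget x 3")
place_order("gadget x 1")
poll_and_send(delivered.append)  # the user never waited on this call
print(delivered)
```

This is also the fix for the trigger problem: the trigger (or application code) only inserts into the outbox; nothing in the user's DML path ever blocks on the external service.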

Test Utilities - as web services can fail at many different areas (connection, timeout, payload errors etc), it's prudent to write simple test programs that help diagnose these issues rather than depending on your final production code under development. Such utilities will help you debug and diagnose the issues without the bloat of your own code. If written properly such tools can be included in production solutions to detect issues as they occur and log the issue or notify the appropriate operational person.

Own Test Web Services
- if the external provider is having issues with providing a consistent test service, consider creating your own test web service based on the WSDL and XML payloads that they have published to keep your development going.

Service Level Agreements - care needs to be given to getting the web service publisher to provide SLAs on both the test and production environments. In particular uptime, not changing the specifications, not changing the business process, and notification of system disruptions and potential future changes are essential. If you detect a casual response to this, be wary of what it implies: a similarly casual attitude to providing you the service.

Verification Utilities - the publisher of the web service could change their custom web service API at any time, regardless of SLAs. Programs to detect changing WSDLs, along with fault handling in your programs to detect changing SOAP XML payload structures and reporting to a higher authority, can initiate discussions about "what have you guys changed now?" rather than wasting time on "why isn't our program working now?"
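
One way to build such a WSDL watchdog is to keep a fingerprint of the last known-good WSDL and compare it on a schedule. A sketch, where the fetch and alert callables are stand-ins for an HTTP GET of the published WSDL URL and your notification mechanism, and the WSDL strings are placeholders:

```python
# Watchdog sketch: alert when the published WSDL no longer matches the
# version we developed against.
import hashlib

def fingerprint(wsdl_text):
    return hashlib.sha256(wsdl_text.encode("utf-8")).hexdigest()

known_good = fingerprint("<definitions>...v1 operations...</definitions>")

def check_wsdl(fetch, alert):
    """Return True if the live WSDL still matches the known-good version."""
    if fingerprint(fetch()) != known_good:
        alert("WSDL changed - ask the publisher what they changed!")
        return False
    return True

alerts = []
unchanged = check_wsdl(lambda: "<definitions>...v1 operations...</definitions>",
                       alerts.append)
changed = check_wsdl(lambda: "<definitions>...v2 operations...</definitions>",
                     alerts.append)
print(unchanged, changed, alerts)
```

A plain hash flags even cosmetic whitespace changes, which in this context is arguably a feature: any change to the published contract is worth a conversation.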

Thursday, 18 October 2007

The zen of the Oracle development landscape

Grant Ronald has recently blogged about Modernising your Forms Applications - SOA or bust, which (once again?) revisits the perception (or myth?) that Java is the one and only future of Oracle development. I'd like to revisit why I think this perception has come about.

I think one of the reasons (and I emphasise the word one here, there are certainly more) Oracle developers look to Java, and therefore JDeveloper, by default as the Oracle development future is a matter of circumstances that eventuated in the past.

Around 1998 through 2004 Java was the buzz in the development industry, and it was a time of pretty important change. Languages were being revolutionised, the web was coming to the fore, and Java is famous for being the first cab off the rank, so to speak, in the web programming world (thanks to its servlet technology - which it is now partly infamous for).

I believe many Forms programmers came to the conclusion that Forms wasn't the future because of its clunky 2 tier architecture (later hammered into 3 tier) and plain-Jane interfaces. As such they were interested in what else was happening out there way back when. It just happened that Oracle invested in JDeveloper at the time and through Oracle's marketing, coupled with the buzz around Java, it gained popularity and became the perceived future of Oracle development. Forms programmers picked up on this fact and stored it away in their little box of tricks.

Then the world moved on.

Today those same typical Forms programmers are facing the following problems:

1) The majority of Forms systems are now legacy, and it has taken organisations a long time to come around to adopting new technologies, regardless of their programmers' original interest in other technologies - due to organisational lack of mobility in technology adoption, lack of in-house skills, lack of interest by management in moving on, and so on - take your pick. So their initial interest in other technologies has been stalled by the slow moving IT corporate world. It's hard to move to a new technology when you have to fix current problems in existing legacy systems.

2) There has been an incredible amount of change in the industry between 2000 and 2007 (as usual in the IT industry). Traditional Forms and PL/SQL Oracle programmers are outside the whole web world, the revolution in web scripting languages, scratch their heads at the term Ajax (it's something Google does, isn't it?), have very little exposure to industry wide frameworks (as distinct from in-house frameworks), and so on. Keeping up with all this change is a full time job; it's much easier to keep the blinkers on, do what your job demands of you, and just stick with what you learned way back when.

3) The perception that development of Forms from Oracle has stalled.

4) A potential alternative, Oracle Application Express (Apex - formerly HTML DB), has only appeared to become (again a perception thing, not necessarily reality) a viable, mature development alternative relatively recently.

....that because of these issues and perceptions (and I can't emphasise the word perception in this discussion enough - don't start arguing that's what I believe, please), because typical Forms programmers haven't kept up with all the changes, because Forms is an old technology, because the marketing has at times focused on Fusion and JDeveloper and not Apex....

....that because of all these perceptions.... and how history eventuated.... and given a reluctance to give up on the potentially false or outdated perception learned way back when, that Java is still the only way to go....

.... that we see Forms programmers coming back again and again to thinking JDeveloper/Java is the future of Oracle development, and then becoming terribly disillusioned when they struggle with Java, JDev and ADF, can't see why the huge frameworks don't fit into their simple problem sets, struggle with the huge learning curve of adopting not one but several new technologies, and see an easier alternative in Apex, or scripting languages, or .Net or take whatever your pick in what you're more familiar with (it's always easier to say technology X is better than Y when you know X, but you don't know Y - that's human psychology for you).

Now given this whole discussion, does this mean I think Java and JDeveloper don't have a viable future for Oracle development? Not at all. For myself I've overcome the JDeveloper learning curve moving from Forms and I'm very excited about the future of the product. The rich AJAX components in JDev 11g ADF Faces Rich Client have me jumping up and down in excitement .... I can't believe I hardly have to do any JavaScript programming (don't get me started, Apex programmers) at all to get these great AJAX enabled components in my web application .... Web 2.0 here we come.

For you and your organisation, like Grant says in his blog, to paraphrase, there are a number of ways to skin a cat, and what technology you pick should be dependent on your circumstances, or more precisely your organisation's circumstances. Don't invest in one of these technologies before understanding your organisation's circumstances or you will get burnt. For example investing in a huge Java project with just PL/SQL programmers without any Java training or experience will certainly burn you unless you're very lucky. And you should have known that fact before you start. That's the risk of falsely thinking Java is the holy grail of development. The same holds true for investing in an Apex project, a scripting language project and so on; there is no holy grail in development, particularly if you have none of the needed skills or the tool is badly suited to your environment. And for the record (you can quote me) there will never be a holy grail (unless you consider turning all the computers in the world off) – so get over it (the exception being of course Lisp ;).

So take Grant's point on board. Java and JDeveloper match certain problem sets and backgrounds. As does Apex as does Forms..... let your mind free itself from what you learned before and re-assess the Oracle development landscape today, to what suits your needs.

....and thus the title of this post.

Now, I seem to have broken my soapbox. Until I find another one, I'll keep the blog free of rants for a while. I think it's a time for a humorous post. Maybe I'll pick on DBAs or something fun.

Usual disclaimers to stop the unnecessary flames:

1) Please note I'm not trying to put all Forms programmers in one outdated boat. There is always a bell curve of people and skills; people who are as much in the box as out, so put yourself in whatever box makes you happy for this discussion. When I say "typical" Forms programmers I'm drawing from my experience as a consultant and I'm referring to a generalisation of the Forms programmers I'm meeting on a day by day basis, not a specific person or group. There are certainly Forms programmers I meet who know everything outside the Forms sphere too.

2) It's a false perception that Forms development from Oracle has stalled; as thanks to Grant's blog we can see, there are still changes occurring in the Forms arena, just more subtle than before.

3) For the readers of one of my original posts A career path for Oracle developers - consider JDeveloper!, you will certainly be able to see a certain maturing in my thoughts about this, thanks to many discussions with Apex specialists, JDeveloper experts and other contemporaries.