BlazeDS Developer Guide

Channel and endpoint recommendations

Note: The HTTPChannel is the same as the AMFChannel behaviorally, but serializes data in an XML format called AMFX. This channel only exists for customers who require all data sent over the wire to be non-binary for auditing purposes. There is no other reason to use this channel instead of the AMFChannel for RPC-based applications.

If you are only using remote procedure calls, you can use the AMFChannel. Choosing a channel for real-time data push to web clients is not as simple as the RPC scenario: there are trade-offs, benefits, and disadvantages to weigh. Although the answer is not a single universal choice, it follows directly from the requirements of your application.

If your application uses both real-time data push and RPC, you do not need to use separate channels. All of the channels listed here can send RPC invocations to the server. Use a single channel set, possibly containing just a single channel, for all of your RPC, messaging, and data management components.
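
As a minimal sketch of this idea (the destination ids, Java class name, and channel id are illustrative only; a matching channel definition appears in the long-polling example later in this section), an RPC destination in remoting-config.xml and a messaging destination in messaging-config.xml can both reference the same channel:

<!-- remoting-config.xml: an RPC destination that reuses the shared channel. -->
<destination id="productService">
    <properties>
        <source>com.example.ProductService</source>
    </properties>
    <channels>
        <channel ref="my-amf-longpoll"/>
    </channels>
</destination>

<!-- messaging-config.xml: a messaging destination that reuses the same channel. -->
<destination id="chat">
    <channels>
        <channel ref="my-amf-longpoll"/>
    </channels>
</destination>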

Servlet-based endpoints

Some servlet-based channel/endpoint combinations are preferred over others, depending on your application environment. The combinations are listed here in order of preference.

1. AMFChannel/Endpoint configured for long polling (no fallback needed)

The channel issues polls to the server in the same way as simple polling, but if no data is available to return immediately, the server parks the poll request until data arrives for the client or the configured server wait interval elapses.

The client can be configured to issue its next poll immediately following a poll response, making this channel configuration feel like real-time communication.

A reasonable server wait time is one minute. This eliminates the majority of busy polling from clients without being so long that you keep server sessions alive indefinitely or risk a network component between the client and server timing out the connection. A sample channel definition with these settings follows the benefits and disadvantages below.

Benefits

- Valid HTTP request/response pattern over standard ports that nothing in the network path will have trouble with.

Disadvantages

- When many messages are being pushed to the client, this configuration has the overhead of a poll round trip for every pushed message or small batch of messages queued between polls. Most applications do not push data frequently enough for this to be a problem.

- The Servlet API uses blocking IO, so you must define an upper bound for the number of long poll requests parked on the server at any single instant. If the number of clients exceeds this limit, the excess clients devolve to simple polling on the default 3-second interval with no server wait. For example, if your server request handler thread pool has a size of 500, you could set the upper bound for waiting polls to 250, 300, or 400, depending on the relative number of non-poll requests you expect to service concurrently.
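
A minimal channel definition for this configuration in services-config.xml might look like the following sketch; the channel id, endpoint URL, and numeric values are illustrative, not prescriptive:

<channel-definition id="my-amf-longpoll" class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amflongpolling"
              class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <polling-enabled>true</polling-enabled>
        <!-- Client issues its next poll immediately after each poll response. -->
        <polling-interval-millis>0</polling-interval-millis>
        <!-- Server parks each poll for up to one minute waiting for data. -->
        <wait-interval-millis>60000</wait-interval-millis>
        <!-- Upper bound on polls parked on the server; excess clients devolve to simple polling. -->
        <max-waiting-poll-requests>300</max-waiting-poll-requests>
    </properties>
</channel-definition>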

2. StreamingAMFChannel/Endpoint (in a channel set followed by the polling AMFChannel for fallback)

Because HTTP connections are not duplex, this channel sends a request to open an HTTP connection between the server and client, over which the server writes an infinite response of pushed messages. The channel uses a separate, transient connection from the browser connection pool for each send it issues to the server. The streaming connection is used purely for messages pushed from the server down to the client. Each message is pushed as an HTTP response chunk (HTTP 1.1 Transfer-Encoding: chunked). A sample channel definition and fallback channel set follow the benefits and disadvantages below.

Benefits

- No polling overhead associated with pushing messages to the client.

- Uses standard HTTP ports, so firewalls do not interfere, and all requests/responses are HTTP, so packet-inspecting proxies do not drop the packets.

Disadvantages

- Holding onto the open request on the server and writing an infinite response is not typical HTTP behavior. HTTP proxies that buffer responses before forwarding them can effectively consume the stream. Assign the channel's connect-timeout-seconds property a value of 2 or 3 to detect this and trigger fallback to the next channel in your channel set.

- No support for HTTP 1.0 clients. If the client is HTTP 1.0, the open request is faulted and the client falls back to the next channel in its channel set.

- The Servlet API uses blocking IO, so as with long polling above, you must configure an upper bound on the number of streaming connections you allow. Clients that exceed this limit cannot open a streaming connection and fall back to the next channel in their channel set.
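
The following sketch shows a streaming channel definition in services-config.xml and a destination whose channel list falls back from streaming to the long-polling channel defined earlier; the ids, URL, and values are illustrative only:

<channel-definition id="my-amf-stream" class="mx.messaging.channels.StreamingAMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/streamingamf"
              class="flex.messaging.endpoints.StreamingAMFEndpoint"/>
    <properties>
        <!-- Fail fast if a buffering proxy consumes the stream, so the client falls back. -->
        <connect-timeout-seconds>2</connect-timeout-seconds>
        <!-- Upper bound on open streaming connections (the Servlet API uses blocking IO). -->
        <max-streaming-clients>300</max-streaming-clients>
    </properties>
</channel-definition>

<!-- messaging-config.xml: streaming channel first, polling AMF channel as the fallback. -->
<destination id="feed">
    <channels>
        <channel ref="my-amf-stream"/>
        <channel ref="my-amf-longpoll"/>
    </channels>
</destination>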

3. AMFChannel/Endpoint with simple polling and piggybacking enabled (no fallback needed)

This configuration is the same as simple polling, but with piggybacking enabled. When the client sends a message to the server between its regularly scheduled poll requests, the channel piggybacks a poll request along with the outbound message, and the server piggybacks any pending messages for the client along with the response. A sample channel definition follows the benefits and disadvantages below.

Benefits

- Valid HTTP request/response pattern over standard ports that nothing in the network path will have trouble with.

- User experience feels more real-time than with simple polling on an interval.

- Does not have the thread resource constraints that long polling and streaming have due to the blocking IO of the Servlet API.

Disadvantages

- Less real-time behavior than long polling or streaming. Requires client interaction with the server to receive pushed data faster than the channel's configured polling interval.
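
A channel definition for this configuration, again with an illustrative id, URL, and polling interval, might look like this in services-config.xml:

<channel-definition id="my-amf-piggyback" class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfpolling"
              class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <polling-enabled>true</polling-enabled>
        <!-- Regularly scheduled poll every 8 seconds. -->
        <polling-interval-millis>8000</polling-interval-millis>
        <!-- Piggyback polls and pending pushed messages on other client/server exchanges. -->
        <piggybacking-enabled>true</piggybacking-enabled>
    </properties>
</channel-definition>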

