You can tune these cluster properties for best performance.
Heartbeat detection determines how often the cluster’s name servers test whether each server is online. When the name server detects a server has gone offline, it stops directing IIOP clients to that server.
Heartbeat detection affects only IIOP clients and interserver calls. If you partition components (that is, you do not install every component on every server), interserver calls occur when a component calls another component that is not installed on the same server. In these cases, enable heartbeat detection and tune the test interval. A shorter interval reduces the chance that clients attempt to connect to servers that have gone offline, but an interval that is too short wastes resources with excessive broadcasting from the name servers to the member servers. The default of two minutes works well for most applications.
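The detection cycle described above can be sketched conceptually as follows. This is an illustration only, not EAServer's actual implementation; the class and method names (NameService, heartbeat_pass, resolve) and the probe callbacks are hypothetical.

```python
class NameService:
    """Conceptual sketch of heartbeat detection: a name server periodically
    tests each member server and stops directing IIOP clients to members
    that fail the test. Not EAServer's actual implementation."""

    def __init__(self, members, interval_seconds=120):
        # members maps a server name to a probe callable returning True
        # if the server responds. The default interval is two minutes.
        self.members = members
        self.interval = interval_seconds
        self.online = set(members)   # initially assume all members are up

    def heartbeat_pass(self):
        # One detection cycle: probe every member and update the set of
        # servers that clients may be directed to.
        for name, probe in self.members.items():
            if probe():
                self.online.add(name)       # resume routing to this server
            else:
                self.online.discard(name)   # stop routing clients here

    def resolve(self):
        # Name resolution only considers servers believed to be online.
        return sorted(self.online)
```

A shorter `interval_seconds` means stale entries leave `online` sooner, at the cost of running `heartbeat_pass` (and its network probes) more often.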
If your application has no IIOP clients and does not use interserver calls, you can disable heartbeat detection in EAServer. (Note that HTTP client load balancing and failover are performed outside of EAServer.)
To change these settings in EAServer Manager, follow the instructions in “Heartbeat detection” in Chapter 6, “Clusters and Synchronization,” in the EAServer System Administration Guide. To change them with jagtool, use the set_props command to set these properties for the primary server:
Synchronize the cluster after modifying these settings.
EAServer supports several algorithms to balance the IIOP client load between servers in the cluster. The EAServer name service uses the specified algorithm to determine which server each client connects to when the client resolves the component name. For more details, see “Understanding load balancing” in Chapter 7, “Load Balancing, Failover, and Component Availability,” in the EAServer System Administration Guide.
These settings affect only applications that use IIOP clients or that require interserver calls between cluster members. The settings do not affect Web applications, since HTTP client load balancing is performed outside of EAServer.
You can configure the load balancing policy to ensure the IIOP client load is evenly distributed. You can also change connection settings in your client programs to help ensure an even load distribution, as described in “IIOP client settings that affect load balancing”. EAServer supports these distribution policies:
Random: Static, even distribution of naming requests using a random selection algorithm to map name requests to destination servers. The load is likely to balance evenly over time, but can vary because of the random nature of the distribution algorithm and because some components load the server more heavily than others.
Round-robin: Static, even distribution of naming requests using a round-robin selection algorithm to map name requests to destination servers. The load is likely to balance evenly over time, but can vary because some components load the server more than others.
Weighted: Same as random, but the selection is weighted using the weights you assign to each server. Over time, each server carries a portion of the load in proportion to its assigned weight. Use this algorithm if some machines in the cluster can support more clients than others.
Adaptive: Same as random, but the selection is weighted using weights calculated from a sampling of each server’s existing load. This policy provides the highest assurance that the load balances evenly across servers at any given time. However, broadcasting and collecting the sampled load data adds slight overhead.
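The four selection policies can be sketched as follows. This is a conceptual illustration of the selection algorithms only, not the EAServer name service code; the function names and the inverse-load weighting formula in the adaptive sketch are assumptions for illustration.

```python
import itertools
import random

def round_robin(servers):
    """Round-robin: cycle through the servers in a fixed order."""
    return itertools.cycle(servers)

def pick_random(servers, rng=random):
    """Random: uniform selection; balances evenly in expectation
    over many naming requests, with random short-term variation."""
    return rng.choice(servers)

def pick_weighted(servers, weights, rng=random):
    """Weighted: selection probability is proportional to the
    administrator-assigned weight of each server."""
    return rng.choices(servers, weights=weights, k=1)[0]

def pick_adaptive(servers, sampled_load, rng=random):
    """Adaptive: like weighted, but the weights are derived from each
    server's sampled load, so lightly loaded servers are favored.
    (The inverse-load formula here is illustrative.)"""
    weights = [1.0 / (1.0 + load) for load in sampled_load]
    return rng.choices(servers, weights=weights, k=1)[0]
```

For example, with assigned weights of 3 and 1, `pick_weighted` directs roughly three quarters of naming requests to the first server over time.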
To configure these settings in EAServer Manager, follow the instructions in “Configuring load balancing” in Chapter 7, “Load Balancing, Failover, and Component Availability,” in the EAServer System Administration Guide. To configure these settings with jagtool, use the set_props command to set these properties for the cluster:
Synchronize the cluster after modifying these settings.
Using partitioning

You can also balance the load further by partitioning components and Web applications between different logical servers. For example, you might install your Web application in the logical server Jaguar1 and start that configuration on two machines in the cluster, and install the packages containing your components in the logical server Jaguar2 and start that configuration on four machines in the cluster. A drawback of this configuration is that component invocations from the Web tier and intercomponent calls can require interserver communication over the network, which is slower than in-server invocation and prevents the use of some optimizations, such as EJB local interfaces.
Copyright © 2005. Sybase Inc. All rights reserved.