Pessimistic Predicate/Transform Model for Long Running Business Processes


TSINGHUA SCIENCE AND TECHNOLOGY ISSN 1007-0214 03/21 pp288-297 Volume 10, Number 3, June 2005

Pessimistic Predicate/Transform Model for Long Running Business Processes*

WANG Jinling**, JIN Beihong, LI Jing

Institute of Software, Chinese Academy of Sciences, Beijing 100080, China

Abstract: Many business processes in enterprise applications are both long running and transactional in nature. However, no current transaction model can provide full transaction support for such long running business processes. This paper proposes a new transaction model, the pessimistic predicate/transform (PP/T) model, which can provide full transaction support for long running business processes. A framework is proposed on the enterprise JavaBeans platform to implement the PP/T model. The framework enables application developers to focus on the business logic, with the underlying platform providing the required transactional semantics; the development and maintenance effort is therefore greatly reduced. Simulations show that the model has sound concurrency management ability for long running business processes.

Key words: long duration transaction; extended transaction model; transaction processing middleware

Received: 2004-06-04; revised: 2004-11-22
* Supported by the National Key Basic Research and Development (973) Program of China (No. 2002CB312005) and the National High-Tech Research and Development (863) Program of China (No. 2003AA115440)
** To whom correspondence should be addressed. E-mail: [email protected]; Tel: 86-10-62553708-5106

Introduction

In many enterprise applications (such as mortgage processing and insurance systems), a business process can run for several hours or even longer. At the same time, these long running business processes (LRBPs) should have the same transactional properties as short running business processes. Existing transaction processing (TP) systems (such as transaction processing monitors and database management systems) mainly serve business processes that last only a very short time (e.g., several milliseconds), and suffer serious performance degradation if applied to long running business processes. Therefore, a new type of transaction processing middleware is required to support long running business processes, which should meet the following requirements:

1) Ensure that long running business processes have the same transactional properties as short running business processes.
2) Enable application developers to focus on the business logic, with the underlying platform providing the required functions to support the transactional semantics.
3) Automatically resolve concurrency conflicts between LRBPs, and roll back an LRBP without human participation.
4) Ensure that long duration transactions are more likely to succeed than traditional short transactions (because a long duration transaction may have run for many steps and thus has a higher cost of failure).

Additionally, since many applications must deal with both long running and short running business processes, the system should be built on existing TP systems and should not require dramatic changes in the underlying TP system.

Much research has been carried out on long duration transactions, and numerous long duration transaction models have been proposed. None of them,


however, can fully meet the above requirements.

In this paper, we propose a new transaction model, the pessimistic predicate/transform (PP/T) model, which meets the requirements listed above. The model incorporates four enhancements into the standard transaction model: sub-transactions, multiple versions, the semantics of transactions, and semantic constraints on the database states. In comparison with other long duration transaction models, the main advantage of our model is its PP/T concurrency control mechanism, which combines the predicate/transform mechanism with semantic constraints on the database states, and can therefore ensure a very low failure rate for long duration transactions.

Based on the PP/T model, we have designed a framework on the enterprise JavaBeans (EJB)[1] platform to support long duration transactions. The framework allows application developers to focus on the business logic, with the underlying platform providing the required transactional semantics. The development and maintenance effort can therefore be greatly reduced.

1 Related Work

Much research work has been done on long duration transactions. One of the most influential long duration transaction models is the saga model[2] proposed by Garcia-Molina. The basic idea of this model is to decompose each long duration transaction into a set of sub-transactions that can be executed separately; each sub-transaction is a traditional transaction. Compensating transactions are used when the long duration transaction needs to be rolled back. Under the saga model, when a long duration transaction must be rolled back, not only must its executed sub-transactions be compensated, but so must all other transactions that directly or indirectly used its intermediate results[3]. Since the set of transactions that used the intermediate results of a long duration transaction cannot be predicted in advance, the compensating transactions can hardly be developed beforehand, and the compensation process must rely on human participation in most cases.

To improve atomicity and isolation for long duration transactions, many other long duration transaction models (e.g., Refs. [4-6]) use a check-out/check-in concurrency control mechanism. These models are mainly used in the computer-aided design (CAD), computer-aided software engineering (CASE), and strategic information system (SIS) areas. The basic idea of the check-out/check-in mechanism is as follows: when a long duration transaction wishes to operate on certain data in the database, it first checks the data out of the database into its own local data space and operates on the data there. When the entire long duration transaction is finished, it checks all the data in its local data space back into the database. During the execution of a long duration transaction, other transactions may have changed the data in the database, so when a long duration transaction checks in its data, the data in its local data space and the data in the database must be merged into a unified version. The main disadvantage of the mechanism is that it lacks formal definitions, so its correctness criteria cannot be characterized mathematically. Users are responsible for resolving conflicts and correcting errors, which may in fact be too difficult for them. This approach is therefore unsuitable for many enterprise applications.

The NT/PV model[7] proposed by Korth et al. combines the semantic knowledge of transactions with multiple-version and nested-transaction techniques to support long duration transactions. The model provides a strong ability to express complex interactions, and proposes a set of correctness criteria for concurrency scheduling. However, it also uses compensating transactions to roll back long duration transactions, requiring each sub-transaction to have a corresponding compensating transaction, which is impractical for many real-world applications.

The LRUOW model[8] proposed by Bennett et al. supports two types of concurrency control mechanisms, the most noteworthy being the predicate/transform mechanism.
The main idea behind the predicate/transform mechanism is to postpone execution of the actions in a long duration transaction until the transaction's commit time, and then execute these actions as a batch job. A long duration transaction can thereby be converted into a traditional short transaction, and can therefore have the same atomicity, consistency, isolation, and durability (ACID) semantics as traditional transactions. For example, in a typical Internet shopping application, after a customer chooses a product and inputs the purchase amount, the web application just saves the ordering information in the customer's HTTP session rather than updating the database immediately. Only after the customer has chosen all desired products and completed the payment process will the web application actually change the database. The customer's shopping process can be considered a long duration transaction whose concurrency control mechanism is predicate/transform.

The main disadvantage of the predicate/transform mechanism is that it cannot give priority to a long duration transaction when it conflicts with traditional short transactions, even though the cost of failure of a long duration transaction is usually much higher than that of short transactions. Moreover, it cannot notify the user of the failure of a long duration transaction in a timely manner. For example, in the preceding Internet shopping application, after the customer finishes ordering a product but before the transaction is completed, another transaction may reduce the stock of the product below the customer's requested amount. The customer is not aware of the change and may continue to order other products, not learning of the failure of the shopping session until he/she tries to complete it.
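The deferred-execution idea can be sketched in a few lines of Java (an illustrative fragment of ours, not code from the LRUOW implementation; the class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the optimistic predicate/transform idea: during the
// long transaction, orders are only recorded; at commit time every predicate
// (enough stock) is re-checked and all transforms run as one batch.
class DeferredCart {
    final Map<String, Integer> stock;                    // shared "database"
    final Map<String, Integer> wanted = new HashMap<>(); // deferred operations

    DeferredCart(Map<String, Integer> stock) { this.stock = stock; }

    // Rehearsal: just remember the order; the database is not touched.
    void order(String product, int amount) {
        wanted.merge(product, amount, Integer::sum);
    }

    // Commit: re-check the predicates, then apply all transforms atomically.
    boolean commit() {
        for (Map.Entry<String, Integer> e : wanted.entrySet())
            if (stock.getOrDefault(e.getKey(), 0) < e.getValue())
                return false;  // predicate violated: the whole transaction fails
        for (Map.Entry<String, Integer> e : wanted.entrySet())
            stock.merge(e.getKey(), -e.getValue(), Integer::sum);
        return true;
    }
}
```

Note how the failure surfaces only inside commit(): exactly the late-notification weakness discussed above.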

2 Concept of the PP/T Model

2.1 Definition of long duration transaction

Our model takes the same approach as the saga model and defines a long duration transaction as a set of sub-transactions.

Definition 1 A long duration transaction is defined as the tuple l = (T, →), where T = {t_1, t_2, …, t_n} is the set of steps that comprise the long duration transaction, and → denotes a partial order on T that must be respected in the execution of the long duration transaction.

We assume that the concurrency-scheduling algorithm of the underlying traditional TP system is serializable, so an execution of the long duration transaction forms a total order on T. An execution history of a long duration transaction is denoted t_1 ∘ t_2 ∘ … ∘ t_n. We give the following definition for each step of a long duration transaction:

Definition 2 A step of a long duration transaction is defined as the tuple t_i = (p_i, f_i), where p_i is the precondition of t_i and f_i is the actual operation on the database. The predicate p_i should be a sufficient condition for the success of t_i on the database state S, i.e., p_i(S) ⇒ success of t_i. When t_i is executed alone, it can be treated as a transition of the database state, so we can define f_i as a transition function on the database state: S_2 = f_i(S_1), where S_1 is the database state before the execution of t_i and S_2 is the database state after.
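Definition 2 can be modelled directly as a pair of functions over the database state (a sketch of ours; the Step class and the single-map database are illustrative simplifications):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.function.Predicate;

// A step t_i = (p_i, f_i): p_i is a sufficient precondition on the database
// state S, and f_i is a state-transition function, S_2 = f_i(S_1).
class Step {
    final Predicate<Map<String, Integer>> p;                       // p_i
    final Function<Map<String, Integer>, Map<String, Integer>> f;  // f_i

    Step(Predicate<Map<String, Integer>> p,
         Function<Map<String, Integer>, Map<String, Integer>> f) {
        this.p = p;
        this.f = f;
    }

    // Running the step checks p_i(S) before applying f_i.
    Map<String, Integer> run(Map<String, Integer> s) {
        if (!p.test(s)) throw new IllegalStateException("precondition p_i failed");
        return f.apply(s);
    }

    // Example step: withdraw 1000 from account "a"; p_i is balance >= 1000.
    static Step withdraw1000() {
        return new Step(
            s -> s.getOrDefault("a", 0) >= 1000,
            s -> {
                Map<String, Integer> s2 = new HashMap<>(s);
                s2.merge("a", -1000, Integer::sum);
                return s2;
            });
    }
}
```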

2.2 Multiple versions of database state

We use the multiple-version technique to achieve isolation between different long duration transactions executing concurrently. Every long duration transaction has its own local data space, whose contents are invisible to other transactions. In the rehearsal phase of a long duration transaction, when a step t_i wants to change data in the database, it copies the data from the database into the transaction's local data space and changes the copy there, while the data in the actual database remain unchanged. Operations are not actually executed on the database until the performance phase.

We call the state of the actual database the "global database state". In the rehearsal phase, each long duration transaction can only see its own version of the database state, not the global database state. The version of the database state seen by a long duration transaction is a combination of the global database state and the state of the transaction's local data space. We use LD_i(l) to denote the state of the local data space at the beginning of step t_i in a long duration transaction l; LD_i(l) contains the states of the data changed from t_1 to t_{i-1}, and LD_1(l) = ∅. We use S_i(l) to denote the global database state at the beginning of t_i, and V_i(l) to denote the version of the database state seen by l at the beginning of t_i. V_i(l) is synthesized as follows:

V_i(l) = LD_i(l) override S_i(l)

The override operator means that if a data object exists in the first operand, its value is retrieved from the first operand; otherwise, its value is retrieved from the second operand. After the execution of t_i, the state of the local data space changes from LD_i(l) to LD_{i+1}(l), but the global database state remains unchanged:

f_i(V_i(l)) = LD_{i+1}(l) override S_i(l)

In the period between the end of t_i and the beginning of t_{i+1}, other transactions may change the global database state from S_i(l) to S_{i+1}(l). Therefore, at the beginning of t_{i+1}, the database state visible to l, i.e., V_{i+1}(l), is once again synthesized from LD_{i+1}(l) and S_{i+1}(l).
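The override operator itself is a one-liner over map-shaped states (our own helper; the real framework keeps versions in entity beans rather than maps):

```java
import java.util.HashMap;
import java.util.Map;

// V_i(l) = LD_i(l) override S_i(l): a data object's value comes from the
// local data space if present there, otherwise from the global database state.
class Versions {
    static Map<String, Integer> override(Map<String, Integer> local,
                                         Map<String, Integer> global) {
        Map<String, Integer> v = new HashMap<>(global); // start from S_i(l)
        v.putAll(local);                                // local values win
        return v;
    }
}
```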

2.3 PP/T concurrency control mechanism

For a step t_i of a long duration transaction to be successfully executed on database state S, p_i must hold on S. Therefore, for a long duration transaction to commit successfully, the database state in the performance phase should satisfy the predicate of every step. The predicate/transform mechanism, however, cannot ensure this.

To overcome this disadvantage, it is helpful to look at a business process in a banking application. Assume a customer asks the bank to buy $1000 worth of bonds for him. The bank forwards the request to a stockbroker, but the result of the deal will not be known until the next day. Lest the customer withdraw the money from his/her account in the meantime, the bank freezes part of the amount in the account: although the balance of the account is X, the amount the customer can draw out is X − 1000 (X ≥ 1000). Such a constraint on a data object acts like a "lock" at the semantic layer. It does not prohibit other transactions from accessing the data object, but it does require that if another transaction changes the value of the data object, the new value must satisfy the constraint condition. It can therefore ensure that the LRBP will complete successfully.

Our PP/T concurrency control mechanism is based on this practice. In the PP/T mechanism, a constraint table is set up for the database; it contains all constraints that a consistent database state should satisfy. These constraints act as "semantic locks" on the data objects. When a long duration transaction finishes a step in the rehearsal phase, it puts the precondition of the step into the constraint table. After that, whenever any transaction prepares to commit, the system first checks whether the new database state satisfies all constraints in the constraint table; if any constraint is violated, the transaction is not allowed to continue. Therefore, in the performance phase of the long duration transaction, the precondition of every step is satisfied, and the execution of the long duration transaction can complete successfully. We call the original predicate/transform mechanism an optimistic predicate/transform mechanism (since it exerts no constraints on the database state), and call the new constraint-based mechanism a pessimistic predicate/transform mechanism.

In the rehearsal phase of a long duration transaction, for step t_i to be successfully executed, p_i must be satisfied on the transaction's own version of the database state, i.e., p_i(V_i(l)) = true. Since V_i(l) is synthesized from LD_i(l) and S_i(l), we can divide p_i into two parts: the part that involves the data in LD_i(l), denoted p_i^LD, and the part that does not, denoted p_i^S. We can reach the following conclusion:

p_i(V_i(l)) ⇒ p_i^S(S_i(l))

Therefore, we can add p_i^S to the constraint table of the database, so that later transactions will not violate it. However, we cannot conclude p_i^LD(S_i(l)) = true from p_i^LD(V_i(l)) = true, so p_i^LD must be dealt with specially. Agrawal et al.[9] have proposed the idea of a wp function that helps solve this problem; it is defined as follows:

Definition 3 The function wp(t, p) is the weakest condition that must hold on the database state before the execution of t so that predicate p is true after the execution of t, i.e., wp(t, p)(S) ⇒ p(t(S)).

According to Definition 3, we can reach the following conclusion:

wp(t_1 ∘ … ∘ t_{i-1}, p_i^LD)(S_1(l)) ⇒ p_i^LD(V_i(l))

The predicate wp(t_1 ∘ … ∘ t_{i-1}, p_i^LD) is the weakest condition that must hold on S_1(l) to ensure p_i^LD(V_i(l)) = true. Therefore, we can first check whether wp(t_1 ∘ … ∘ t_{i-1}, p_i^LD) is true on the current global database state S_i(l). If it is not, step t_i cannot be successfully executed, and the system returns an error message. If the predicate is true, the system puts it into the constraint table, so that later transactions will not violate it.
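The constraint table behaves as a set of semantic locks checked at every commit. A minimal sketch (class and method names are ours; the real table also tracks which transaction added each constraint):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of the constraint table: each entry is a "semantic lock", a predicate
// that every new database state must satisfy before a transaction may commit.
class ConstraintTable {
    final List<Predicate<Map<String, Integer>>> constraints = new ArrayList<>();

    void add(Predicate<Map<String, Integer>> p) { constraints.add(p); }

    // Called at commit time: the new state is rejected if any constraint fails.
    boolean allows(Map<String, Integer> newState) {
        return constraints.stream().allMatch(c -> c.test(newState));
    }
}
```

For the bond-purchase example above, the bank would register the semantic lock "balance ≥ 1000"; any withdrawal that would leave less than $1000 is then rejected at commit time.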


Unfortunately, computing wp(t_1 ∘ … ∘ t_{i-1}, p_i^LD) from p_i^LD may be very difficult (sometimes even impossible) in practice. Therefore, we can weaken the PP/T mechanism to record only p_i^S in the constraint table, leaving p_i^LD unhandled. In such a case, the PP/T mechanism is still an improvement over the original predicate/transform mechanism, though it cannot strictly ensure the successful execution of all long duration transactions in the performance phase.

2.4 Behavior of the transaction manager

For a traditional TP system to support the PP/T model, we can add a new component, called the long transaction manager, into the TP system. The long transaction manager takes charge of the management and execution of long duration transactions, while the traditional transaction manager takes charge of the management and execution of traditional short transactions.

In the rehearsal phase of a long duration transaction, when a user requests the system to perform step t_i, the long transaction manager takes the following actions (all encapsulated in a traditional short transaction):
1) Check whether p_i is true on the transaction's version of the database state V_i(l). If p_i(V_i(l)) is false, the execution fails and an error message is returned.
2) Record t_i and the relevant parameters in the operation log.
3) Execute t_i on the transaction's local data space. All changed data are recorded in the transaction's local data space, not in the actual database.
4) Put p_i^S into the constraint table of the database.
5) Optionally, put wp(t_1 ∘ … ∘ t_{i-1}, p_i^LD) into the constraint table of the database.

When the user commits a long duration transaction, it enters the performance phase. In this phase, the long transaction manager takes the following actions (again encapsulated in a traditional short transaction):
1) Clear from the constraint table all constraints added by the current long duration transaction.
2) Execute each step on the actual database according to the operation log. Before each step is executed, its predicate is checked against the current database state; if the predicate is false, the performance phase of the long duration transaction fails.
3) Delete the local data space of the long duration transaction after all steps are successfully completed.

To support long duration transactions, the traditional transaction manager must be changed slightly: in the committing phase of a traditional short transaction, it should check whether the new database state satisfies all constraints in the constraint table. If all constraints are satisfied, the transaction continues as usual; otherwise, the transaction is rolled back.

2.5 Recovery mechanism

In the PP/T model, the local data space of a long duration transaction is persistent, so the recovery mechanism is relatively simple. There are two kinds of recovery: backward recovery and forward recovery. Backward recovery (i.e., rollback) means undoing the effects of the executed steps of a long duration transaction when a failure occurs. Forward recovery (e.g., recovery after a system crash) means restoring the state of a long duration transaction to its most recent state so that the transaction can be resumed. When a failure occurs, an unfinished long duration transaction may be in one of the following states:
1) Between two steps, i.e., the previous step has finished and the next step has not yet begun. Since all intermediate information is preserved in the local data space and the operation log, the system need not do any work for forward recovery. For backward recovery, the system must delete the local data space of the transaction and clear from the constraint table all constraints added by the transaction.
2) In the middle of executing a step in the rehearsal phase. Since the execution of a step is encapsulated in a traditional transaction, any recovery work at this point is done by the recovery mechanism of the underlying traditional TP system. After that, the remaining recovery steps are the same as in 1).
3) In the middle of the performance phase. Since the performance phase is encapsulated in a traditional transaction, any recovery work at this point is done by the recovery mechanism of the underlying traditional TP system. After that, the remaining recovery steps are the same as in 1).
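The rehearsal and performance phases described above can be sketched end to end (an illustrative simplification of ours: single-account steps, no constraint table, with the database modelled as a map):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the long transaction manager's two phases. Each step adds
// `amount` to account `acct` (negative = withdrawal); its precondition p_i is
// that the resulting balance is non-negative.
class LongTxn {
    final Map<String, Integer> db;                      // global database state
    final Map<String, Integer> local = new HashMap<>(); // local data space LD_i(l)
    final List<String[]> log = new ArrayList<>();       // operation log

    LongTxn(Map<String, Integer> db) { this.db = db; }

    int visible(String acct) {  // V_i(l) = LD_i(l) override S_i(l)
        return local.containsKey(acct) ? local.get(acct) : db.getOrDefault(acct, 0);
    }

    // Rehearsal phase: check p_i on V_i(l), record the step in the log, and
    // change only the local data space; the real database is untouched.
    boolean step(String acct, int amount) {
        int after = visible(acct) + amount;
        if (after < 0) return false;               // p_i(V_i(l)) is false
        log.add(new String[]{acct, String.valueOf(amount)});
        local.put(acct, after);
        return true;
    }

    // Performance phase: re-check each step's precondition on the current
    // global state and apply it; roll everything back if any check fails.
    boolean commit() {
        Map<String, Integer> snapshot = new HashMap<>(db);
        for (String[] op : log) {
            int after = db.getOrDefault(op[0], 0) + Integer.parseInt(op[1]);
            if (after < 0) { db.clear(); db.putAll(snapshot); return false; }
            db.put(op[0], after);
        }
        local.clear();                             // delete the local data space
        log.clear();
        return true;
    }
}
```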

2.6 Discussion

1) Local data storage. In the PP/T model, every long duration transaction has a local data space. If a long transaction touches many tables and many rows, it needs to replicate almost the whole database, which is very inefficient. However, for a business process in an enterprise application, the reason for the long running time is usually not heavy data processing but human interaction or waiting for other processes. We therefore believe that the PP/T model is suitable for most long running business processes in enterprise applications.

2) Concurrent access to the constraint table. The PP/T model uses a constraint table to realize the pessimistic predicate, and every transaction must check this table. Since traditional transactions only read the table and only long duration transactions change it, we believe it will not become a performance bottleneck as long as the number of long duration transactions is considerably smaller than the number of traditional transactions.

3) The isolation mode of transactions. The structured query language (SQL) standard defines four isolation levels for transactions: read-uncommitted, read-committed, repeatable-read, and serializable. In the PP/T model, achieving the highest isolation level would require replicating a large amount of data into the local data space of long duration transactions, which would greatly degrade system performance. A more practicable choice is therefore to support only the read-committed isolation level for long duration transactions.

3 Implementation Framework on the EJB Platform

To prove the practicability of the PP/T model, we have designed an implementation framework on the EJB platform. On the EJB platform, entity beans can be used to encapsulate the data objects in the database system. Therefore, if we encapsulate all data objects into entity beans, we can check whether a database state satisfies a constraint in the application server layer, without relying on support from the underlying database management system. This is the primary reason for choosing the EJB platform as the underlying TP system.

3.1 Extension to the JTA programming model

The transaction service in the EJB platform is based on the Java Transaction API (JTA)[10] programming model. To support long duration transactions on the EJB platform, the JTA programming model should be extended. Our framework mainly adds the following interfaces:

1) LongTransactionService: used to create a new long duration transaction, or to get a reference to an existing long duration transaction by its identifier.
2) LongTransaction: represents a long duration transaction and provides the operations related to the transactional semantics, such as begin(), commit(), rollback(), suspend(), and resume().
3) Constraint: represents a constraint on the database state. The application developer provides the concrete implementation classes. The interface also provides methods to indicate which data objects are involved in the constraint.
4) ConstraintTable: represents the constraint table of the database, containing all constraints on the current database state. The interface also provides methods to get the set of constraints related to a certain data object.

In the extended JTA programming model, the relationships between the parts involved in transaction processing are shown in Fig. 1.

Fig. 1 Relationships between all parts involved in the extended JTA model
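The added interfaces might look roughly as follows. The interface names come from the framework described above; the method signatures and the map-shaped state are our own guesses, shown only to make the division of responsibilities concrete:

```java
import java.util.Map;
import java.util.Set;

// Sketch of the extended JTA interfaces; signatures are illustrative.
interface LongTransactionService {
    LongTransaction create();          // start a new long duration transaction
    LongTransaction lookup(String id); // find an existing one by identifier
}

interface LongTransaction {
    void begin();
    void commit();                     // enter the performance phase
    void rollback();
    void suspend();
    void resume();
}

interface Constraint {
    boolean check(Map<String, Integer> state); // does the state satisfy it?
    Set<String> involvedDataObjects();         // data objects it refers to
}

// A sample application-supplied constraint: account "a" keeps at least $1000
// frozen (the bond-purchase example of Section 2.3).
class FrozenBalance implements Constraint {
    public boolean check(Map<String, Integer> state) {
        return state.getOrDefault("a", 0) >= 1000;
    }
    public Set<String> involvedDataObjects() { return Set.of("a"); }
}
```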

3.2 Implementation of multiple versions of data objects

Bennett et al.[8] introduced a technique to create multiple versions of a data object on the EJB platform. The technique is based on the facade and bridge patterns[11], and the set of version objects is implemented as entity beans. For every data object that needs multiple versions, the application developer provides an interface and an implementation; the framework then automatically generates the facade class and the version class. The class structure is shown in Fig. 2.

Fig. 2 Implementation of multiple versions of a data object
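The essence of the facade/version scheme can be sketched as follows (class names are ours; the real framework generates these classes for entity beans rather than plain objects):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the facade/version idea: the facade resolves each access to the
// per-transaction version of the data object, falling back to the base copy.
class Account {
    int balance;
    Account(int balance) { this.balance = balance; }
}

class AccountFacade {
    final Account base;                                    // committed version
    final Map<String, Account> versions = new HashMap<>(); // one per long txn

    AccountFacade(Account base) { this.base = base; }

    // Resolve the version visible to the given long duration transaction,
    // creating it as a copy of the base object on first access.
    Account forTxn(String txnId) {
        return versions.computeIfAbsent(txnId, id -> new Account(base.balance));
    }
}
```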

3.3 Implementation of the PP/T concurrency control mechanism

To implement the PP/T concurrency control mechanism, the framework provides a component called DataObserver, which records the data objects changed by every transaction. The design of the component is based on the observer pattern[11]: it collects the needed information by monitoring the invocation of the ejbCreate(), ejbStore(), and ejbRemove() methods of all entity beans. When a traditional transaction commits, the traditional transaction manager queries the DataObserver component for the set of data objects changed by the transaction. For every changed data object, the transaction manager queries the constraint table for the related constraints and validates them. The commit succeeds only if all related constraints are satisfied. The committing process is shown in Fig. 3.

Fig. 3 Committing process of a traditional transaction
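The committing process of Fig. 3 can be sketched as follows (class names are ours; observeStore() stands in for the ejbStore() monitoring hook):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

// Sketch of the commit check: the observer part records which data objects a
// transaction changed; at commit time only the constraints related to those
// objects are validated against the would-be new database state.
class Committer {
    static class Entry {
        final Set<String> objects;                   // data objects it involves
        final Predicate<Map<String, Integer>> check; // the constraint itself
        Entry(Set<String> objects, Predicate<Map<String, Integer>> check) {
            this.objects = objects;
            this.check = check;
        }
    }

    final List<Entry> constraintTable = new ArrayList<>();
    final Set<String> changed = new HashSet<>();     // filled by the observer

    void observeStore(String object) { changed.add(object); } // ejbStore() hook

    boolean tryCommit(Map<String, Integer> newState) {
        for (Entry e : constraintTable)
            for (String obj : changed)
                if (e.objects.contains(obj) && !e.check.test(newState))
                    return false;                    // roll the transaction back
        changed.clear();                             // commit succeeded
        return true;
    }
}
```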

4 Performance Evaluations

To evaluate the proposed PP/T model, we implemented it in a simulation environment and compared the simulation results with those of the LRUOW model. There are currently no widely accepted benchmarks for long duration transactions; instead, we generated our own test data and observed the corresponding performance metrics.

4.1 Simulation environment

In the simulation experiments, X traditional transactions and Y long duration transactions were generated and executed within 20 min, so that we could observe the performance of the PP/T model and the LRUOW model under a variety of simulated loads. The simulation program was developed in Java and executed on a common desktop PC with an Intel Pentium IV CPU at 1.5 GHz and 256 MB RAM, running Windows 2000 Professional and JDK 1.3.1.

Our simulation scenario was a banking business system. We suppose there are N accounts in the database, the initial balance of each account being $5000.00. Each account allows two kinds of operation: deposit and withdrawal. The balance of an account cannot fall below zero; if an operation would cause that, the operation fails. The executing time of each operation was set to 5 ms.

Each traditional transaction was an account-transfer process, comprising a deposit operation on one account and a withdrawal operation on another. The transfer amount was randomly generated in the range (0, max_amount). If one of the operations in a transaction failed, the transaction was rolled back. The beginning time of each traditional transaction was randomly generated in the period from 0 to 20 min.

Each long duration transaction was a banking business process that needs the coordination of several people and spans several minutes. For example, when a customer asks the bank to draw a bank draft, several steps must be performed by the bank clerks, such as recording the ledger, reviewing the ledger, getting authorization from the manager, inputting the draft information, and reviewing the draft information. In each step, a bank clerk executes a traditional short transaction. In our experiments, each long duration transaction was composed of 5 sub-transactions, each of which was an account-transfer process. The duration of each long duration transaction was set to 3 min, and the beginning times of its sub-transactions were randomly generated within that duration. The beginning time of each long duration transaction was randomly generated in the period from 0 to 17 min.
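The simulated traditional transaction can be sketched as follows (a simplified, single-threaded rendition of ours; timing and the deposit/withdrawal split are abstracted into one transfer call):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Sketch of one simulated traditional transaction: a transfer of a random
// amount in (0, max_amount) between two random accounts, rolled back (i.e.,
// not applied) if it would drive a balance below zero.
class TransferSim {
    final Map<Integer, Double> accounts = new HashMap<>();
    final Random rnd;

    TransferSim(int n, long seed) {
        rnd = new Random(seed);
        for (int i = 0; i < n; i++) accounts.put(i, 5000.00); // initial balances
    }

    // Returns true if the transfer committed, false if it was rolled back.
    boolean transfer(double maxAmount) {
        int from = rnd.nextInt(accounts.size());
        int to = rnd.nextInt(accounts.size());
        double amount = rnd.nextDouble() * maxAmount;
        if (accounts.get(from) < amount) return false;  // balance would go negative
        accounts.put(from, accounts.get(from) - amount);
        accounts.put(to, accounts.get(to) + amount);
        return true;
    }
}
```

A useful sanity check on such a simulation is that rolled-back and committed transfers alike conserve the total amount of money in the system.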

4.2 Performance metrics

In the simulation experiments, we use the failure rate of long duration transactions to evaluate the performance of the PP/T model and the LRUOW model. The failure rate is the proportion of failed long duration transactions among all long duration transactions, and reflects the concurrency management ability of a long duration transaction model. In both models, the failure rate of long duration transactions may be affected by the following factors:

1) The transfer amount. The amount was randomly generated in the range (0, max_amount). As max_amount increases, withdrawal operations are more likely to fail, so the failure rate of long duration transactions grows. The transfer amount therefore influences the probability of semantic conflicts between transactions.
2) The number of accounts, i.e., the number of data objects shared by all transactions. This parameter influences the probability of read/write concurrency conflicts between traditional transactions.
3) The number of traditional transactions. This parameter influences the probability of read/write concurrency conflicts between traditional transactions.
4) The number of long duration transactions. This parameter influences the probability of concurrency conflicts between long duration transactions.

4.3 Simulation results

In the experiments, we took the maximum transfer amount, the number of accounts, the number of traditional transactions, and the number of long duration transactions as variables in turn, so that we could observe the failure rate of long duration transactions under different conditions. For each set of parameters, we executed the simulation program 30 times and used the average as the final result.

Figure 4 shows the influence of the maximum transfer amount on the failure rate of long duration transactions. The experimental parameters were as follows: the maximum transfer amount was varied from $250.00 to $450.00, the number of accounts was 200, the number of traditional transactions was 60 000, and the number of long duration transactions was 300. From Fig. 4, we can see that the failure rate for the LRUOW model increased rapidly (from 4.71% to 19.05%) as the maximum transfer amount increased, while the failure rate for the PP/T model increased very slowly (from 1.46% to 5.82%). This can be explained by the fact that the PP/T model uses semantic constraints on database states to improve its concurrency management ability, so as the probability of semantic conflicts between transactions increases, the advantage of the PP/T model becomes more and more obvious.


Tsinghua Science and Technology, June 2005, 10(3): 288-297

Fig. 4 Influence of maximum transfer amount

Figure 5 shows the influence of the number of accounts on the failure rate of long duration transactions. The experimental parameters were as follows: the number of accounts was decreased from 300 to 100, the maximum transfer amount was $350.00, the number of traditional transactions was 60 000, and the number of long duration transactions was 300. From Fig. 5 we can see that the failure rate increased more rapidly for the LRUOW model (from 8.13% to 17.70%) than for the PP/T model (from 2.35% to 6.60%).

Fig. 5 Influence of the number of accounts

Figure 6 shows the influence of the number of traditional transactions on the failure rate of long duration transactions. The experimental parameters were as follows: the number of traditional transactions was increased from 50 000 to 90 000, the maximum transfer amount was $350.00, the number of accounts was 200, and the number of long duration transactions was 300. From Fig. 6, we can see that the failure rate for the LRUOW model increased moderately from 10.32% to 15.01%, while the failure rate for the PP/T model increased only slightly from 2.97% to 4.50%.

Fig. 6 Influence of the number of traditional transactions

Figure 7 shows the influence of the number of long duration transactions on the failure rate of long duration transactions. The experimental parameters were as follows: the number of long duration transactions was increased from 200 to 600, the maximum transfer amount was $350.00, the number of accounts was 200, and the number of traditional transactions was 60 000. Figure 7 shows that the number of long duration transactions had no obvious influence on the failure rate for the LRUOW model. For the PP/T model, the failure rate increased slightly with the number of long duration transactions, from 2.60% to 4.71%, but was always lower than the failure rate for the LRUOW model.

Fig. 7 Influence of the number of long duration transactions

The above simulation results show that the concurrency management ability of the PP/T model is better than that of the LRUOW model. The advantage of the PP/T model is more pronounced when the probability of semantic conflicts between transactions is higher.

5 Conclusions

This paper proposed a new long duration transaction model, the PP/T model, which provides full transaction support for long running business processes in enterprise applications. Compared with the LRUOW model, the advantage of the PP/T model is that the failure rate of long duration transactions is greatly decreased. The key idea of the PP/T model is to postpone the actions of a long duration transaction until commit time, so that the long duration transaction can be converted into a traditional short transaction. At the same time, constraints are exerted on the database state to ensure that the postponed operations can be successfully executed at commit time. Simulation results show that the model has sound concurrency management ability.

Besides the theoretical analysis, we have designed a framework on the EJB platform to implement the PP/T model. The framework is in effect a type of transaction processing middleware that supports long duration transactions. When developers build long running business process applications on the framework, the application directly reflects the real-world business process, and the transactional semantics are provided by the underlying platform. Development and maintenance effort are therefore reduced.

Acknowledgements

We wish to thank the reviewers of this paper for their valuable comments on our approach.

References

[1] Sun Microsystems Inc. Enterprise JavaBeans Specification, version 2.0. http://java.sun.com, 2001.

[2] Garcia-Molina H, Salem K. Sagas. In: Proceedings of the ACM SIGMOD International Conference on Management of Data. New York: ACM Press, 1987: 249-259.

[3] Korth H F, Levy E, Silberschatz A. A formal approach to recovery by compensating transactions. In: Proceedings of the 16th International Conference on Very Large Databases. San Francisco: Morgan Kaufmann Publishers, 1990: 95-106.

[4] Kim W, Lorie R, McNabb D, Plouffe W. A transaction mechanism for engineering design databases. In: Proceedings of the 10th International Conference on Very Large Databases. San Francisco: Morgan Kaufmann Publishers, 1984: 355-362.

[5] Du D H, Ghanta S. A framework for efficient IC/VLSI CAD databases. In: Proceedings of the 3rd International Conference on Data Engineering. Washington: IEEE Computer Society Press, 1987: 619-625.

[6] Kuo D, Gaede V, Taylor K. Using constraints to manage long duration transactions in spatial information systems. In: Proceedings of the 3rd IFCIS International Conference on Cooperative Information Systems. Washington: IEEE Computer Society Press, 1998: 384-395.

[7] Korth H F, Speegle G. Formal aspects of concurrency control in long-duration transaction systems using the NT/PV model. ACM Transactions on Database Systems, 1994, 19(3): 492-535.

[8] Bennett B, Hahm B, Leff A, Mikalsen T, Rasmus K, Rayfield J, Rouvellou I. A distributed object oriented framework to offer transaction support for long running business processes. In: Proceedings of the IFIP/ACM International Conference on Distributed Systems Platforms and Open Distributed Processing. Berlin: Springer-Verlag, 2000: 331-348.

[9] Agrawal D, Abbadi A E, Singh A K. Consistency and orderability: Semantics-based correctness criteria for databases. ACM Transactions on Database Systems, 1993, 18(3): 460-486.

[10] Sun Microsystems Inc. Java Transaction API Specification, version 1.0.1. http://java.sun.com, 1999.

[11] Gamma E, Helm R, Johnson R, Vlissides J. Design Patterns: Elements of Reusable Object-Oriented Software. Boston: Addison-Wesley, 1994.
