Experiences in Deploying Model-Driven Engineering




Thomas Weigert, Frank Weil, Kevin Marth

Thomas Weigert is Professor of Computer Science and St. Clair Endowed Chair at the Missouri University of Science and Technology

Frank Weil is Chief Architect and Operations Director of the Motorola Software Engineering and Tools Technology Group

Kevin Marth is a Distinguished Member of the Technical Staff at Motorola

Motorola has successfully deployed model-driven engineering for the development of highly reliable telecommunications systems. Model-driven engineering has dramatically increased both the quality and the reliability of software developed in our organization, as well as the productivity of our systems and software engineers. Our experience demonstrates that model-driven engineering significantly improves the development process for embedded and distributed systems. But we have also observed many common roadblocks to wider adoption, and we summarize here the strategies we have implemented to work around them. This experience reveals that significant effort is needed to achieve these benefits repeatedly.

1 MDE Rollout

Motorola has more than 15 years of history using model-driven engineering (MDE) techniques to develop highly reliable, large-scale telecommunication systems. Model-driven engineering has dramatically increased both the productivity of our systems and software engineers and the quality and the reliability of the developed embedded and distributed systems.

Traditional development methods are based on code being the most important artifact (see Figure 1). In this view, it is the code that is put into configuration management, tested, and maintained, and the largest amount of development time is allocated to these activities. The typical shortcomings of this development style are delayed feedback, high cost of fixing defects, increasing maintenance costs, and requirements and design documents that rapidly become obsolete.

In contrast, the MDE vision is that the model is the most important artifact (see Figure 2). Here, the model is put into configuration management, tested, and maintained, and the majority of the development time is concentrated on the design. Code then becomes merely another derived artifact to be discarded and regenerated whenever the model changes.

The shortcomings of traditional development are largely avoided: early feedback is obtained through model analysis and simulation, defect fixes are less costly because they happen earlier in the development, maintenance costs remain constant, and the design documents are correct by definition. At Motorola, specifications are expressed using UML profiles [1], [2], [3], [4] augmented by protocol specification languages such as ASN.1 [5]. Requirements specifications are formally validated for correctness properties such as consistency and completeness, as well as the absence of concurrency pathologies, using theorem-proving techniques along with state-space exploration [6]. Design specifications are also verified by operationally interpreting the specification and by executing formally defined test cases (written at the level of the design model in a test-specific notation, most often TTCN-3 [7], [8]) against this specification. The precisely defined semantics of the specification language also enables the derivation of application code from the models. We have captured domain-specific programming knowledge in code generators that transform the high-level design models into optimized product software targeted to the chosen platform [9]. Finally, the derived code is verified against the same test cases derived from the requirements [10] and is then deployed on the target platform.

Figure 1 Traditional development. The central artifact is the code, which is configuration managed and maintained

Telektronikk 1.2009

Figure 2 Development following the MDE vision. The model, as the central artifact, is configuration managed and maintained


The MDE vision has been realized in a number of Motorola business units after the model-based development process was proven and refined through deployment pilots. In 1989, we designed a simulation environment for a proprietary design notation and piloted this environment in projects developing network elements [11]. When the first commercial simulation tools for SDL [12] later became available, development teams began migrating to this standardized notation. In 1992, the complete software for a real-time embedded Motorola product (a pager) was generated from high-level designs for the first time, without relying on any hand-written code (this was a demonstration product that was not shipped). Subsequently, commercial code generation tools with the capability to generate partial product code from design models in the real-time embedded systems domain became available, and several Motorola business units adopted design simulation as a new development paradigm. In 1998, the first Motorola products automatically derived from high-level design specifications were shipped (a base station for the TETRA radio communication system and a base site controller for a telecommunications network). The subsequent years saw a steady increase in the penetration of MDE, as legacy products were gradually replaced by newly developed network elements [13], [14].

In this article, we describe the most significant obstacles we have encountered in deploying model-driven engineering and summarize the technical and the (usually more important) non-technical solutions we put in place to address these obstacles. There are many successful deployments of modeling techniques in other organizations; a comparison to those efforts or to other research programs addressing the roadblocks mentioned is beyond the scope of this experience report.

2 Challenges Encountered

The reality in our large, distributed development organizations is that the penetration of MDE is not what it could or should be. Not all projects are suitable for modeling, but there are several other reasons, both technical and non-technical, for this lack of more extensive use. This section discusses the challenges encountered in the rollout of MDE as well as some of the recommended solutions. Most of these challenges are technical in nature, but there are significant non-technical challenges as well.


2.1 UML is not MDE

A first challenge encountered concerns how models should be represented. Most users, when first attempting to model, assume (encouraged by tool vendors) that ‘UML = MDE’. In other words, they assume that using UML is all there is to applying MDE. But UML is merely a family of languages, not in itself a language. Taken in isolation, a UML model cannot even be completely understood and cannot serve as the central artifact of a model-driven process.

UML was intended to provide a set of common concepts that modelers would encounter in describing typical applications. In order to support a wide variety of application domains that often demand conflicting and incompatible interpretations of these concepts, UML does not establish a precise dynamic semantics for the concepts it provides to the modeler. It specifies the meaning of the concepts it offers in broad strokes, and it relies on a large number of so-called semantic variation points where the meaning of these concepts might differ between application domains or tools. This allows developers to share a common base vocabulary, but it prevents the same developers from precisely understanding the details of the application that is described by a model unless they close the semantic variation points (that is, give precise meaning to the concepts utilized).

As a simple example, the meaning of the communication illustrated in Figure 3 is not specified by the UML standard. When a message arrives at the port of A, one of three different behaviors may result: it may be forwarded to instance B, it may be forwarded to instance C, or it may be forwarded to both B and C. In the first two cases, the selection may be non-deterministic or it may be based on some other criteria, such as whether or not the connected instance specifies the message in its ‘provides’ interface.

Figure 3 Example of a UML semantic variation point. The behavior of message passing in this situation is undefined in the UML standard. When an instance of A sends a message on its port, it could go to the instance of B, to the instance of C, or to both
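The ambiguity can be made concrete with a small sketch. The following Python fragment is illustrative only: the policy names, the `route` function, and the `provides` representation are ours, not part of the UML standard; each policy corresponds to one way a profile might close this variation point.

```python
# Hypothetical sketch of the variation point in Figure 3: a message
# arriving at A's port may go to B, to C, or to both. Until a profile
# fixes the routing policy, the model has no single meaning.
import random

def route(message, connected, policy):
    """Deliver `message` to the instances connected to A's port under `policy`."""
    if policy == "broadcast":             # interpretation 1: forward to all
        return list(connected)
    if policy == "provides-interface":    # interpretation 2: forward only to
        return [c for c in connected      # instances that declare the message
                if message in c["provides"]]  # in their 'provides' interface
    if policy == "nondeterministic":      # interpretation 3: pick one arbitrarily
        return [random.choice(connected)]
    raise ValueError("unknown policy")

b = {"name": "B", "provides": {"setup"}}
c = {"name": "C", "provides": {"release"}}

# The same model yields different behavior under different profiles:
print(route("setup", [b, c], "broadcast"))           # delivered to B and C
print(route("setup", [b, c], "provides-interface"))  # delivered to B only
```

Each branch is one legal reading of the same diagram; a tool that generates code must commit to exactly one of them.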


As MDE requires precise semantics for its concepts, UML provides a mechanism (referred to as a profile) to assign precise, domain-specific meaning to its constructs. A profile allows a modeler to select the subset of the concepts defined by UML that are relevant to the domain at hand, to give a precise semantics to these concepts where semantic variation points are encountered, and to extend these concepts with domain-specific detail. For example, telecommunication systems typically rely on asynchronous interactions between network elements, while an automotive engine controller may rely on synchronous interaction of its components. If both systems are modeled in UML, anyone will understand the high-level behavior and structure of the system, but in order to verify the detailed behavior model or to generate code from a model, a profile must apply the respective interaction semantics to communication constructs. Applying such a profile creates a domain-specific language that shares the core of UML but may significantly deviate from other profiles in the details. Examples of domain-specific profiles that are typically applied to UML include the SDL profile [4], SysML [15], specialized structures for mapping calls to call handlers, and protocol queuing schemes for interpreting inter-component messaging.

These characteristics of UML allowed it to become the lingua franca of modeling, but they can also stand in the way of successful deployment of MDE. Often, users do not understand this point, and vendors are not always forthcoming when advertising this aspect of their products. The requirement that a profile must be applied to a model before it can serve as the core artifact of a model-driven process implies that the user must make choices that cannot easily be undone:

• Commercial tools differentiate by the profile or profiles that they support. As there are few standardized profiles, models created in specific tools are not interchangeable.

• Not every tool is able to support every application domain. If the profile supported by a tool does not provide the required semantics for its concepts to adequately model an application in a particular domain, this tool cannot be used for the development of such applications. (By “provide the required semantics” we mean that the tool fully supports the intended semantics of the concept for model analysis, verification, and application generation.)

A consequence of requiring a profile that matches the application domain before MDE can be applied is that users must closely engage with vendors or standards organizations to ensure that their application domain is supported by an available profile. For example, our earlier experience in deploying modeling had revealed that the standard notations had shortcomings that limited the applicability of code generation. In response, we engaged in the standardization of SDL and UML. In 1999, an enhanced version of SDL was adopted, supporting language elements required by our engineering teams. In 2003, the latest release of UML was adopted, integrating the lessons learned from MDE deployment using SDL. Further, SDL itself has been recast as a UML profile targeted specifically at the development of reactive systems (that is, systems whose behavior is characterized by their response to environmental stimuli generated by the collaboration of concurrently executing system components).

MDE encompasses all phases of the software-development life cycle and assumes that models are related to all aspects of the life cycle. However, most effort in the definition of notations to represent models has gone into models expressing software designs. Continued engagement is required to define and standardize notations that connect these design models to other phases of the life cycle. In particular, languages that allow capturing and managing requirements specifications and languages that help to define test specifications need further enhancement. For example, it would be beneficial if test specifications were tightly integrated into the modeling process, encompassing the tests themselves, the system model, and its environment for test applicability. Ideally, the tests would be refined along with the model as it is refined, allowing the same tests to be used at every stage of the development process. Even for designs, there is no commonly accepted action language to express the behavior of detailed computation in a platform-independent manner. Also, system-level concepts as well as concepts expressing non-functional system aspects are still wanting.

Vendors have begun to recognize the need for customization of their products to particular domains. Most tools supporting UML now provide the capability to define profiles and the ability to customize the code generator to some extent. Significant investment is required in developing the expertise to define and maintain domain-specific profiles. Because interoperability standards at the profile level do not exist, these profiles are unique to the tools deployed.

2.2 What is Modeling?

When an organization chooses to pursue model-based engineering, a fundamental question must first be answered: what is modeling? More concretely, where does modeling stop and implementation begin? The productivity and quality improvements described in [14], [16] are the result of a principled approach to MDE that requires an appropriate high-level modeling notation serving as the input to a model transformation process that ultimately generates the vast majority of the implementation for a modeled system. This position has several immediate implications that have shaped our deployment of MDE:

• Programming languages such as C, C++, C#, or Java are not considered appropriate high-level modeling notations.

• Capabilities in commercial modeling tools that enable a direct escape from a model into such programming languages are explicitly disallowed.

• Approaches to modeling based on round-trip engineering are not supported.

• Approaches to code generation based on skeletons or templates that are finalized via manual coding are not supported.

These restrictions may seem controversial or perhaps even draconian. The current state of commercial modeling tools is such that mixing modeling notations and lower-level programming languages is implicitly assumed if not explicitly encouraged. The support for round-trip engineering in commercial tools enables source code to be generated from a model, modified manually, and then re-imported back into the model. Within the Model-Driven Architecture community, ‘pragmatic MDA’ approaches explicitly assume a hybrid mixture of generated code and code written manually [17], [18].

We found that engineering teams tend to approach modeling from two opposite viewpoints: one group of engineers views coding as a commodity activity best done by entry-level engineers, while the other group views coders as ‘artisans’ and considers coding to be the point where skilled engineers impart their mark on a developed system. Naturally, MDE resonates well with the former population, while the latter prefers the ‘pragmatic’ approach and often uses models merely to augment hand-written code.
We found that the possible benefits [14] are more likely to be realized by teams that fully espouse MDE. We have not found it necessary or desirable to relax the above restrictions. C and C++ remain the predominant implementation languages within our user community. Allowing the unrestricted manipulation of pointers and the explicit memory management commonly encountered in C/C++ to seep into product models would greatly complicate the sophisticated analysis that enables the generation of highly optimized code.


Moreover, pointer manipulation and user-programmed explicit memory management in C/C++ are well-known sources of defects that would compromise the quality and productivity gains that have been achieved via MDE. Proponents of Java emphasize the elimination of just these concerns when advocating Java as a safer and more productive alternative to C++. The combination of a restricted high-level modeling notation and a highly optimizing code generator that implements memory management via ownership types has the advantages of increased productivity, quality, and safety as well as the advantage of predictable memory-management overhead and the elimination of any concerns (real or perceived) involving run-time garbage collection. Defect analysis of projects for network element development shows that the use of MDE completely eliminates defects that are root-caused to coding errors and lack of design detail, and can dramatically reduce defects due to incorrect requirements.

Of course, these advantages do not come without considerable effort. It is unrealistic to expect product models not to reuse existing C/C++ libraries when appropriate. Before a library is reused within a model, the library interface must be sanitized. Typically, a UML package will be provided that abstracts any memory management or pointer manipulation that is explicit within the library interface. This ‘principled’ interface to the library allows the library to be reused without compromising the high-level modeling notation used in the model. This approach has resulted in the successful reuse of protocol and signal-processing libraries implemented in C/C++ and enabled high-level abstract data types for objects such as sockets that would not otherwise be immediately available within a model given the restrictions discussed above.
The effort required to maintain the integrity of a high-level modeling notation would be reduced if suitable UML-level packages and components were immediately available for reuse within the modeling community. Reusable UML-level components presumably imply a binary UML component model and vendors who deliver libraries with appropriate UML packaging. The Motorola MDE community has begun to consider implementing strategic protocols such as SIP as modeled components to facilitate the reuse of such protocols within product models.

2.3 Modeling is Hard

We have observed that, through modeling, the time required for a new developer to acquire sufficient domain knowledge to become productive has been reduced by a factor of two to three [14]. Modeling a system makes it much easier to communicate information about the system between developers. However, the creation of the right model abstractions is harder than developing concrete programs. Abstraction skills come with experience and are therefore difficult to teach. One can teach the constructs required for the specification of abstract concepts, but to acquire abstraction skills one must be experienced in the abstraction process: the process of creating a mental picture of a concept after having first been exposed to many concrete instances of that concept. Inexperienced developers are therefore much more comfortable at the concrete level.

Inexperienced developers typically know a particular programming language very well and employ a specific programming style. Consequently, such developers will try to emulate that style when they start modeling. Modeling languages allow one to specify entities that emphasize various perspectives of the system, and some of these perspectives are likely to be unfamiliar to a developer (e.g. the state-machine perspective). A developer might be very skilled at abstracting out the implementation of methods or objects using object-oriented constructs, but that same developer might not be able to see the abstract state machine decomposition. For example, developers often assume that states must be related to system responses to external stimuli, and they therefore encode other state information in variables used within the state machine. The true states of the system are thus often embedded inside the transitions between the explicit states and are consequently hidden within the arcs shown in state diagrams. The overall behavior of the system will be much clearer if multiple states reflecting the underlying system states are used in this situation, in particular when decision points and flow of control on transitions obscure the meaning of state diagrams even further.
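The two styles can be contrasted in a small sketch. The call-handling example below is hypothetical (the message names and classes are ours): the first version buries the true state in a boolean flag, so a state diagram would show a single state with decision logic on its arcs; the second makes the Idle/Connected states explicit, so the diagram itself reveals the behavior.

```python
# Style 1 (hypothetical): the real state hides in a variable, so a state
# diagram of this machine would show one state with self-transitions.
class HiddenStateHandler:
    def __init__(self):
        self.connected = False   # the true system state, buried in a flag

    def on_message(self, msg):
        if msg == "connect" and not self.connected:
            self.connected = True
            return "connected"
        if msg == "release" and self.connected:
            self.connected = False
            return "idle"
        return "ignored"

# Style 2 (hypothetical): the same behavior with the underlying states made
# explicit, so the state diagram itself shows Idle and Connected.
class ExplicitStateHandler:
    def __init__(self):
        self.state = "Idle"

    def on_message(self, msg):
        transitions = {
            ("Idle", "connect"): ("Connected", "connected"),
            ("Connected", "release"): ("Idle", "idle"),
        }
        if (self.state, msg) in transitions:
            self.state, result = transitions[(self.state, msg)]
            return result
        return "ignored"
```

Both handlers respond identically to any message sequence; only the second exposes the state structure that a reviewer of the model would want to see.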
Another reason why modeling is hard is that current modeling tools do not support appropriate refinement and abstraction mechanisms to gradually refine abstract (incomplete) system models down to a detailed design model from which the implementation can be derived. There is also no standardized, well-defined process for model refinement, and mechanisms are lacking to enforce or ensure that the detailed design model corresponds to the behavior of the more abstract models. If abstract views are defined in the early phases of modeling, they are destined to become stale and useless as development progresses. Therefore, unfortunately, software development is raised only to the level of abstraction that the code generator provides, but not higher. This is just another example of the ‘code twice’ phenomenon: when first attempting to deploy design verification through model simulation in our organization, we found that many development teams resisted because they felt that the additional effort required to make their design models operationally interpretable amounted to having to ‘code twice’, first at the level of the design and then when they wrote the code based on that design. In spite of significant demonstrated quality benefits, design simulation was not used until the ability to generate implementations automatically from designs became available. Users will only create the lowest-level artifact to which automation can be applied.

Finally, some system requirements are hard to model using UML. Examples of such requirements are security, availability, recoverability, and reliability. These concerns are often the critical differentiating features of the system and require expertise in specific domains. This problem is exacerbated when one raises the level of abstraction, because these concerns do not have their own higher-level abstractions and it is very unclear how they interact with the other abstract components. Therefore, these requirements are often only handled at the concrete level, where their implementation is necessarily scattered over the other components, requiring all developers to be aware of these concerns. Domain-specific modeling languages provide abstractions that exactly fit a particular domain and therefore can significantly simplify the modeling process for that domain. These languages contain concepts that directly map to the concepts of the domain itself. The cognitive distance from the problem domain to the software domain is drastically reduced, leading to fewer defects and less effort. Even concerns that are hard to modularize in general-purpose modeling languages such as UML can have explicit mechanisms in the form of domain-specific properties. In [19], [20], [21], [22] we discuss how aspect-oriented modeling together with domain-specific modeling languages can make it easier to develop models that are both easier to maintain and can serve as starting points for implementation. Aspect-oriented modeling offers an encapsulation and abstraction mechanism for features that are normally spread over multiple components of the system.

We also found it essential that tools be available that can identify situations in models that either are semantically incorrect or are semantically suspect (and are therefore a likely source of errors) although not technically incorrect, in order to aid developers in the modeling process. Applying this technique to more than 50 production models revealed that a typical model has many issues even after commercial modeling tools indicate that the model has no errors or warnings. On average, four to six major semantic errors and 1200 warnings are present. Figure 4 shows a categorization of the issues found. In this figure, ‘Range Bounds’ refers to using a value that is outside of the declared range for the type. Many of these range bounds issues relate to places where a developer has additional external information that should be made explicit in the model. The ‘Unbounded Integers’ issues refer to the model assuming that integers can be arbitrarily large, which, depending on the target platform, can lead to a system that is impossible to implement or unacceptably slow.

Figure 4 Aggregated results of semantic analysis of over 50 production models (75 % range bounds, 20 % unbounded integers, 2 % decision points, 3 % other; of the decision-point issues, 80 % underconstrained and 20 % statically computable). Of the 4-6 semantic errors present on average, the majority concern violation of range bounds as well as the reliance on the assumption that integers can be arbitrarily large. The remainder concerns the handling of conditions in branches
The ‘Decision Points’ issues relate to conditional branch points: ‘Underconstrained’ decision points are places where possible values of the decision expression are not covered in the branches, and ‘Statically Computable’ decision points imply that one or more of the decision branches contain dead code.
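Both decision-point issues can be shown in miniature. The fragments below are illustrative sketches (the function names and constants are ours, not from any analyzed model): the first branch structure is underconstrained, the second contains a branch that static analysis can prove dead.

```python
# Hypothetical examples of the two decision-point issue categories.

def classify_underconstrained(n):
    # Underconstrained: the branches cover only n == 0, 1, or 2; any other
    # value of the decision expression falls through uncovered and the
    # function silently returns None.
    if n == 0:
        return "idle"
    elif n == 1:
        return "active"
    elif n == 2:
        return "closing"

MAX_CHANNELS = 8

def check_statically_computable(requested):
    requested = min(requested, MAX_CHANNELS)  # requested is now <= MAX_CHANNELS
    if requested > MAX_CHANNELS:              # statically false: this branch
        return "reject"                       # is dead code
    return "accept"
```

A semantic checker flags the first case because the decision expression has uncovered values, and the second because one branch outcome is computable at analysis time.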

2.4 When should Modeling be Applied?

In most development organizations, a very large amount of legacy code exists. In telecommunications infrastructure, it is not unusual for a product to have a useful field life of well over a decade. This leads to two questions when considering modeling: (i) Can legacy code be integrated with models? (ii) When is it worth transitioning a product to modeling?

The answer to the first question is ‘technically yes’, but there are many challenges. For example, the interfaces between the model and the legacy code must not adversely affect the model. The lack of data abstraction and the use of pointers and memory management should not force the creation of a low-quality model. Reusing a small, well-encapsulated algorithm as an external operation is typically not a problem, but integration of large percentages of legacy code leads back to the second question. The second question only has a clear-cut (negative) answer if the product is at its end of life and no additional development and very little maintenance will be done. Note that automated reverse engineering is rarely a good long-term solution; tool support for reverse-engineering models works at the structural level of the code and not at the conceptual level. One would like a reverse-engineered model to be similar to one that would have been created from scratch, with state machines, good conceptual organization through the use of Composite Structure Diagrams, etc. Given the lack of tool support for this, it is more productive over the life of a product to create a model from scratch and to use the legacy code as the behavioral and performance benchmark by which the model is judged.

In the general case, a group must evaluate the expected return on investment (ROI) of each project based on established cost/benefit guidelines and the intended use of the models. For example, is it worth creating a full behavior model, or is it sufficient to create Sequence Diagrams and a high-level system architecture diagram? Any ROI calculation assumes that one can estimate the costs based on different development methodologies. This, in turn, implies that standardized metrics and baselines exist. In the case of modeling, this is largely not true. Some efforts have recently begun to create standardized metrics [23], but this work is still in its preliminary stages. Some projects may not be appropriate for full MDE with code generation, such as small, one-off projects with short field lives.
In a similar vein, projects must use the appropriate modeling methods and tools. For example, while it would be possible to model the layout of a GUI in UML, it would most likely be more effective to use a tool customized for that purpose. Organizational guidelines must be created to help determine which projects will use modeling and for the selection of supporting tools and methods. We have found that, in practice, the less variability in the recommendations the better. That is, as little choice as possible should be left to the developers. Economies of scale dictate that the more development projects use a single method and/or tool, the more leverage the overall organization has. In addition, common methods and tools lead to more transferable skills and experience.

2.5 Model-Size Metrics

Metrics are crucial not only for project management but also to be able to compare development methods. Source lines of code (SLOC) is often used as the size metric (the base unit) for code. However, the concept of lines of code does not readily apply to graphical modeling languages such as UML and SDL. SLOC deals with the end product, essentially averaging the relative contributions of the other phases of the life cycle. With modeling, two new possibilities emerge: to take into account the contributions of individual life-cycle phases, and to measure activities that do not directly equate to the creation of code (e.g. systems analysis). The questions that must be answered, then, are what is the size measure for models, and how should modeling artifacts such as Sequence Diagrams be accounted for, since they were not previously counted in SLOC? Also, one cannot cause a discontinuity by discarding the current SLOC-based metrics baseline. This leads to a follow-on question of how the model metrics can be related to the current SLOC-based metrics.

Simply counting the elements in a model to produce a single number will not provide a meaningful result – too much information regarding the relative contributions of different types of elements is lost. For example, Use Case and Sequence Diagram elements will have much less impact than State Machine elements when trying to determine how much code may be generated from a model, whereas they probably have as much or more impact when trying to determine the quality of the system. Thus, each element type should have an associated weight (possibly zero) for each context where an overall model metric may be used. A weight vector, consisting of all the element-type weights, can be created for each such context (e.g. a quality vector, a productivity vector, a code-generation vector, etc.). Expressed as an equation, the vector of model metric values is computed as

  ⎡M1⎤   ⎡W11 · · · W1m⎤   ⎡C1⎤
  ⎢ ⋮ ⎥ = ⎢ ⋮    ⋱    ⋮ ⎥ × ⎢ ⋮ ⎥
  ⎣Mn⎦   ⎣Wn1 · · · Wnm⎦   ⎣Cm⎦

that is, Mi = Wi1·C1 + ... + Wim·Cm for each metric i.
In this equation, Cj is the count of model elements of type j, Wij is the weight assigned to a model element of type j for a given metric i, and Mi is the resultant value for metric i. We have developed a tool that allows weight vectors to be defined and then applies them to the element counts in a model. Creating the weight vectors is, of course, a challenge given the lack of data on which to base the weights. Our preliminary data shows that several of the element counts are highly correlated (e.g. signals and transitions). Using statistical techniques to reduce the redundancy and to determine optimal weight values would be ideal, but would require large amounts of input data. Given the relatively small number of full modeling projects from which this data can currently be extracted, efforts such as ReMoDD [24] may be of great help in gathering and comparing weights across a wide variety of projects and benchmark models so that data can be validated externally.
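The weighted computation above can be sketched in a few lines. The element types, counts, and weight values below are illustrative placeholders, not calibrated data from our tool; the point is only the shape of the calculation, with one weight row per metric context.

```python
# Sketch of the weighted model-size metric M = W x C described above.
# All names and numbers are illustrative only.

element_types = ["state", "transition", "signal", "sequence_diagram"]

# Counts C extracted from a (hypothetical) model, one per element type
counts = [12, 40, 25, 6]

# One weight vector per context of interest (the rows of W); note that
# Sequence Diagrams weigh zero for code generation but count for quality
weights = {
    "code_generation": [3.0, 2.0, 1.0, 0.0],
    "quality":         [1.0, 1.0, 1.0, 2.0],
}

def model_metrics(weights, counts):
    """Compute Mi = sum over j of Wij * Cj for each metric context i."""
    return {name: sum(w * c for w, c in zip(row, counts))
            for name, row in weights.items()}

print(model_metrics(weights, counts))
```

Calibrating the weight rows is exactly the open problem discussed above: with so few full modeling projects, the values here could not yet be derived statistically.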

2.6 Inadequate Tool Support

Complete and integrated modeling-tool support for the entire life cycle – from requirements through field support – does not exist. The sheer number of target platforms, third-party development tools and libraries, operating environments, etc., ensures that no single tool chain will satisfy every need. While initiatives such as Eclipse help with some of the integration issues, other factors such as the lack of common semantics effectively block true interoperability. Semantic variations in UML and the reliance on profiles mean that no truly unified tool support can exist, at least not without heavy customization.

While commercial tools are available for various aspects of MDE, especially those centered around the model-creation and testing activities, significant gaps and shortcomings exist that need to be addressed through tool creation and customization. For example, Motorola devotes a considerable amount of resources to the development of the tools listed in Table 1. Not all these tools are available from commercial vendors, and integration between tools from different vendors is often lacking.

Given that some tool creation and customization is inevitable, it is very important to avoid duplication of effort in this area. Within an organization, some group must own the responsibility for portfolio management. The more encompassing within a company this management is, the greater the potential savings. The portfolio management team has several functions: to look for and minimize duplication of effort, to recommend tools and methods to be used, to find gaps in the current tool chains, to work with internal groups and with tool vendors to fill those gaps, to define common profiles, to define and develop appropriate training, etc. Through this coordinated effort, the benefits of tool and method usage can be maximized.

Given that tool vendors differentiate themselves by the profiles they use to close the semantic holes and variations in the UML specification, it would be detrimental for an organization to permit multiple tools to be deployed. Not only would the tool-specific skills then differ among engineers; even basic skills such as how to model an application would have to be specific to each development team. The cost, as well as the negative impact this would have on the ability of engineers to migrate between teams, would be a serious detriment. Naturally, vendors not selected attempt to circumvent efforts in centralization, and grass-roots users are often not pleased at having to give up the freedom to select a tool of their liking. Nevertheless, strict control of the tools deployed is essential to enable reuse between organizations.

Many, if not most, organizations that have significant modeling usage have had individual development teams produce some of their own tools. Once a customized tool exists, it can be costly and time-consuming to replace it. To transition to a standardized tool chain, one or more organizations will have to bear the cost. In addition to managing the tools portfolio, it helps if automated tools are created to aid the transition to common tools. For example, format translation tools can minimize the cost of moving from one proprietary language to another, and the move can be done as a one-time effort.

Requirements management: Extensions to commercial tools to provide a globally accessible repository for features, requirements, product configuration, estimates, test, and other information to support the requirement-management needs of marketing and engineering organizations throughout the product-planning and development life cycle.

Requirements validation: Creation of theorem provers and model checkers that examine system behavior and detect pathologies at the level of Sequence Diagrams and behavioral models.

Domain-specific modeling extensions: Extensions to the modeling languages for exception handling based on telecommunications needs, specialized languages for protocol specification, and the creation of specialized abstract data types typical of call handling.

Aspect-oriented modeling: Creation of tools and techniques to represent system aspects in design models.

Protocol specifications: Creation of tools for generation of signals, signal lists, and data-type definitions based on protocol specifications, generation of encoding and decoding functions, creation of custom interfaces to network analyzers, etc.

Model simulation based on standardized test languages: Creation of tools for the simulation and testing of UML and SDL behavioral models in the modeling environment itself based on standardized test notations such as TTCN-3.

Model analysis: Creation of tools for applying semantic rules that look for situations in models that either are semantically incorrect or are semantically suspect (and are therefore a likely source of errors) although not technically incorrect.

Full automatic code generation: Creation of tools for full rule-based generation of target code from SDL and UML models (and the related protocol specifications) with actions specified in a target-independent manner.

Test suite generation: Creation of tools that automatically generate complete test suites based on test specifications.

Test suite optimization: Creation of tools that determine optimally small test suites based on test specifications.

Test management: Creation of tools and glue code for the integration of test specifications and generated code with platform-specific test frameworks.

Traceability: Integration of third-party and locally created traceability solutions across all modeling artifacts and all phases of the development process.

Metrics collection and analysis: Creation of tools that analyze models to extract data relevant for the generation of metrics.

Report generation: Creation of tools for the customized export from commercial modeling tools of model reports in HTML, XML, MIF, etc., for review and documentation.

Configuration and build management: Integration of commercial modeling tools with third-party and locally created configuration-management tools, build tools, and databases.

Table 1 Modeling tool investments in Motorola
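As a toy illustration of the model-analysis category in Table 1 (semantic rules that flag suspect but not technically incorrect situations), the following sketch scans an invented state-machine representation for unreachable and dead-end states. The model format and rule are hypothetical and much simpler than production tooling.

```python
# Hypothetical model-analysis rule: flag states that are semantically
# suspect (unreachable, or with no outgoing transition) in a toy
# state-machine representation. Format and rule are invented for
# illustration; a real tool would, e.g., compute transitive reachability.

def find_suspect_states(transitions, states, initial, final_states):
    """Each transition is (source_state, trigger_signal, target_state)."""
    sources = {src for src, _, _ in transitions}
    targets = {dst for _, _, dst in transitions}
    # A non-final state with no outgoing transition is a likely error.
    dead_ends = [s for s in states if s not in sources and s not in final_states]
    # A non-initial state that no transition targets directly is unreachable.
    unreachable = [s for s in states if s not in targets and s != initial]
    return {"dead_end": dead_ends, "unreachable": unreachable}

transitions = [
    ("Idle", "Setup", "Connecting"),
    ("Connecting", "Connect_Ack", "Active"),
    ("Active", "Release", "Done"),
]
states = ["Idle", "Connecting", "Active", "Done", "Suspended"]
print(find_suspect_states(transitions, states, "Idle", {"Done"}))
```

Here the invented state "Suspended" is flagged on both counts: the model neither reaches it nor leaves it, which is exactly the kind of likely-error situation such rules surface during review.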

2.7 Platform Variability

As discussed in [14], a number of advantages and efficiencies accrue when product engineers focus on producing a model free of platform detail. Ultimately, a modeled system must be targeted to a platform in order to deploy the functionality provided by the model. The support offered by commercial modeling tools for platform targeting tends to be incomplete and very generic. The effort required for platform targeting is often underestimated and not fully appreciated when organizations consider MDE. Transforming a platform-independent model to a deployable model that satisfies quality-of-service and performance requirements in addition to functional requirements typically requires a broad and detailed base of expertise that comprises familiarity with model-based system development, the product domain or domains, and the various (perhaps proprietary) components of the platforms of interest. The nature of the required expertise is a principal reason why commercial off-the-shelf code generators are often unequal to the challenge without resorting to tuning by the vendor, which further marries an MDE effort to a particular supplier. In practice, we have found that the most effective approach is to delegate the responsibility for platform targeting to a code generator developed in-house, thereby treating the expertise required for platform targeting as part of the overall knowledge base codified by the code generator [25], [26]. Adopting such an approach implies that an internal code-generation team must be adequately and appropriately staffed to service platform-targeting requests from disparate product groups with aggressive schedules.

Because a platform is composed of both software and hardware, platform targeting requires considerable effort and a broad base of expertise. A software platform is composed of several components, including the operating system, runtime libraries, and middleware layers that provide services to a model-based system. There are multiple layers of functionality, and the components within the layers may each have multiple versions and multiple implementations based on varying performance characteristics (see Figure 5). To implement the functionality of a model, specific concrete implementations of each behavioral aspect of the model and modeling language must be chosen. For example, when a model sends messages on links between ports, that functionality must be explicitly implemented in the generated code. The selection of this implementation must take into account available operating system and middleware mechanisms, performance constraints, interoperability with the chosen memory-management scheme, etc. A combinatorial explosion of software platforms quickly results when several choices exist for each of the components of a software platform, and this explosion is exacerbated when multiple versions of a specific choice of software component also exist.

Figure 5 Mapping models to implementations. Specific concrete implementations must be chosen for each modeling concept among many possibilities provided by the platform. (In the figure, modeling concepts such as timers, signals/messages, and dedicated abstract APIs are mapped behind the scenes by the code generator's virtual platform onto platform implementations such as message queues, callback functions, protocol decoding/encoding, event reporting/logging, statistics, interrupt/signal handlers, checkpointing, database APIs, inter-process communication, thread/process management, timer management, memory management, and platform endian-ness.)

More concretely, consider what is required to implement the seemingly simple abstract UML operation of sending a signal via an external port to another system. The signal, after appropriate encoding and packing as a protocol data unit, must be transported to the destination system via some form of inter-process communication (IPC). Examples of IPC mechanisms include sockets, shared memory, message queues, and mailboxes. It is common for systems to use multiple IPC mechanisms, and each IPC mechanism may have different interfaces and semantics. Depending upon the underlying operating system, an IPC mechanism such as a message queue may have specific extensions that enable some operations to be executed more efficiently, such as attaching a priority to a message. When relevant, it is desirable to take advantage of such platform-specific extensions. An operating system may not fully comply with the standards that specify the semantics of an IPC mechanism. The semantics of an IPC mechanism may have aspects that are defined by their implementation and are therefore unique to a particular operating system and perhaps even vary across different versions of a particular operating system. The code generator must have knowledge of all these possibilities and the ability to generate the most efficient code for the target platform. When one considers that IPC is just one element of an overall platform message service, that a message service is just one of a dozen platform services typically required by telecommunication models, and that platform services are duplicated by multiple competing middleware offerings, the scope of the effort and expertise required to effectively target models to platforms for an organization may be more fully appreciated.

It is, of course, advantageous if common software platforms are defined and adhered to both within and across product families. Unfortunately, achieving reuse of a common software platform is often very difficult because of existing legacy code, time-to-market constraints, and organizational and political boundaries. Opportunities to converge on a common software platform must be anticipated well in advance and require input to product roadmaps from the organizations championing MDE.

The scope and requirements of a common software platform from the perspective of MDE are often not well understood. Middleware designers with previous experience in applications development using traditional languages are often tempted to provide a middleware API for services such as memory management and timer management on top of the low-level services provided natively by the operating system. This is typically unnecessary for MDE because these sorts of services are completely abstract to a proper platform-independent model, and such a ‘convenience’ API is therefore of no direct benefit to the product engineers constructing a model and is often at the wrong level of abstraction to be effectively reused by the code generator. The existence of competing software platforms with similar but divergent service interfaces has proved to be a significant impediment to the speed and scope of modeling penetration within Motorola.

The ultimate resolution of the issue of platform variability lies in conformance to a small number of fully implemented standards that range from lower-level operating system standards, such as UNIX 03, to higher-level middleware standards that are MDE-aware and span entire industries, such as a high-availability telecommunications infrastructure. Considerable progress and maturity have been achieved in the case of the former. In the case of the latter, much remains to be done.
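The virtual-platform idea can be illustrated with a toy sketch: model-level code calls one abstract send operation, while a transport backend selected for the target platform supplies the concrete IPC mechanism. Everything here (the class names, the JSON stand-in for real protocol encoding) is an invented illustration, not Motorola's generator.

```python
# Hypothetical sketch of a "code generator virtual platform": generated
# code sends signals through one abstract operation, and a backend chosen
# for the target platform supplies the concrete IPC mechanism.
import json
import queue
import socket

class QueueTransport:
    """In-process message queue, e.g. for co-located threads."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, data):
        self._q.put(data)
    def receive(self):
        return self._q.get()

class SocketTransport:
    """Connected socket pair, standing in for distributed IPC."""
    def __init__(self):
        self._tx, self._rx = socket.socketpair()
    def send(self, data):
        # Length-prefix the payload so message boundaries survive the stream.
        self._tx.sendall(len(data).to_bytes(4, "big") + data)
    def receive(self):
        n = int.from_bytes(self._recv_exact(4), "big")
        return self._recv_exact(n)
    def _recv_exact(self, n):
        buf = b""
        while len(buf) < n:
            buf += self._rx.recv(n - len(buf))
        return buf

def send_signal(transport, name, **params):
    # A real code generator would emit protocol-specific PDU encoding here;
    # JSON is only a stand-in.
    transport.send(json.dumps({"signal": name, "params": params}).encode())

def receive_signal(transport):
    return json.loads(transport.receive().decode())

for transport in (QueueTransport(), SocketTransport()):
    send_signal(transport, "Connect_Req", subscriber="555-0101")
    print(receive_signal(transport))
```

The point of the sketch is that the model-facing API (send_signal) is identical across backends; knowledge of queue versus socket semantics, message framing, and platform quirks lives entirely in the generated transport layer.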

2.8 Non-Technical Challenges

Many of the barriers to MDE adoption are more organizational or perceptual than technical. One common perception that relates to the modeling community as a whole is that MDE is not up to the challenge of meeting the needs of real-time systems. That is, there is a perception that for systems that must be ‘programmed down to the metal’, modeling and code generation are of little use. The publication of case studies from applicable industrial projects would help dispel this perception. It is unfortunate that most modeling examples are either at the systems level, involve very simple models (e.g. the classic vending machine), or lack sufficient detail for one to be able to draw conclusions about the applicability of the techniques to a different organization. An effort to build a repository of models for comparison purposes, such as ReMoDD [24], would also help in this area.

Many legacy processes exist, and each has its own tool support, metrics, etc. It is unrealistic to expect an organization to abandon its baseline of metrics data when a new paradigm such as modeling is applied. However, there is essentially no support available to bridge the gap between management of traditional development projects and modeling projects. Establishing common metrics definitions for modeling, especially for fundamental concepts such as model size, is critical. Once standardized definitions are available, tool support can be created more easily.

There is a lack of modeling training targeted to senior leaders, middle managers, and developers. Each of these groups has different concerns and different evaluation criteria. Senior leaders are most concerned with how the proposed changes will affect the bottom line in terms of cost, quality, or cycle time. Middle managers are most concerned with how one can tell if a project is on track, the additional skill sets and training necessary, how the personnel loading curve will look, and the tool support that is needed. Developers are most concerned with the details of the development process, such as how the modeling languages and tools are used. We recommend that a modeling curriculum for each of these separate groups be implemented. Each organization can apply this curriculum to its specific needs and roll out training through a central organization. Along these same lines, though, there is a general lack of in-house, experienced trainers and consultants. While vendors can provide training and consultation, such training is typically too focused on the tools and languages and not enough on the process and domain relevant to the development organization. An organization must make the most use of what expertise is available and must widely disseminate best practices. It is highly recommended that a modeling users group be established as a general forum for questions and reporting experiences. Establishing criteria for modeling expertise will also help minimize the problem of self-proclaimed experts steering organizations in the wrong direction.

A pervasive attitude in software development in general, and one that is particularly vexing to organizations trying to use modeling, is that software development is perceived to be an art, not a science. A similar attitude, but from the organizational side, is holding on a pedestal the coder who regularly performs last-minute heroics to save a project. This reinforces the behavior that one can make a career of fixing the software, and downplays the organizational problems that led to the need for the heroics in the first place. From this perspective, it is important to establish official recognition and a career path for the modeler as expert. It is also important to emphasize that modeling is good for the developer as implementation, especially coding, becomes more and more of a commodity skill. That is, instead of viewing modeling as a career killer for coders, it should be viewed as a chance to highlight the developer’s knowledge of the domain and system. For example, it is expected that all programmers know how to traverse a linked list, but knowledge of how to correctly handle error conditions in a cellular base station is relatively rare.

Another pervasive attitude is the stigma attached to efforts from internal organizations. Little credibility is typically given to processes or tools that are neither from within the individual development team nor from a credible external source. If not developed by the particular team (‘not invented here’), any process or tool not backed by a highly visible external entity (either de facto standard organizations, such as the Software Engineering Institute, or leading tool vendors) is suspect. Modeling has no such entity focused on process, and there does not appear to be any organized direction from tool vendors. The lack of easily accessible, published reports describing the experience gained from the deployment of modeling and the benefits obtained (or not) greatly hampers widespread roll out. An industry consortium is needed that can collect and disseminate modeling metrics, best practices, benchmarks, required capabilities and training, and so on.

A major rollout issue is that many mid-level managers are risk-averse. There is often little reward for innovative success, but a large penalty for failure. Without a high-level push from senior management, it is very difficult to roll out modeling on a large scale. It is critical to collect metrics that are relevant to the types of projects on which an organization works and to use those metrics to educate management. These metrics can be used to establish scorecard goals that reward success and penalize the status quo.

Issues about tool usage also exist: the diversity of the tools landscape is daunting, and costs can be high for both internal and external tools and the associated training. Creating ‘cookbooks’ to aid in navigating the tool and process choices would help, as would a much increased level of tool interoperability. High tool costs are not in themselves bad, but they do require establishing savings and ROI models to justify them. In general, there are hidden costs in development processes, and the perceived costs of internal and external tools may not match reality.

3 Summary

In earlier articles [14], [16] we have described the benefits that Motorola has obtained from the deployment of modeling in the development of high-reliability telecommunications systems. The benefits afforded by modeling result in both quality improvements and productivity improvements. In the development of telecommunication systems, a large portion of the developed software, in our estimate at least 75% of the total, is amenable to leveraging MDE techniques.

We have summarized common roadblocks we have encountered when deploying modeling. Among the technical challenges, there is still a lack of modeling skills among engineers; standard process elements, such as metrics, have not yet been refined for modeling; and fully adequate tool support is often not available. Platform variability and the presence of large amounts of legacy code also tend to stand in the way of deploying MDE, as do organizational and perceptual barriers. We have outlined strategies we have implemented to work around those roadblocks. Many of the individual solutions are mentioned above; they are summed up in Table 2. Within Motorola, many of these recommendations have been successfully implemented. In addition, significant investment in several research programs was required to realize this vision; these research programs focused on making modeling easier and more accessible to our engineering population [27], [28], [1], [29], verifying the correctness of the developed models at the requirements and design stages [30], [31], [32], [33], enabling the generation of product-quality code from these models [25], [34], [26], and establishing the conformance of the generated code to the system requirements [35], [36].

Create an internal governance board that sets tool recommendations and establishes process-selection guidelines and an exception process (Sections 2.1, 2.2, 2.4, 2.6)

Create and participate in an industry-wide modeling consortium (Section 2.8)

Make training and information on processes, languages, tools, etc., readily available, and make it known who the experts are (Sections 2.3, 2.6)

Establish an internal modeling advisory board to collect, prioritize, and disseminate best practices, metrics, tool and vendor issues, etc. (Sections 2.4, 2.5, 2.6)

Establish dialogs with senior management based on established metrics and ROI projections (Section 2.8)

Create and use domain-specific languages where they make sense; participate in language standardization (Section 2.1)

Capture platform expertise in an internally developed code generator (Section 2.7)

Table 2 Solutions summary. Concrete steps taken in Motorola to address issues as described in the indicated sections of this article

The reference section below includes pointers to additional detail discussing our experience in applying model-driven engineering to the development of large telecommunications systems.

References

1 Weigert, T, Reed, R. Specifying Telecommunications Systems with UML. In: Lavagno, L, Martin, G, Selic, B. UML for Real: Design of Embedded Real-Time Systems. Amsterdam, Kluwer, 2003.

2 Weil, F, Weigert, T. Guidelines for Using SDL in Product Development. Proc. 4th International SDL and MSC Workshop: System Analysis and Modeling. Lecture Notes in Computer Science, 3319, Springer, 2004.

3 Haugen, Ø, Møller-Pedersen, B, Weigert, T. Introduction to UML and the Modeling of Embedded Systems. In: Zurawski, R. The Embedded Systems Handbook. Miami, CRC Press, 2005.

4 ITU. SDL Combined with UML. Geneva, International Telecommunications Union, 2006. (ITU-T Rec. Z.109)

5 ITU. Abstract Syntax Notation One (ASN.1). Geneva, International Telecommunications Union, 2002. (ITU-T Rec. X.680)

6 Letichevsky, A et al. System Specification with Basic Protocols. Cybernetics and Systems Analysis, 41 (4), 479-493, 2005.

7 Baker, P, Rudolph, E, Schieferdecker, I. Graphical Test Specification – The Graphical Format of TTCN-3. Proc. 10th International SDL Forum: Meeting UML. Lecture Notes in Computer Science, 2078, 148-167, Springer, 2001.

8 Baker, P et al. A Message Sequence Chart-Profile for Graphical Test Specification, Development and Tracing – Graphical Presentation Format for TTCN-3. Proc. 18th International Conference on Testing Computer Software, 2001.

9 Weigert, T, Dietz, P. Automated Generation of Marshalling Code from High-Level Specifications. SDL 2003: System Design. Lecture Notes in Computer Science, 2708, Springer, 2003.

10 Baker, P et al. Model-Driven Engineering Testing Environments. Proc. UK Software Testing Research III, 185-196, Sheffield, September, 2005.

11 Neczwid, A, Weigert, T, Weil, F. Rigorous Requirements Specification and Validation. Proc. Structured Development Forum XIII, Philadelphia, September, 1993.

12 ITU. Specification and Description Language. Geneva, International Telecommunications Union, 2000. (ITU-T Rec. Z.100)

13 Baker, P, Loh, S, Weil, F. Model-Driven Engineering in a Large Industrial Context – Motorola Case Study. Proc. 2005 MoDELS Conference. Lecture Notes in Computer Science, 3713, Springer, 2005.

14 Weigert, T, Weil, F. Practical Experiences in Using Model-Driven Engineering to Develop Trustworthy Computing Systems. Proc. 2006 IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing, Taichung, 208-217, June, 2006.

15 OMG. OMG Systems Modeling Language (SysML). Object Management Group, September, 2007. (SysML Specification v1.0, formal/07-09-01)

16 Weigert, T et al. Experiences in Deploying Model-Driven Engineering. Proc. 13th International SDL Forum: Design for Dependable Systems. Lecture Notes in Computer Science, 4745, 35-53, Springer, 2007.

17 Kleppe, A, Warmer, J, Bast, W. MDA Explained: The Model Driven Architecture – Practice and Promise. Boston, Addison-Wesley, 2003.

18 Mellor, S, Scott, K, Uhl, A, Weise, D. MDA Distilled: Principles of Model-Driven Architecture. Boston, Addison-Wesley, 2004.

19 Zhang, J, Cottenier, T, van den Berg, A, Gray, J. Aspect Interference and Composition in the Motorola Aspect-Oriented Modeling Weaver. Proc. 9th International Workshop on Aspect-Oriented Modeling at the 9th International Conference on Model Driven Engineering Languages and Systems, Genova, October, 2006.

20 Cottenier, T, van den Berg, A, Elrad, T. Model Weaving: Bridging the Divide between Translationists and Elaborationists. Proc. 9th International Workshop on Aspect-Oriented Modeling at the 9th International Conference on Model Driven Engineering Languages and Systems, Genova, October, 2006.

21 Cottenier, T, van den Berg, A, Elrad, T. The Motorola WEAVR: Model Weaving in a Large Industrial Context. Proc. Industry Track at the 6th International Conference on Aspect-Oriented Software Development (AOSD’06), Vancouver, March, 2007.

22 Cottenier, T, van den Berg, A, Elrad, T. Stateful Aspects: The Case for Aspect-Oriented Modeling. Proc. 10th International Workshop on Aspect-Oriented Modeling at the 6th International Conference on Aspect-Oriented Software Development (AOSD’06), Vancouver, March, 2007.

23 Weil, F, Neczwid, A. Summary of the 2006 Model Size Metrics Workshop. Models in Software Engineering. Lecture Notes in Computer Science, 4364, 205-210, Springer, 2007.

24 France, R, Bieman, J, Cheng, B. Repository for Model Driven Development (ReMoDD). Models in Software Engineering – Workshops and Symposia at MoDELS 2006, Reports and Revised Selected Papers. Lecture Notes in Computer Science, 4364, 311-317, Springer, 2007.

25 Boyle, J, Harmer, T, Weigert, T, Weil, F. Knowledge-Based Derivation of Programs from Specifications. In: Bourbakis, N. Artificial Intelligence and Automation. Singapore, World Scientific Publishers, 1996.

26 Dietz, P et al. Practical Considerations in Automatic Code Generation. In: Tsai, J, Zhang, D. Advances in Machine Learning Applications in Software Engineering. Hershey, Idea Group Publisher, 2006.

27 Selic, B et al. SDL as UML: Why and What? Proc. 2nd International Conference on the Unified Modeling Language. Lecture Notes in Computer Science, 1723, 446-456, Springer, 1999.

28 Garlan, D et al. Modeling of Architectures with UML. Proc. 3rd International Conference on the Unified Modeling Language. Lecture Notes in Computer Science, 1939, Springer, 2000.

29 Haugen, Ø, Møller-Pedersen, B, Weigert, T. Structural Modeling with UML 2. In: Lavagno, L, Martin, G, Selic, B. UML for Real: Design of Embedded Real-Time Systems. Amsterdam, Kluwer Academic Publisher, 2003.

30 Baranov, S, Kotlyarov, V, Letichevsky, A, Weigert, T. Leveraging UML to Deliver Correct Telecom Applications. In: Lavagno, L, Martin, G, Selic, B. UML for Real: Design of Embedded Real-Time Systems. Amsterdam, Kluwer Academic Publisher, 2003.

31 Letichevsky, A et al. Basic Protocols, Message Sequence Charts, and the Verification of Requirements Specifications. Proc. 15th International Conference on Software Reliability Engineering, Rennes, November, 2004.

32 Letichevsky, A et al. Basic Protocols, Message Sequence Charts, and the Verification of Requirements Specifications. Computer Networks, 47, 2005.

33 Kapitonova, J, Letichevsky, A, Volkov, V, Weigert, T. Validation of Embedded Systems. In: Zurawski, R. The Embedded Systems Handbook. Miami, CRC Press, 2005.

34 Weigert, T. Lessons Learned from Deploying Code Generation in Industrial Projects. Proc. International Workshop on Software Transformation Systems, International Conference on Software Engineering, Los Angeles, 1999.

35 Baker, P et al. Automatic Generation of Conformance Tests from Message Sequence Charts. SDL Workshop 2002, Telecommunications and Beyond: The Broader Applicability of SDL and MSC. Lecture Notes in Computer Science, 2599, 170-198, Springer, 2003.

36 Rao, B, Timmaraju, K, Weigert, T. Network Element Testing using TTCN-3: Benefits and Comparison. Proc. 12th International SDL Forum: Integration of System Design Languages. Lecture Notes in Computer Science, 3530, Springer, 2005.

Thomas Weigert is Professor of Computer Science and St. Clair Endowed Chair at the Missouri University of Science and Technology. Prior to joining MS&T, he was Fellow and Vice President with Motorola where he was responsible for the development and provisioning of all common engineering tools and for the coordination of engineering tool development and investments across Motorola. Before joining Motorola, he was Assistant Professor of Mathematics at the Johannes Kepler University in Linz, Austria, and held visiting positions at the Electrotechnical Laboratories, Tsukuba, Japan, and Argonne National Laboratory, Argonne, USA. He is the author of a textbook, seven international standards, and over forty refereed publications on the application of Artificial Intelligence techniques to the development of product software, in particular for real-time distributed systems. He received his PhD, MSc, and MA degrees from the University of Illinois, and an MBA from Northwestern University. [email protected]

Frank Weil is Chief Architect and Operations Director of the Motorola Software Engineering and Tools Technology Group. He has worked on various aspects of MDE for 20 years, including the areas of modeling languages, automatic code generation from design models, and various related technologies such as model checking and feature interaction. He has consulted with numerous diverse product groups on their MDE projects. Frank has authored approximately 20 refereed publications spanning a wide range of software engineering topics. He has served on several Motorola corporate technical advisory boards, including being Chair of the MDE Technical Advisory Board. He is an Industry Advisor of ReMoDD and has served in various capacities with numerous conferences. Frank received his doctorate from Purdue University. [email protected]

Kevin Marth is a Distinguished Member of the Technical Staff at Motorola. He has over 20 years of industrial software experience developing and deploying cellular infrastructure products and software engineering technologies, particularly automatic code generation. Kevin is a PhD candidate at the Illinois Institute of Technology. His research interests include programming paradigms, languages, and compilers, as well as concurrent, distributed, and parallel programming. [email protected]
