Please refer to www.henrymuccini.com for the most up-to-date information.
A Software Architecture (SA) can be considered the earliest model of the whole software system created along the software lifecycle. Following a "traditional" definition, an SA consists of a set of components and connectors communicating through interfaces. From a different perspective, an SA consists of the set of architecture design decisions taken to generate the architecture artifact. The two definitions are not in conflict; they are simply orthogonal. In either case, once an SA is identified, it needs to be described through an architectural language (AL).
Interoperability among different ALs: many ALs have been proposed over the last fifteen years, each with the chief aim of becoming the ideal language for specifying software architectures. What is evident nowadays, instead, is that architectural languages are shaped by stakeholder concerns.
Capturing all such concerns within a single, narrowly focused notation is impossible. At the same time, it is also impractical to define and use a "universal" notation, such as UML. As a result, many domain-specific notations for architectural modeling have been proposed, each focusing on a specific application domain, analysis type, or modeling environment. As a drawback, a proliferation of languages exists, each with its own notation, tools, and domain specificity. Therefore, if software architects have to model a concern not supported by their own language/tool, they have to manually transform (and possibly keep aligned) the available architectural specification into the required language/tool.
Our solution to this problem is DUALLY, an automated framework that enables interoperability among architectural languages and tools. Given a number of architectural languages and tools, they can all interoperate thanks to automated model transformation techniques. DUALLY is implemented as an Eclipse plugin.
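The interoperability idea can be sketched as follows: rather than writing a direct transformation between every pair of languages, each AL is bridged to and from a shared pivot notation, so n languages need 2n transformations instead of n*(n-1). The element kinds, model structure, and function names below are illustrative assumptions, not DUALLY's actual API.

```python
# Pivot-based interoperability sketch (assumed names and structure):
# each AL maps its own element kinds onto shared pivot concepts.

def al1_to_pivot(model):
    # map AL1-specific element kinds onto pivot concepts
    kind_map = {"Comp": "component", "Link": "connector"}
    return [{"concept": kind_map[e["kind"]], "name": e["name"]} for e in model]

def pivot_to_al2(pivot):
    # map pivot concepts onto AL2-specific element kinds
    kind_map = {"component": "Module", "connector": "Channel"}
    return [{"kind": kind_map[e["concept"]], "name": e["name"]} for e in pivot]

al1_model = [{"kind": "Comp", "name": "Client"},
             {"kind": "Link", "name": "HTTP"}]
al2_model = pivot_to_al2(al1_to_pivot(al1_model))
print(al2_model)
```

Adding a new language then only requires defining its two bridges to the pivot, leaving all existing languages untouched.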
For more information on the approach, you may refer to [IEEE_TSE2010]. For more information on the tool that supports the approach, please refer to the "software" page on my website, or directly to the DUALLY website.
ALs extension and customization: despite the flourishing of languages to describe software architectures, existing ALs are still far from what is actually needed. While they support the traditional perception of an SA as a set of constituent elements (such as components, connectors, and interfaces), they mostly fail to capture the multiple stakeholder concerns and design decisions that represent the broader view of SA accepted today. Next-generation ALs must cope with diverse and ever-evolving stakeholder concerns by employing semantic extension mechanisms.
In our research we propose a framework, called BYADL – Build Your A(D)L, for developing a new generation of ALs. BYADL exploits model-driven techniques that allow software architects, starting from an existing AL, to define their own new-generation AL by: i) adding domain specificities, new architectural views, or analysis aspects; ii) integrating ALs with development processes and methodologies; and iii) customizing ALs by fine-tuning them.
For more information on the approach, you may refer to [ICSE2010]. For more information on the tool that supports the approach, please refer to the "software" page on my website, or directly to the BYADL website.
Architecture Design Decisions and Viewpoints: architectural design decisions (i.e., those decisions made when architecting software systems) are considered an essential piece of knowledge to be carefully documented and maintained. Like any other artifact, architectural design decisions may evolve, impacting other design decisions or related artifacts (like requirements and architectural elements). It is therefore important to document and analyze the impact of an evolving decision on other related decisions or artifacts. In our research work we propose an approach based on a notation-independent metamodel that becomes a means for systematically defining traceability links, enabling inter-decision and extra-decision evolution impact analysis. The purpose of such an analysis is to check for inconsistencies that may arise during evolution. An Eclipse plugin has been realized to implement the approach.
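The core of such an impact analysis can be illustrated with a small sketch: decisions and related artifacts form a graph whose edges are traceability links, and the impact set of an evolving decision is everything reachable from it. The node names and link structure below are purely illustrative assumptions, not the actual metamodel.

```python
# Illustrative sketch of traceability-based impact analysis:
# nodes are decisions (D*), requirements (R*), and components (C*);
# edges are assumed traceability links between them.
from collections import deque

links = {
    "D1": ["D2", "R1"],   # decision D1 constrains D2 and requirement R1
    "D2": ["C1"],         # decision D2 is realized by component C1
    "R1": [],
    "C1": [],
}

def impact(node):
    """Return every element reachable via traceability links."""
    seen, todo = set(), deque([node])
    while todo:
        n = todo.popleft()
        for m in links.get(n, []):
            if m not in seen:
                seen.add(m)
                todo.append(m)
    return seen

print(impact("D1"))   # elements to re-check when D1 evolves
```

Inter-decision analysis corresponds to following links among decisions only (D1 to D2), while extra-decision analysis follows links out to requirements and architectural elements (R1, C1).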
For more information on the approach, you may refer to [SERENE 2011].
The use of multiple views has become standard practice in industry. Academic research and existing ALs have focused predominantly on the structural view (i.e., components and connectors) and sometimes on behaviour at the architectural level. They have offered limited support for the needs of stakeholders with different concerns, such as data management, safety, security, reliability, and so on.
One consequence of the tenet of using multiple views is the growing body of viewpoints that have become available. A second consequence is the rise of architecture frameworks as coordinated sets of viewpoints. Current frameworks tend to be closed; as a result, (i) it is difficult to reuse viewpoints and concerns when defining new frameworks for different organizations or domains, and (ii) it is impossible to define consistency rules among viewpoints once and for all, since such rules are no more reusable than the artefacts they relate.
To take a step towards overcoming these limitations, the goal of this research is to provide an infrastructure, called MEGAF, for building reusable architecture frameworks.
For more information on the approach, you may refer to [ASE2010]. For more information on the tool that supports the approach, please refer to the "software" page on my website, or directly to the MEGAF website.
In contemporary domains (e.g., logistics and health care) dependability plays a crucial role, since failures can cause severe consequences and even endanger human life. Since a Software Architecture offers a high-level system design, it can contribute to improving the overall system dependability, providing a system blueprint that can be validated and that can guide the system development. In my research, I have been using SAs for three different purposes: a) to test the system's conformance to architectural descriptions and decisions, b) to assess the architectural description against expected behavioral scenarios, and c) to drive the monitoring of dynamically evolving systems.
Architecture-based Testing: this research deals with the use of an SA as a reference model for testing the conformance of an implemented system with respect to its architectural specification. We exploit the specification of SA dynamics to identify useful schemes of interactions between system components and to select test classes corresponding to relevant architectural behaviors. The SA dynamics is modeled by Labeled Transition Systems (LTSs). The approach consists of deriving suitable LTS abstractions called ALTSs. ALTSs offer specific views of SA dynamics by concentrating on relevant features and abstracting away from uninteresting ones. Intuitively, deriving an adequate set of test classes entails deriving a set of paths that appropriately cover the ALTS. Next, a relation between these abstract SA tests and more concrete, executable tests needs to be established, so that the architectural tests derived can be refined into code-level tests.
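The abstraction step can be sketched under simplifying assumptions: an LTS as a list of labelled transitions, where labels irrelevant to the chosen testing concern are hidden as internal "tau" steps, and the observable transitions of the resulting ALTS drive test-class selection. The state and label names below are invented for illustration, and a real ALTS derivation would also minimize the abstracted LTS.

```python
# Sketch of LTS abstraction for architecture-based testing (assumed
# example): keep only labels relevant to the testing concern, hide the
# rest as "tau", then cover the observable transitions with test paths.
lts = [("s0", "login", "s1"), ("s1", "log", "s1"),
       ("s1", "query", "s2"), ("s2", "logout", "s0")]

relevant = {"login", "query", "logout"}   # the concern under test

def abstract(transitions, keep):
    # hide uninteresting labels; a full ALTS derivation would also minimize
    return [(s, a if a in keep else "tau", t) for s, a, t in transitions]

alts = abstract(lts, relevant)
observable = [(s, a, t) for s, a, t in alts if a != "tau"]
print(observable)   # transitions a covering set of test paths must exercise
```

Each path covering these observable transitions is an abstract architectural test, later refined into a concrete, executable code-level test.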
For more information on the approach, you may refer to the [TSE2001] paper.
Formal Analysis of SA specifications: Charmy is a framework for designing and validating architectural specifications. Introduced in the early stages of the software development process, the Charmy framework assists the software architect in the design and validation phases. Once the model-based architectural design is completed, a prototype is automatically created for simulation and analysis purposes; desired properties, specified using a graphical notation close to UML sequence diagrams, are checked on the prototype by means of model checking techniques.
To make it useful in an industrial context, the framework relies on UML-based graphical notations hiding most of the complexity of the modeling and analysis process. Charmy has been used in different industrial studies and results have been collected and discussed.
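The verification step can be illustrated with a minimal explicit-state exploration: exhaustively walk the prototype's state space up to a bound and report a counterexample trace if the property is violated. This is an assumed toy illustration of model checking in general, not Charmy's actual engine or property notation.

```python
# Toy explicit-state check (assumed example, not Charmy's engine):
# verify "no 'send' before 'open'" on all bounded executions of a
# small protocol prototype; return a counterexample trace if found.
transitions = {("idle", "open"): "ready",
               ("ready", "send"): "ready",
               ("ready", "close"): "idle"}

def violates(trace):
    opened = False
    for action in trace:
        if action == "open":
            opened = True
        if action == "send" and not opened:
            return True
    return False

def explore(state, trace, depth):
    if violates(trace):
        return trace                      # counterexample found
    if depth == 0:
        return None
    for (s, a), t in transitions.items():
        if s == state:
            bad = explore(t, trace + [a], depth - 1)
            if bad:
                return bad
    return None

print(explore("idle", [], 5))   # None: the property holds up to depth 5
```

Real model checkers replace the bounded recursion with exhaustive state-space algorithms and accept temporal-logic properties; in Charmy these are drawn in a graphical notation close to UML sequence diagrams.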
For more information on the approach, you may refer to [IEEE TSE]. For more information on the tool that supports the approach, please refer to the "software" page on my website, or directly to the CHARMY website.
Monitoring dynamically evolving systems: in run-time evolving systems, components may evolve while the system is operating. Unsafe run-time changes may compromise the correct execution of the entire system. Traditional design-time verification techniques struggle to cope with run-time changes, and run-time monitoring may detect malfunctions only too late, when the failure has already arisen. What is desired are advanced monitors able to predict and prevent potential errors before they happen. In this direction, we propose CASSANDRA, a new approach that, by combining design-time and run-time analysis techniques, can "look ahead" into the near execution future and predict potential failures. At run time we construct on the fly a model of the future k-step global state space according to design-time specifications and the current execution state. Consequently, we can check at run time whether failures might happen in the future.
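The "look ahead" idea can be sketched as a bounded forward exploration: from the current run-time state, expand the design-time transition relation for k steps and flag any reachable state satisfying a failure predicate. The toy system below (a buffer that overflows beyond capacity 3) and all names are illustrative assumptions, not CASSANDRA's actual machinery.

```python
# k-step lookahead sketch (assumed example): build the bounded future
# state space from the current state and report predicted failure states.
def lookahead(current, step, is_failure, k):
    frontier, seen, risky = {current}, {current}, set()
    for _ in range(k):
        # expand one step, ignoring states already explored
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
        risky |= {s for s in frontier if is_failure(s)}
    return risky

# toy system: state = buffer occupancy; one step adds or removes an item
step = lambda n: {n + 1, max(n - 1, 0)}
print(lookahead(1, step, lambda n: n > 3, 3))   # {4}: predicted overflow
```

A monitor built this way can react before the overflow occurs, e.g. by throttling producers, instead of merely reporting the failure after the fact.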
For more information on the approach, you may refer to the [ASE 2011] paper.
Global SE Education: to teach global software engineering (GSE) we devised a complementary distributed module with a shared project involving both local and international teams. In local teams, students are located at the same university and trained in one of two complementary topics. In international teams, students are located at two different universities and trained in one of the two complementary topics. This study empirically investigates whether the students in international teams can compensate for the extra effort required to deal with the communication, coordination, and collaboration issues that characterise GSE projects through learning by osmosis (i.e., by transferring knowledge among globally distributed teams trained on different topics).
The results show that there was no statistically significant difference between the performance of local and international teams. We assert that the students in the international and local teams perform equally well thanks to learning by osmosis. However, our analysis of the self-reported questionnaire data revealed that most of the participants (70%) would like to work in local teams in real-life projects, 74% of the participants thought international teams were less efficient, and 41% of the participants reported a lack of trust in their international team members compared with their local team members.
For more information on the approach, you may refer to the [Journal of Software Maintenance and Evolution: Research and Practice] paper.
Wireless Sensor Networks (WSNs) are large and dense networks made up of low-data-rate, short-range, low-cost, and low-power (i.e., battery-operated) wireless components typically called sensor nodes. A sensor node is a small digital device with very limited processing, communication, and sensing capabilities. Sensor nodes are deployed, possibly over very large areas, either randomly or on the basis of precise planning, in order to collect specific environmental information (e.g., temperature, light, pressure, movement) and return the results to a collection point by means of wireless communication.
Architectural Description and Energy Simulation: Wireless Sensor Networks represent a very promising technology both in research and in practice. However, their development process still heavily relies on the technologies used to realize them (e.g., programming language, hardware platform) and demands very specific skills of software developers. These peculiarities hamper the use of Wireless Sensor Networks in real applications and slow down the growth of the Wireless Sensor Networks community. As claimed in many recent surveys on WSNs, the development of wireless sensor networks needs to be more abstract (i.e., developers should be able, if needed, to abstract from low-level details), should be supported by verification and validation techniques, and should be disciplined by frameworks with well-defined processes.
In this context, we are proposing an architecture modelling framework to describe the software architecture of a WSN with a focus on node behaviour and communication properties. The framework is complemented with an interactive tool both for describing the physical environment in which the WSN will be deployed (i.e., walls, obstacles, and the material they are made of) and for virtually positioning the nodes in that environment. An internal engine simulates the virtually positioned WSN with respect to its path-loss properties; this yields an accurate evaluation of the WSN lifetime in terms of the energy consumed for node communication.
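The kind of energy estimate such an engine produces can be sketched with a standard first-order radio model of the form E = E_elec + E_amp * d**alpha, where distance-dependent amplifier cost captures path loss. The constants and the free-space exponent below are textbook-style assumptions for illustration, not the tool's actual calibration.

```python
# Per-link energy sketch under an assumed first-order radio model:
# transmit cost grows with distance**alpha due to path loss.
E_ELEC = 50e-9      # electronics energy per bit (J/bit), assumed
E_AMP = 100e-12     # amplifier energy per bit per m**alpha, assumed
ALPHA = 2           # path-loss exponent (2 = free space)

def tx_energy(bits, distance):
    # energy to transmit `bits` over `distance` metres
    return bits * (E_ELEC + E_AMP * distance ** ALPHA)

def rx_energy(bits):
    # receiving only pays the electronics cost
    return bits * E_ELEC

# energy for one 1000-bit packet over a 50 m link (sender + receiver)
total = tx_energy(1000, 50) + rx_energy(1000)
print(round(total * 1e6, 2), "microjoules")
```

Summing such per-link costs over the simulated traffic, with exponents and constants adjusted for walls and obstacle materials, gives a lifetime estimate for each virtually positioned node.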
For more information on the approach, you may refer to the [SESENA 2012] paper.