% 2023
@conference{Giner-Miguelez:CIKM:2023,
title = {DataDoc Analyzer: A Tool for Analyzing the Documentation of Scientific Datasets},
author = {Joan Giner-Miguelez and Abel G\'{o}mez and Jordi Cabot},
doi = {10.1145/3583780.3614737},
isbn = {9798400701245},
year = {2023},
date = {2023-10-01},
booktitle = {Proceedings of the 32nd ACM International Conference on Information and Knowledge Management},
pages = {5046\textendash5050},
publisher = {Association for Computing Machinery},
address = {Birmingham, United Kingdom},
series = {CIKM '23},
abstract = {Recent public regulatory initiatives and relevant voices in the ML community have identified the need to document datasets according to several dimensions to ensure the fairness and trustworthiness of machine learning systems. In this sense, data-sharing practices in the scientific field have been evolving quickly in recent years, with more and more research works publishing technical documentation together with the data for replicability purposes. However, this documentation is written in natural language, and its structure, content focus, and composition vary, making it challenging to analyze. We present DataDoc Analyzer, a tool for analyzing the documentation of scientific datasets by extracting the details of the main dimensions required to analyze fairness and potential biases. We believe that our tool could help improve the quality of scientific datasets, aid dataset curators during the documentation process, and be a helpful tool for empirical studies on the overall quality of the datasets used in the ML field. The tool implements an ML pipeline that uses Large Language Models at its core for information retrieval. DataDoc is open-source, and a public demo is published online.},
keywords = {Datasets, explainability, Fairness, large language models, Machine learning, reverse engineering},
pubstate = {published},
tppubtype = {conference}
}
@article{Giner-Miguelez:SCICO:2024,
title = {DescribeML: A dataset description tool for machine learning},
author = {Joan Giner-Miguelez and Abel G\'{o}mez and Jordi Cabot},
doi = {10.1016/j.scico.2023.103030},
issn = {0167-6423},
year = {2023},
date = {2023-09-12},
urldate = {2024-01-01},
journal = {Science of Computer Programming},
volume = {231},
pages = {103030},
publisher = {Elsevier BV},
abstract = {Datasets are essential for training and evaluating machine learning models. However, they are also the root cause of many undesirable model behaviors, such as biased predictions. To address this issue, the machine learning community is proposing as a best practice the adoption of common guidelines for describing datasets. However, these guidelines are based on natural language descriptions of the dataset, hampering the automatic computation and analysis of such descriptions. To overcome this situation, we present DescribeML, a language engineering tool to precisely describe machine learning datasets in terms of their composition, provenance, and social concerns in a structured format. The tool is implemented as a Visual Studio Code extension.},
keywords = {Datasets, Domain-Specific Languages (DSLs), Fairness, Machine Learning (ML), Model-Driven Engineering (MDE), Software},
pubstate = {published},
tppubtype = {article}
}
@article{Giner-Miguelez:COLA:2023,
title = {A domain-specific language for describing machine learning datasets},
author = {Joan Giner-Miguelez and Abel G\'{o}mez and Jordi Cabot},
doi = {10.1016/j.cola.2023.101209},
issn = {2590-1184},
year = {2023},
date = {2023-08-01},
urldate = {2023-08-01},
journal = {Journal of Computer Languages},
volume = {76},
pages = {101209},
abstract = {Datasets are essential for training and evaluating machine learning (ML) models. However, they are also at the root of many undesirable model behaviors, such as biased predictions. To address this issue, the machine learning community is proposing a data-centric cultural shift, where data issues are given the attention they deserve and more standard practices for gathering and describing datasets are discussed and established. So far, these proposals are mostly high-level guidelines described in natural language and, as such, they are difficult to formalize and apply to particular datasets. In this sense, and inspired by these proposals, we define a new domain-specific language (DSL) to precisely describe machine learning datasets in terms of their structure, provenance, and social concerns. We believe this DSL will facilitate any ML initiative to leverage and benefit from this data-centric shift in ML (e.g., selecting the most appropriate dataset for a new project or better replicating other ML results). The DSL is implemented as a Visual Studio Code plugin, and it has been published under an open-source license.},
keywords = {Datasets, Domain-specific languages, Fairness, Machine learning, MDE},
pubstate = {published},
tppubtype = {article}
}
@inbook{Gomez:2023,
title = {Blockchain Technologies in the Design and Operation of Cyber-Physical Systems},
author = {Abel G\'{o}mez and Christophe Joubert and Jordi Cabot},
editor = {Birgit Vogel-Heuser and Manuel Wimmer},
doi = {10.1007/978-3-662-65004-2_9},
isbn = {978-3-662-65004-2},
year = {2023},
date = {2023-02-03},
urldate = {2023-02-03},
booktitle = {Digital Transformation: Core Technologies and Emerging Topics from a Computer Science Perspective},
pages = {223--243},
publisher = {Springer Berlin Heidelberg},
address = {Berlin, Heidelberg},
abstract = {A blockchain is an open, distributed ledger that can record transactions between two parties in an efficient, verifiable, and permanent way. Once recorded in a block, the transaction data cannot be altered retroactively. Moreover, smart contracts can be put in place to ensure that any new data added to the blockchain respects the terms of an agreement between the involved parties. As such, the blockchain becomes the single source of truth for all stakeholders in the system.},
keywords = {Blockchain, Industry 4.0},
pubstate = {published},
tppubtype = {inbook}
}
% 2022
@conference{Giner-Miguelez:MODELS:2022,
title = {DescribeML: A Tool for Describing Machine Learning Datasets},
author = {Joan Giner-Miguelez and Abel G\'{o}mez and Jordi Cabot},
doi = {10.1145/3550356.3559087},
isbn = {9781450394673},
year = {2022},
date = {2022-11-09},
urldate = {2022-01-01},
booktitle = {Proceedings of the 25th International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings},
pages = {22\textendash26},
publisher = {Association for Computing Machinery},
address = {Montreal, Quebec, Canada},
series = {MODELS '22},
abstract = {Datasets play a central role in the training and evaluation of machine learning (ML) models. But they are also the root cause of many undesired model behaviors, such as biased predictions. To overcome this situation, the ML community is proposing a data-centric cultural shift, where data issues are given the attention they deserve, for instance, by proposing standard descriptions for datasets. In this sense, and inspired by these proposals, we present a model-driven tool to precisely describe machine learning datasets in terms of their structure, data provenance, and social concerns. Our tool aims to facilitate any ML initiative to leverage and benefit from this data-centric shift in ML (e.g., selecting the most appropriate dataset for a new project or better replicating other ML results). The tool is implemented with the Langium workbench as a Visual Studio Code plugin and published as open-source software.},
keywords = {Datasets, DescribeML, Domain-Specific Languages (DSLs), Fairness, Model-Driven Engineering (MDE)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:JISBD:2022:TRANSACT,
title = {TRANSACT: Towards safe and secure distributed cyber-physical systems},
author = {Abel G\'{o}mez and Iv\'{a}n Alfonso and Javier Coronel and Mar\'{i}a Deseada Esclapez and Javier Ferrer},
editor = {A. Go\~{n}i Sarriguren},
url = {http://hdl.handle.net/11705/JISBD/2022/5735},
year = {2022},
date = {2022-09-01},
urldate = {2022-09-01},
booktitle = {Actas de las XXVI Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2022)},
publisher = {SISTEDES},
abstract = {Cyber-physical systems (CPS) are all around us, but due to today’s technical limitations and the possibility of human error, we cannot yet tap into their full potential. The EU-funded TRANSACT project aims to develop a universal distributed solution architecture for the transformation of safety-critical CPS from local, stand-alone systems into safe and secure distributed solutions. To that end, TRANSACT will research distributed reference architectures for safety-critical CPS that rely on edge and cloud computing, ensuring that performance, safety, security, and data privacy are guaranteed. Furthermore, by integrating AI services into distributed CPS, TRANSACT will enable the fast development of innovative value-based services and business models.},
keywords = {Critical Systems, Cyber-Physical Systems (CPS), Distributed Systems, Safety, Security},
pubstate = {published},
tppubtype = {conference}
}
@article{Gomez:SoSym:2021,
title = {Model-driven development of asynchronous message-driven architectures with AsyncAPI},
author = {Abel G\'{o}mez and Markel Iglesias-Urkia and Lorea Belategi and Xabier Mendialdua and Jordi Cabot},
doi = {10.1007/s10270-021-00945-3},
year = {2022},
date = {2022-08-01},
urldate = {2022-08-01},
journal = {Software and Systems Modeling},
volume = {21},
pages = {1583\textendash1611},
publisher = {Springer Science and Business Media LLC},
abstract = {In the Internet-of-Things (IoT) vision, everyday objects evolve into cyber-physical systems. The massive use and deployment of these systems has given rise to Industry 4.0, or the Industrial IoT (IIoT). Due to their scalability requirements, IIoT architectures are typically distributed and asynchronous. In this scenario, one of the most widely used paradigms is publish/subscribe, where messages are sent and received based on a set of categories or topics. However, these architectures face interoperability challenges. Consistency in message categories and structure is the key to avoid potential losses of information. Ensuring this consistency requires complex data processing logic both on the publisher and the subscriber sides. In this paper, we present our proposal relying on AsyncAPI to automate the design and implementation of these asynchronous architectures using model-driven techniques for the generation of (part of) message-driven infrastructures. Our proposal offers two different ways of designing the architectures: either graphically, by modeling and annotating the messages that are sent among the different IoT devices, or textually, by implementing an editor compliant with the AsyncAPI specification. We have evaluated our proposal by conducting a set of experiments with 25 subjects with different expertise and backgrounds. The experiments show that one-third of the subjects were able to design and implement a working architecture in less than an hour without previous knowledge of our proposal, and an additional one-third estimated that they would only need less than two hours in total.},
keywords = {AsyncAPI, Cyber-Physical Systems (CPS), Internet of Things (IoT), Publish-Subscribe},
pubstate = {published},
tppubtype = {article}
}
@conference{Giner-Miguelez:RCIS:2022,
title = {Enabling Content Management Systems as an Information Source in Model-Driven Projects},
author = {Joan Giner-Miguelez and Abel G\'{o}mez and Jordi Cabot},
editor = {Renata Guizzardi and Jolita Ralyt\'{e} and Xavier Franch},
doi = {10.1007/978-3-031-05760-1_30},
isbn = {978-3-031-05760-1},
year = {2022},
date = {2022-05-11},
urldate = {2022-05-11},
booktitle = {Research Challenges in Information Science. RCIS 2022.},
pages = {513--528},
publisher = {Springer International Publishing},
address = {Cham},
series = {Lecture Notes in Business Information Processing},
abstract = {Content Management Systems (CMSs) are the most popular tool when it comes to creating and publishing content across the web. Recently, CMSs have evolved, becoming headless. Content served by a headless CMS aims to be consumed by other applications and services through REST APIs rather than by human users through a web browser. This evolution has enabled CMSs to become a prominent source of content to be used in a variety of contexts beyond pure web navigation. As such, CMSs have become an important component of many information systems. Unfortunately, we still lack the tools to properly discover and manage the information stored in a CMS, often highly customized to the needs of a specific domain. Currently, this is mostly a time-consuming and error-prone manual process.},
keywords = {Datasets, Domain-Specific Languages (DSLs), Machine Learning (ML), MLOPs},
pubstate = {published},
tppubtype = {conference}
}
@article{Bernardi:AUSE:2022,
title = {DICE simulation: a tool for software performance assessment at the design stage},
author = {Simona Bernardi and Abel G\'{o}mez and Jos\'{e} Merseguer and Diego Perez-Palacin and Jos\'{e} I. Requeno},
url = {https://rdcu.be/cJ2Wt},
doi = {10.1007/s10515-022-00335-z},
issn = {1573-7535},
year = {2022},
date = {2022-03-28},
urldate = {2022-03-28},
journal = {Automated Software Engineering},
volume = {29},
pages = {36},
abstract = {In recent years, we have seen many performance fiascos in the deployment of new systems, such as the US health insurance website. This paper describes the functionality and architecture, as well as success stories, of a tool that helps address these types of issues. The tool allows assessing software designs regarding quality, in particular performance and reliability. Starting from a UML design with quality annotations, the tool applies model-transformation techniques to yield analyzable models. Such models are then leveraged by the tool to compute quality metrics. Finally, quality results, over the design, are presented to the engineer, in terms of the problem domain. Hence, the tool is an asset for the software engineer to evaluate system quality through software designs. While leveraging the Eclipse platform, the tool uses UML and the MARTE, DAM and DICE profiles for the system design and the quality modeling.},
keywords = {Data-Intensive Applications (DIA), DICE, Model-Driven Engineering (MDE), performance evaluation tools, software performance, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {article}
}
@article{Bruneliere:MICPRO:2022,
title = {AIDOaRt: AI-augmented Automation for DevOps, a model-based framework for continuous development in Cyber\textendashPhysical Systems},
author = {Hugo Bruneliere and Vittoriano Muttillo and Romina Eramo and Luca Berardinelli and Abel G\'{o}mez and Alessandra Bagnato and Andrey Sadovykh and Antonio Cicchetti},
doi = {10.1016/j.micpro.2022.104672},
issn = {0141-9331},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Microprocessors and Microsystems},
volume = {94},
pages = {104672},
abstract = {The advent of complex Cyber\textendashPhysical Systems (CPSs) creates the need for more efficient engineering processes. Recently, DevOps promoted the idea of considering a closer continuous integration between system development (including its design) and operational deployment. Despite their still limited use, Artificial Intelligence (AI) techniques are suitable candidates for improving such system engineering activities (cf. AIOps). In this context, AIDOaRt is a large European collaborative project that aims at providing AI-augmented automation capabilities to better support the modeling, coding, testing, monitoring, and continuous development of CPSs. The project proposes to combine Model Driven Engineering principles and techniques with AI-enhanced methods and tools for engineering more trustable CPSs. The resulting framework will (1) enable the dynamic observation and analysis of system data collected at both runtime and design time and (2) provide dedicated AI-augmented solutions that will then be validated in concrete industrial cases. This paper describes the main research objectives and underlying paradigms of the AIDOaRt project. It also introduces the conceptual architecture and proposed approach of the AIDOaRt overall solution. Finally, it reports on the actual project practices and discusses the current results and future plans.},
keywords = {AIOps, Artificial Intelligence, Continuous development, Cyber\textendashPhysical Systems, DevOps, Model Driven Engineering, Software engineering, System engineering},
pubstate = {published},
tppubtype = {article}
}
@conference{Giner-Miguelez:JISBD:2022,
title = {Un lenguaje para definir datasets para machine learning},
author = {Joan Giner-Miguelez and Abel G\'{o}mez and Jordi Cabot},
editor = {A. Go\~{n}i Sarriguren},
url = {http://hdl.handle.net/11705/JISBD/2022/4368},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Actas de las XXVI Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2022)},
publisher = {SISTEDES},
abstract = {Recientes estudios han reportado efectos indeseados y nocivos en modelos de machine learning (ML), en gran parte causados por problemas o limitaciones en los datasets usados para entrenarlos. Esta situaci\'{o}n ha despertado el inter\'{e}s dentro de la comunidad de ML para mejorar los procesos de creaci\'{o}n y compartici\'{o}n de datasets. Sin embargo, hasta la fecha, las propuestas para estandarizar la descripci\'{o}n y formalizaci\'{o}n de los mismos se basan en gu\'{i}as generales en texto natural y que, como tales, presentan limitaciones (precisi\'{o}n, ambig\"{u}edad, etc.) y son dif\'{i}ciles de aplicar de una forma (semi)automatizada. En este trabajo proponemos un lenguaje espec\'{i}fico de dominio para describir datasets basado en las propuestas mencionadas. Este lenguaje contribuye a estandarizar los procesos de descripci\'{o}n de los datasets, y pretende ser la base para aplicaciones de formalizaci\'{o}n, b\'{u}squeda y comparaci\'{o}n de estos. Finalmente, presentamos la implementaci\'{o}n de este lenguaje en forma de plug-in para Visual Studio Code.},
keywords = {Datasets, Domain-Specific Languages (DSLs), Machine Learning (ML), MLOPs},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:JISBD:2022,
title = {Hacia la (semi)automatizaci\'{o}n en la Industria 4.0 mediante UML y AsyncAPI},
author = {Abel G\'{o}mez and Jordi Cabot and Xavier Pi},
editor = {A. Go\~{n}i Sarriguren},
url = {http://hdl.handle.net/11705/JISBD/2022/572},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Actas de las XXVI Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2022)},
publisher = {SISTEDES},
abstract = {El uso y despliegue de los llamados sistemas ciberf\'{i}sicos ha calado profundamente en la industria, dando lugar a la Industria 4.0. T\'{i}picamente, las arquitecturas de la Industria 4.0 muestran un acoplamiento muy bajo entre sus componentes, siendo distribuidas, as\'{i}ncronas, y gui\'{a}ndose la comunicaci\'{o}n por eventos. Estas caracter\'{i}sticas, diferentes de las de arquitecturas que hasta ahora eran el foco de las t\'{e}cnicas de modelado, conllevan la necesidad de dotar a la Industria 4.0 de nuevos lenguajes y herramientas que permitan un desarrollo m\'{a}s eficiente y preciso. En este art\'{i}culo, proponemos el uso de UML para el modelado de este tipo de arquitecturas y una serie de transformaciones que permiten automatizar su procesamiento. M\'{a}s concretamente, presentamos un perfil UML para la Industria 4.0, as\'{i} como una transformaci\'{o}n de modelos capaz de generar una descripci\'{o}n abstracta \textemdash{}empleando la especificaci\'{o}n AsyncAPI\textemdash{} de las interfaces de programaci\'{o}n que subyacen a la arquitectura. A partir de dicha descripci\'{o}n abstracta en AsyncAPI, generamos el c\'{o}digo que da soporte a dichas interfaces de forma autom\'{a}tica.},
keywords = {AsyncAPI, Industry, Model Transformation (MT), Publish-Subscribe, UML Profiles, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {conference}
}
2021
@conference{Gomez:JISBD:2021,
title = {Scalable Modeling Technologies in the Wild: An Experience Report on Wind Turbines Control Applications Development (Abstract)},
author = {Abel G\'{o}mez and Xabier Mendialdua and Konstantinos Barmpis and G\'{a}bor Bergmann and Jordi Cabot and Xabier de Carlos and Csaba Debreceni and Antonio Garmendia and Dimitrios S. Kolovos and Juan de Lara},
editor = {Silvia Abrah\~{a}o},
url = {http://hdl.handle.net/11705/JISBD/2021/075},
year = {2021},
date = {2021-09-22},
urldate = {2021-09-22},
booktitle = {Actas de las XXV Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2021), M\'{a}laga, septiembre de 2021.},
publisher = {Sistedes},
organization = {Sistedes},
abstract = {Scalability in modeling has many facets, including the ability to build larger models and domain-specific languages (DSLs) efficiently. With the aim of tackling some of the most prominent scalability challenges in model-based engineering (MBE), the MONDO EU project developed the theoretical foundations and open-source implementation of a platform for scalable modeling and model management. The platform includes facilities for building large graphical DSLs, for splitting large models into sets of smaller interrelated fragments, to index large collections of models to speed up their querying, and to enable the collaborative construction and refinement of complex models, among other features. This paper reports on the tools provided by MONDO that Ikerlan, a medium-sized technology center which in the last decade has embraced the MBE paradigm, adopted in order to improve their processes. This experience produced as a result a set of model editors and related technologies that fostered collaboration and scalability in the development of wind turbine control applications. In order to evaluate the benefits obtained, an on-site evaluation of the tools was performed. This evaluation shows that scalable MBE technologies give new growth opportunities to small- and medium-sized organizations.},
keywords = {Experience Report, Model-Driven Engineering (MDE), MONDO, Wind Turbine (WT)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Eramo:DSD:2021,
title = {AIDOaRt: AI-augmented Automation for DevOps, a Model-based Framework for Continuous Development in Cyber-Physical Systems},
author = {Romina Eramo and Vittoriano Muttillo and Luca Berardinelli and Hugo Bruneliere and Abel G\'{o}mez and Alessandra Bagnato and Andrey Sadovykh and Antonio Cicchetti},
doi = {10.1109/DSD53832.2021.00053},
isbn = {978-1-6654-2703-6},
year = {2021},
date = {2021-09-01},
urldate = {2021-09-01},
booktitle = {2021 24th Euromicro Conference on Digital System Design (DSD)},
pages = {303--310},
publisher = {IEEE},
abstract = {With the emergence of Cyber-Physical Systems (CPS), the increasing complexity in development and operation demands an efficient engineering process. In recent years, DevOps has promoted closer continuous integration of system development and its operational deployment perspectives. In this context, the use of Artificial Intelligence (AI) is beneficial to improve system design and integration activities; however, it is still limited despite its high potential. AIDOaRT is a 3-year H2020-ECSEL European project involving 32 organizations, grouped in clusters from 7 different countries, focusing on AI-augmented automation supporting the modelling, coding, testing, monitoring and continuous development of Cyber-Physical Systems (CPS). The project proposes to apply Model-Driven Engineering (MDE) principles and techniques to provide a framework offering proper AI-enhanced methods and related tooling for building trustable CPSs. The framework is intended to work within DevOps practices combining software development and information technology (IT) operations. In this regard, the project aims at enabling AI for IT operations (AIOps) to automate the decision-making process and complete system development tasks. This paper presents an overview of the project, discussing its context, objectives and the proposed approach.},
keywords = {AIOps, Artificial Intelligence (AI), Continuous System Engineering, Cyber-Physical Systems (CPS), DevOps},
pubstate = {published},
tppubtype = {conference}
}
@conference{Cabot:SERIP:2021,
title = {All Researchers Should Become Entrepreneurs},
author = {Jordi Cabot and Hugo Bruneliere and Gwendal Daniel and Abel G\'{o}mez},
doi = {10.1109/SER-IP52554.2021.00019},
isbn = {978-1-6654-4476-7},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
booktitle = {2021 IEEE/ACM 8th International Workshop on Software Engineering Research and Industrial Practice (SER IP)},
pages = {73--74},
abstract = {We often complain about the challenges associated with a fruitful research-industry collaboration. The coronavirus pandemic has just aggravated them as, clearly, companies face difficult times and have mostly paused their R\&I activities. In this context, we propose that researchers become entrepreneurs and play both roles at the same time. Right now, this is much more the exception than the rule in the academic system. However, we argue this is the quickest way to get real feedback on the quality and impact of our research.},
keywords = {Entrepreneurship, Industry, Research Transfer},
pubstate = {published},
tppubtype = {conference}
}
2020
@conference{Gomez:MODELS:2020,
title = {A model-based approach for developing event-driven architectures with AsyncAPI},
author = {Abel G\'{o}mez and Markel Iglesias-Urkia and Aitor Urbieta and Jordi Cabot},
url = {https://abel.gomez.llana.me/wp-content/uploads/2020/10/gomez-models-2020.pdf},
doi = {10.1145/3365438.3410948},
isbn = {9781450370196},
year = {2020},
date = {2020-10-01},
booktitle = {Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems},
pages = {121--131},
publisher = {Association for Computing Machinery},
address = {Virtual Event, Canada},
abstract = {In this Internet of Things (IoT) era, our everyday objects have evolved into the so-called cyber-physical systems (CPS). The use and deployment of CPS has especially penetrated the industry, giving rise to the Industry 4.0 or Industrial IoT (IIoT). Typically, architectures in IIoT environments are distributed and asynchronous, communication being guided by events such as the publication of (and corresponding subscription to) messages. While these architectures have some clear advantages (such as scalability and flexibility), they also raise interoperability challenges among the agents in the network. Indeed, the knowledge about the message content and its categorization (topics) gets diluted, leading to consistency problems, potential losses of information and complex processing requirements on the subscriber side to try to understand the received messages. In this paper, we present our proposal relying on AsyncAPI to automate the design and implementation of these architectures using model-based techniques for the generation of (part of) event-driven infrastructures. We have implemented our proposal as an open-source tool freely available online.},
keywords = {AsyncAPI, Asynchronous Architechtures, Model-Driven Engineering (MDE)},
pubstate = {published},
tppubtype = {conference}
}
@article{PAUC:Iglesias-Urkia:2020,
title = {Automatic generation of Web of Things servients using Thing Descriptions},
author = {Markel Iglesias-Urkia and Abel G\'{o}mez and Diego Casado-Mansilla and Aitor Urbieta},
url = {https://rdcu.be/b5GHq},
doi = {10.1007/s00779-020-01413-3},
issn = {1617-4917},
year = {2020},
date = {2020-07-18},
journal = {Personal and Ubiquitous Computing},
abstract = {Similarly to the standardization effort initiated for the World Wide Web in the 1990s, the World Wide Web Consortium is currently working on the Web of Things (WoT) specification. This initiative aims to tackle current fragmentation in the so-called Internet of Things by using existing Web standards. The ultimate goal is to cope with the increasing number of devices that are being connected to the Internet and to enable interoperability among them. On the other hand, Model-Driven Engineering (MDE) approaches make use of models to raise the abstraction level with the objective of accelerating the software development process, enabling design and code reuse, and increasing software quality. This work proposes to apply MDE techniques to enable the efficient development of WoT servients. Based on the WoT Thing Description specification, this work proposes both a textual-based concrete syntax and a model-based abstract syntax\textemdash{}both fully compliant with the WoT specification\textemdash{}that enable the generation of WoT servients in C++ with CoAP communication capabilities. This proposal is implemented by a tool that covers the whole development process, which is publicly available under an open source license.},
keywords = {Domain-Specific Languages (DSLs), Internet of Things (IoT), Model-Driven Engineering (MDE), Web of Things (WoT)},
pubstate = {published},
tppubtype = {article}
}
@article{Gomez:SoSym:2020,
title = {Scalable modeling technologies in the wild: an experience report on wind turbines control applications development},
author = {Abel G\'{o}mez and Xabier Mendialdua and Konstantinos Barmpis and G\'{a}bor Bergmann and Jordi Cabot and Xabier de Carlos and Csaba Debreceni and Antonio Garmendia and Dimitrios S. Kolovos and Juan de Lara},
url = {https://rdcu.be/b0E0T},
doi = {10.1007/s10270-020-00776-8},
issn = {1619-1374},
year = {2020},
date = {2020-01-22},
journal = {Software and Systems Modeling},
volume = {19},
number = {5},
pages = {1229\textendash1261},
abstract = {Scalability in modeling has many facets, including the ability to build larger models and domain-specific languages (DSLs) efficiently. With the aim of tackling some of the most prominent scalability challenges in model-based engineering (MBE), the MONDO EU project developed the theoretical foundations and open-source implementation of a platform for scalable modeling and model management. The platform includes facilities for building large graphical DSLs, for splitting large models into sets of smaller interrelated fragments, to index large collections of models to speed up their querying, and to enable the collaborative construction and refinement of complex models, among other features. This paper reports on the tools provided by MONDO that Ikerlan, a medium-sized technology center which in the last decade has embraced the MBE paradigm, adopted in order to improve their processes. This experience produced as a result a set of model editors and related technologies that fostered collaboration and scalability in the development of wind turbine control applications. In order to evaluate the benefits obtained, an on-site evaluation of the tools was performed. This evaluation shows that scalable MBE technologies give new growth opportunities to small- and medium-sized organizations.},
keywords = {Experience Report, Model-Driven Engineering (MDE), MONDO, Wind Turbine (WT)},
pubstate = {published},
tppubtype = {article}
}
@article{Mazak:JOT:2020,
title = {Temporal Models on Time Series Databases},
author = {Alexandra Mazak and Sabine Wolny and Abel G\'{o}mez and Jordi Cabot and Manuel Wimmer and Gerti Kappel},
editor = {Lars Hamann and Richard Paige and Alfonso Pierantonio and Bernhard Rumpe and Antonio Vallecillo},
doi = {10.5381/jot.2020.19.3.a14},
issn = {1660-1769},
year = {2020},
date = {2020-01-01},
journal = {Journal of Object Technology},
volume = {19},
number = {3},
pages = {3:1-15},
note = {Special Issue dedicated to Martin Gogolla on his 65th Birthday},
keywords = {Model-Driven Engineering (MDE), Temporal Models},
pubstate = {published},
tppubtype = {article}
}
2019
@conference{Iglesias-Urkia:IoT:2019,
title = {Enabling easy Web of Things compatible device generation using a Model-Driven Engineering approach},
author = {Markel Iglesias-Urkia and Abel G\'{o}mez and Diego Casado-Mansilla and Aitor Urbieta},
doi = {10.1145/3365871.3365898},
isbn = {978-1-4503-7207-7},
year = {2019},
date = {2019-10-22},
booktitle = {Proceedings of the 9th International Conference on the Internet of Things},
pages = {25:1--25:8},
publisher = {ACM},
address = {New York},
series = {IoT 2019},
abstract = {One of the main ongoing standardization efforts of the Internet of Things (IoT) at the application layer is the Web of Things (WoT), which aims to enable interoperability using already existing standards. However, keeping the design and implementation of IoT applications up with the exponentially increasing number of interconnected devices is costly in workforce resources. Model-Driven Engineering (MDE) approaches increase the level of abstraction using models and allow design and code to be reused, lowering the resources needed to implement solutions. This is why in this work we implement an MDE approach based on the WoT, allowing easy WoT-based device generation. In addition, automated code generation is applied to reduce manual tasks even further. Using the Eclipse Modelling Framework (EMF) and its associated plugins, we provide a way of describing models graphically and generating the code automatically, reducing development and testing time.},
keywords = {Code Generation, Domain-Specific Languages (DSLs), Internet of Things (IoT), Model-Driven Engineering (MDE), Web of Things (WoT)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Sadovykh:TOOLS:2019,
title = {MegaM@Rt2 Project: Mega-Modelling at Runtime - Intermediate Results and Research Challenges},
author = {Andrey Sadovykh and Dragos Truscan and Wasif Afzal and Hugo Bruneliere and Adnan Ashraf and Abel G\'{o}mez and Alexandra Espinosa and Gunnar Widforss and Pierluigi Pierini and Elizabeta Fourneret and Alessandra Bagnato},
editor = {Manuel Mazzara and Jean-Michel Bruel and Bertrand Meyer and Alexander Petrenko},
doi = {10.1007/978-3-030-29852-4_33},
isbn = {978-3-030-29852-4},
year = {2019},
date = {2019-10-08},
booktitle = {Software Technology: Methods and Tools. TOOLS 2019},
volume = {11771},
pages = {393--405},
publisher = {Springer International Publishing},
address = {Cham},
series = {Lecture Notes in Computer Science},
abstract = {MegaM@Rt2 Project is a major European effort towards the model-driven engineering of complex Cyber-Physical systems combined with runtime analysis. Both areas are dealt with within the same methodology to enjoy the mutual benefits through sharing and tracking various engineering artifacts. The project involves 27 partners that contribute with diverse research and industrial practices addressing real-life case study challenges stemming from 9 application domains. These partners jointly progress towards a common framework to support those application domains with model-driven engineering, verification, and runtime analysis methods. In this paper, we present the motivation for the project, the current approach and the intermediate results in terms of tools, research work and practical evaluation on use cases from the project. We also discuss outstanding challenges and proposed approaches to address them.},
keywords = {Cyber-Physical Systems (CPS), MegaM@Rt2, Megamodelling, Model-Driven Engineering (MDE), Runtime, Traceability},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:JISBD:2019b,
title = {Una Aproximaci\'{o}n Basada en Modelos para la Definici\'{o}n de Arquitecturas As\'{i}ncronas},
author = {Abel G\'{o}mez and Iker Fernandez de Larrea and Markel Iglesias-Urkia and Beatriz Lopez-Davalillo and Aitor Urbieta and Jordi Cabot},
editor = {Jennifer P\'{e}rez},
url = {http://hdl.handle.net/11705/JISBD/2019/035},
year = {2019},
date = {2019-09-02},
booktitle = {Actas de las XXIV Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2019)},
publisher = {Sistedes},
abstract = {En la nueva era del Internet de las cosas (IoT), nuestros objetos cotidianos se han convertido en los llamados sistemas ciberf\'{i}sicos (CPS). El uso y despliegue de los CPS ha calado especialmente en la industria, dando lugar a la llamada Industria 4.0 o IoT Industrial (IIoT). T\'{i}picamente, las arquitecturas IIoT son distribuidas y as\'{i}ncronas, estando la comunicaci\'{o}n guiada por eventos como por ejemplo la publicaci\'{o}n (y correspondiente suscripci\'{o}n) a mensajes. No obstante, las mejoras en escalabilidad y tolerancia al cambio de estas arquitecturas tienen sus desventajas, y es f\'{a}cil que el conocimiento sobre los mensajes y su categorizaci\'{o}n (topics) se diluya entre los elementos de la arquitectura, dando lugar a problemas de interoperabilidad entre los agentes implicados. En este art\'{i}culo, presentamos nuestra propuesta para automatizar el dise\~{n}o e implementaci\'{o}n de estas arquitecturas mediante t\'{e}cnicas basadas en modelos. Para ello nos apoyamos en AsyncAPI, una propuesta para la especificaci\'{o}n de API dirigidas por mensajes.},
keywords = {AsyncAPI, Asynchronous Architectures, Cyber-Physical Systems (CPS), Publish-Subscribe},
pubstate = {published},
tppubtype = {conference}
}
@conference{Bernardi:JISBD:2019,
title = {A Systematic Approach for Performance Assessment Using Process Mining: An Industrial Experience Report (Abstract)},
author = {Simona Bernardi and Juan L. Dom\'{i}nguez and Abel G\'{o}mez and Christophe Joubert and Jos\'{e} Merseguer and Diego Perez-Palacin and Jos\'{e} I. Requeno and Alberto Romeu},
editor = {Jennifer P\'{e}rez},
url = {http://hdl.handle.net/11705/JISBD/2019/019},
year = {2019},
date = {2019-09-02},
booktitle = {Actas de las XXIV Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2019)},
address = {C\'{a}ceres, Spain},
publisher = {Sistedes},
organization = {Sistedes},
abstract = {Software performance engineering is a mature field that offers methods to assess system performance. Process mining is a promising research field applied to gain insight on system processes. The interplay of these two fields opens promising applications in the industry. In this work, we report our experience applying a methodology, based on process mining techniques, for the performance assessment of a commercial data-intensive software application. The methodology has successfully assessed the scalability of future versions of this system. Moreover, it has identified bottleneck components and replication needs for fulfilling business rules. The system, an integrated port operations management system, has been developed by Prodevelop, a medium-sized software enterprise with high expertise in geospatial technologies. The performance assessment has been carried out by a team composed of practitioners and researchers. Finally, the paper offers a deep discussion on the lessons learned during the experience, which will be useful for practitioners to adopt the methodology and for researchers to find new routes.},
keywords = {Complex Event Processing (CEP), Petri net (PN), Process Mining, Software Performance, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {conference}
}
@article{Sadovykh:MICPRO:2019,
title = {On a Tool-Supported Model-Based Approach for Building Architectures and Roadmaps: The MegaM@Rt2 Project Experience},
author = {Andrey Sadovykh and Wasif Afzal and Dragos Truscan and Pierluigi Pierini and Hugo Bruneliere and Alessandra Bagnato and Abel G\'{o}mez and Jordi Cabot and Orlando Avila-Garc\'{i}a},
doi = {10.1016/j.micpro.2019.102848},
issn = {0141-9331},
year = {2019},
date = {2019-07-22},
journal = {Microprocessors and Microsystems},
volume = {71},
pages = {102848},
abstract = {MegaM@Rt2 is a large European project dedicated to the provisioning of a model-based methodology and supporting tooling for system engineering at a wide scale. It notably targets the continuous development and runtime validation of such complex systems by developing a framework addressing a large set of engineering processes and application domains. This collaborative project involves 27 partners from 6 different countries, 9 industrial case studies as well as over 30 different software tools from project partners (and others). In the context of the MegaM@Rt2 project, we elaborated on a pragmatic model-driven approach to specify the case study requirements, design the high-level architecture of a framework, perform the gap analysis between the industrial needs and current state-of-the-art, and plan a first framework development roadmap accordingly. The present paper describes the generic tool-supported approach that came out as a result. It also details its concrete application in the MegaM@Rt2 project. In particular, we discuss the collaborative modeling process, the requirement definition tooling, the approach for components modeling, as well as the traceability and document generation. In addition, we show how we used the proposed solution to specify the MegaM@Rt2 framework’s conceptual tool components centered around three complementary tool sets: the MegaM@Rt2 System Engineering Tool Set, the MegaM@Rt2 Runtime Analysis Tool Set and the MegaM@Rt2 Model \& Traceability Management Tool Set. The paper ends with a discussion on the practical lessons we have learned from this work so far.},
keywords = {Document Generation, MegaM@Rt2, Model-Driven Engineering (MDE), Modelio, Requirements Engineering (RE), SysML},
pubstate = {published},
tppubtype = {article}
}
@conference{Gomez:INFORSID:2019,
title = {TemporalEMF: A Temporal Metamodeling Framework - Extended Abstract},
author = {Abel G\'{o}mez and Jordi Cabot and Manuel Wimmer},
url = {http://inforsid.fr/actes/2019/INFORSID_2019_p305-307.pdf},
year = {2019},
date = {2019-06-11},
booktitle = {Actes du XXXVII\`{e}me Congr\`{e}s INFORSID, Paris, France, June 11-14, 2019},
pages = {305--307},
publisher = {INFORSID},
address = {Paris, France},
organization = {INFORSID},
keywords = {Extended Abstract, Temporal Models, TemporalEMF},
pubstate = {published},
tppubtype = {conference}
}
@conference{Daniel:RCIS:2019,
title = {UMLto[No]SQL: Mapping Conceptual Schemas to Heterogeneous Datastores},
author = {Gwendal Daniel and Abel G\'{o}mez and Jordi Cabot},
editor = {Manuel Kolp and Jean Vanderdonckt and Monique Snoeck and Yves Wautelet},
doi = {10.1109/RCIS.2019.8877094},
isbn = {978-1-7281-4844-1},
year = {2019},
date = {2019-05-29},
booktitle = {2019 13th International Conference on Research Challenges in Information Science (RCIS)},
pages = {215--227},
publisher = {IEEE},
abstract = {The growing need to store and manipulate large volumes of data has led to the blossoming of various families of data storage solutions. Software modelers can benefit from this growing diversity to improve critical parts of their applications, using a combination of different databases to store the data based on access, availability, and performance requirements. However, while the mapping of conceptual schemas to relational databases is a well-studied field of research, there are few works that target the role of conceptual modeling in multiple and diverse data storage settings. This is particularly true when dealing with the mapping of constraints in the conceptual schema. In this paper we present the UMLto[No]SQL approach that maps conceptual schemas expressed in UML/OCL into a set of logical schemas (either relational or NoSQL ones) to be used to store the application data according to the data partition envisaged by the designer. Our mapping also covers the database queries required to implement and check the model's constraints. UMLto[No]SQL takes care of integrating the different data storages, and provides a modeling layer that enables a transparent manipulation of the data using conceptual level information.},
keywords = {Model Partitioning, Model Persistence, Model-Driven Engineering (MDE), NoSQL, RDBMS, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Sadovykh:SEDA:2018,
title = {A Tool-Supported Approach for Building the Architecture and Roadmap in MegaM@Rt2 Project},
author = {Andrey Sadovykh and Alessandra Bagnato and Dragos Truscan and Pierluigi Pierini and Hugo Bruneliere and Abel G\'{o}mez and Jordi Cabot and Orlando Avila-Garc\'{i}a and Wasif Afzal},
editor = {Paolo Ciancarini and Manuel Mazzara and Angelo Messina and Alberto Sillitti and Giancarlo Succi},
doi = {10.1007/978-3-030-14687-0_24},
isbn = {978-3-030-14687-0},
year = {2019},
date = {2019-03-19},
booktitle = {Proceedings of 6th International Conference in Software Engineering for Defence Applications},
volume = {925},
pages = {265--274},
publisher = {Springer International Publishing},
address = {Cham},
series = {Advances in Intelligent Systems and Computing},
abstract = {MegaM@Rt2 is a large European project dedicated to the provisioning of a model-based methodology and supporting tooling for system engineering at a wide scale. It notably targets the continuous development and runtime validation of such complex systems by developing the MegaM@Rt2 framework to address a large set of engineering processes and application domains. This collaborative project involves 27 partners from 6 different countries, 9 industrial case studies as well as over 30 different tools from project partners (and others). In the context of the project, we opted for a pragmatic model-driven approach in order to specify the case study requirements, design the high-level architecture of the MegaM@Rt2 framework, perform the gap analysis between the industrial needs and current state-of-the-art, and to plan a first framework development roadmap accordingly. The present paper concentrates on the concrete examples of the tooling approach for building the framework architecture. In particular, we discuss the collaborative modeling, requirements definition tooling, approach for components modeling, traceability and document generation. The paper also provides a brief discussion of the practical lessons we have learned from it so far.},
keywords = {Document Generation, MegaM@Rt2, Model-Driven Engineering (MDE), Modelio, Requirements Engineering (RE)},
pubstate = {published},
tppubtype = {conference}
}
@article{Gomez2019b,
title = {Profiling the publish/subscribe paradigm for automated analysis using colored Petri nets},
author = {Abel G\'{o}mez and Ricardo J. Rodr\'{i}guez and Mar\'{i}a-Emilia Cambronero and Valent\'{i}n Valero},
doi = {10.1007/s10270-019-00716-1},
issn = {1619-1374},
year = {2019},
date = {2019-01-22},
journal = {Software \& Systems Modeling},
volume = {18},
number = {5},
pages = {2973-3003},
abstract = {UML sequence diagrams are used to graphically describe the message interactions between the objects participating in a certain scenario. Combined fragments extend the basic functionality of UML sequence diagrams with control structures, such as sequences, alternatives, iterations, or parallels. In this paper, we present a UML profile to annotate sequence diagrams with combined fragments to model timed Web services with distributed resources under the publish/subscribe paradigm. This profile is exploited to automatically obtain a representation of the system based on Colored Petri nets using a novel model-to-model (M2M) transformation. This M2M transformation has been specified using QVT and has been integrated in a new add-on extending a state-of-the-art UML modeling tool. Generated Petri nets can be immediately used in well-known Petri net software, such as CPN Tools, to analyze the system behavior. Hence, our model-to-model transformation tool allows for simulating the system and finding design errors in early stages of system development, which enables us to fix them at these early phases and thus potentially save development costs.},
keywords = {CPN Tools, Model Transformation (MT), Model-Driven Engineering (MDE), Petri net (PN), Publish-Subscribe, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {article}
}
2018
@conference{Bruneliere:MDEDeRun:2018,
title = {Model-Driven Engineering for Design-Runtime Interaction in Complex Systems: Scientific Challenges and Roadmap},
author = {Hugo Bruneliere and Romina Eramo and Abel G\'{o}mez and Valentin Besnard and Jean Michel Bruel and Martin Gogolla and Andreas K\"{a}stner and Adrian Rutle},
editor = {Manuel Mazzara and Iulian Ober and Gwen Sala\"{u}n},
doi = {10.1007/978-3-030-04771-9_40},
isbn = {978-3-030-04771-9},
year = {2018},
date = {2018-12-06},
booktitle = {Software Technologies: Applications and Foundations},
volume = {11176},
pages = {536--543},
publisher = {Springer International Publishing},
address = {Cham},
abstract = {This paper reports on the first Workshop on Model-Driven Engineering for Design-Runtime Interaction in Complex Systems (also called MDE@DeRun 2018) that took place during the STAF 2018 week. It explains the main objectives, content and results of the event. Based on these, the paper also proposes initial directions to explore for further research in the workshop area.},
keywords = {Design Time, Model-Driven Engineering (MDE), Runtime, Traceability},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:RE:2018,
title = {TemporalEMF: A Temporal Metamodeling Framework},
author = {Abel G\'{o}mez and Jordi Cabot and Manuel Wimmer},
editor = {Juan C. Trujillo and Karen C. Davis and Xiaoyong Du and Zhanhuai Li and Tok Wang Ling and Guoliang Li and Li Lee Mong},
doi = {10.1007/978-3-030-00847-5_26},
isbn = {978-3-030-00847-5},
year = {2018},
date = {2018-09-26},
booktitle = {Conceptual Modeling},
volume = {11157},
pages = {365--381},
publisher = {Springer International Publishing},
address = {Cham},
abstract = {Existing modeling tools provide direct access to the most current version of a model but very limited support to inspect the model state in the past. This typically requires looking for a model version (usually stored in some kind of external versioning system like Git) roughly corresponding to the desired period and using it to manually retrieve the required data. This approximate answer is not enough in scenarios that require a more precise and immediate response to temporal queries like complex collaborative co-engineering processes or runtime models.
In this paper, we reuse well-known concepts from temporal languages to propose a temporal metamodeling framework, called TemporalEMF, that adds native temporal support for models. In our framework, models are automatically treated as temporal models and can be subjected to temporal queries to retrieve the model contents at different points in time. We have built our framework on top of the Eclipse Modeling Framework (EMF). Behind the scenes, the history of a model is transparently stored in a NoSQL database. We evaluate the resulting TemporalEMF framework with an Industry 4.0 case study about a production system simulator. The results show good scalability for storing and accessing temporal models without requiring changes to the syntax and semantics of the simulator.},
keywords = {Model Persistence, Model-Driven Engineering (MDE), Temporal Models},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:SAM:2018,
title = {Enabling Performance Modeling for the Masses: Initial Experiences},
author = {Abel G\'{o}mez and Connie U. Smith and Amy Spellmann and Jordi Cabot},
editor = {Ferhat Khendek and Reinhard Gotzhein},
doi = {10.1007/978-3-030-01042-3_7},
isbn = {978-3-030-01042-3},
year = {2018},
date = {2018-09-26},
booktitle = {System Analysis and Modeling. Languages, Methods, and Tools for Systems Engineering},
volume = {11150},
pages = {105--126},
publisher = {Springer International Publishing},
address = {Cham},
series = {Lecture Notes in Computer Science},
abstract = {Performance problems such as sluggish response time or low throughput are especially annoying, frustrating and noticeable to users. Fixing performance problems after they occur results in unplanned expenses and time. Our vision is an MDE-intensive software development paradigm for complex systems in which software designers can evaluate performance early in development, when the analysis can have the greatest impact. We seek to empower designers to do the analysis themselves by automating the creation of performance models out of standard design models. Such performance models can be automatically solved, providing results meaningful to them. In our vision, this automation can be enabled by using model-to-model transformations: First, designers create UML design models embellished with the Modeling and Analysis of Real Time and Embedded systems (MARTE) design specifications; and secondly, such models are transformed to automatically solvable performance models by using QVT. This paper reports on our first experiences when implementing these two initial activities.},
keywords = {Model-Driven Engineering (MDE), Query/View/Transformation (QVT), S-PMIF+, software performance},
pubstate = {published},
tppubtype = {conference}
}
Conference: Abel Gómez, Orlando Avila-García, Jordi Cabot, José Ramón Juárez, Aitor Urbieta, Eugenio Villar. "The MegaM@Rt2 ECSEL Project: MegaModelling at Runtime — Scalable Model-based Framework for Continuous Development and Runtime Validation of Complex Systems". In: Actas de las XXIII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2018), Sistedes, 2018.
@conference{Gomez:JISBD:2018,
title = {The MegaM@Rt2 ECSEL Project: MegaModelling at Runtime \textemdash Scalable Model-based Framework for Continuous Development and Runtime Validation of Complex Systems},
author = {Abel G\'{o}mez and Orlando Avila-Garc\'{i}a and Jordi Cabot and Jos\'{e} Ram\'{o}n Ju\'{a}rez and Aitor Urbieta and Eugenio Villar},
editor = {Fernando S\'{a}nchez-Figueroa},
url = {http://hdl.handle.net/11705/JISBD/2018/023},
year = {2018},
date = {2018-09-17},
booktitle = {Actas de las XXIII Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2018)},
publisher = {Sistedes},
abstract = {A major challenge for the European electronic components and systems (ECS) industry is to increase productivity and reduce costs while ensuring safety and quality. Model-Driven Engineering (MDE) principles have already shown valuable capabilities for the development of ECSs but still need to scale to support real-world scenarios implied by the full deployment and use of complex electronic systems, such as Cyber-Physical Systems, and real-time systems. Moreover, maintaining efficient traceability, integration and communication between fundamental stages of the development lifecycle (i.e., design time and runtime) is another challenge to the scalability of MDE tools and techniques. This paper presents “MegaModelling at runtime \textemdash Scalable model-based framework for continuous development and runtime validation of complex systems” (MegaM@Rt2), an ECSEL\textendashJU project whose main goal is to address the above-mentioned challenges. Driven by both large and small industrial enterprises, with the support of research partners and technology providers, MegaM@Rt2 aims to deliver a framework of tools and methods for: (i) system engineering/design and continuous development, (ii) related runtime analysis, and (iii) global model and traceability management.},
keywords = {Design Time, MegaM@Rt2, Model-Driven Engineering (MDE), Runtime},
pubstate = {published},
tppubtype = {conference}
}
Journal Article: Amine Benelallam, Abel Gómez, Massimo Tisi, Jordi Cabot. "Distributing relational model transformation on MapReduce". In: Journal of Systems and Software, vol. 142, pp. 1–20, 2018, ISSN: 0164-1212.
@article{Benelallam:JSS:2018,
title = {Distributing relational model transformation on MapReduce},
author = {Amine Benelallam and Abel G\'{o}mez and Massimo Tisi and Jordi Cabot },
doi = {10.1016/j.jss.2018.04.014},
issn = {0164-1212},
year = {2018},
date = {2018-04-11},
journal = {Journal of Systems and Software},
volume = {142},
pages = {1--20},
abstract = {MDE has been successfully adopted in the production of software for several domains. As the models that need to be handled in MDE grow in scale, it becomes necessary to design scalable algorithms for model transformation (MT) as well as suitable frameworks for storing and retrieving models efficiently. One way to cope with scalability is to exploit the wide availability of distributed clusters in the Cloud for the parallel execution of MT. However, because of the dense interconnectivity of models and the complexity of transformation logic, the efficient use of these solutions in distributed model processing and persistence is not trivial. This paper exploits the high level of abstraction of an existing relational MT language, ATL, and the semantics of a distributed programming model, MapReduce, to build an ATL engine with implicitly distributed execution. The syntax of the language is not modified and no primitive for distribution is added. Efficient distribution of model elements is achieved thanks to a distributed persistence layer, specifically designed for relational MT. We demonstrate the effectiveness of our approach by making an implementation of our solution publicly available and using it to experimentally measure the speed-up of the transformation system while scaling to larger models and clusters.},
keywords = {ATL, Distributed Computing, MapReduce, Model Persistence, Model Transformation (MT), NeoEMF, Very Large Models (VLMs)},
pubstate = {published},
tppubtype = {article}
}
Journal Article: Simona Bernardi, Juan L. Domínguez, Abel Gómez, Christophe Joubert, José Merseguer, Diego Perez-Palacin, José I. Requeno, Alberto Romeu. "A systematic approach for performance assessment using process mining: An industrial experience report". In: Empirical Software Engineering, vol. 23, no. 6, pp. 3394–3441, 2018, ISSN: 1573-7616.
@article{Bernardi:EmSE:2018,
title = {A systematic approach for performance assessment using process mining: An industrial experience report},
author = {Simona Bernardi and Juan L. Dom\'{i}nguez and Abel G\'{o}mez and Christophe Joubert and Jos\'{e} Merseguer and Diego Perez-Palacin and Jos\'{e} I. Requeno and Alberto Romeu},
url = {http://rdcu.be/Jz3J},
doi = {10.1007/s10664-018-9606-9},
issn = {1573-7616},
year = {2018},
date = {2018-03-21},
journal = {Empirical Software Engineering},
volume = {23},
number = {6},
pages = {3394--3441},
abstract = {Software performance engineering is a mature field that offers methods to assess system performance. Process mining is a promising research field applied to gain insight into system processes. The interplay of these two fields opens promising applications in industry. In this work, we report our experience applying a methodology, based on process mining techniques, to the performance assessment of a commercial data-intensive software application. The methodology has successfully assessed the scalability of future versions of this system. Moreover, it has identified bottleneck components and replication needs for fulfilling business rules. The system, an integrated port operations management system, has been developed by Prodevelop, a medium-sized software enterprise with high expertise in geospatial technologies. The performance assessment was carried out by a team composed of practitioners and researchers. Finally, the paper offers a deep discussion of the lessons learned during the experience, which will be useful for practitioners adopting the methodology and for researchers seeking new routes.},
keywords = {DICE, Experience Report, Modeling and Analysis of Real Time and Embedded systems (MARTE), Petri net (PN), Process Mining, Simulation, Software Perfomance, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {article}
}
Conference: Connie U. Smith, Vittorio Cortellessa, Abel Gómez, Samuel Kounev, Catalina Lladó, Murray Woodside. "Challenges in Automating Performance Tool Support". In: Companion of the 2018 ACM/SPEC International Conference on Performance Engineering, ICPE '18, ACM, Berlin, Germany, 2018, ISBN: 978-1-4503-5629-9.
@conference{Smith:WOSPC:2018,
title = {Challenges in Automating Performance Tool Support},
author = {Connie U. Smith and Vittorio Cortellessa and Abel G\'{o}mez and Samuel Kounev and Catalina Llad\'{o} and Murray Woodside},
url = {https://abel.gomez.llana.me/wp-content/uploads/2018/06/smith-2018-wospc.pdf},
doi = {10.1145/3185768.3186410},
isbn = {978-1-4503-5629-9},
year = {2018},
date = {2018-01-01},
booktitle = {Companion of the 2018 ACM/SPEC International Conference on Performance Engineering},
pages = {175--176},
publisher = {ACM},
address = {Berlin, Germany},
series = {ICPE '18},
abstract = {Research and development (R\&D) of new tools for performance analysis faces many challenges from immaturity and lack of documentation of supporting tools and infrastructure, incompatibility of tools, lack of access to realistic case studies and performance parameters for them, validation of results, time required versus benefit of results, subsequent maintenance, and many, many others. Yet tool development is an essential part of practical R\&D. The panelists relay experiences in developing tools, discuss what needs improvement, opportunities in developing R\&D tools, and offer advice for researchers. After introductory remarks from each panelist, there will be a discussion session with the audience.},
keywords = {modeling tools, performance evaluation tools, software performance},
pubstate = {published},
tppubtype = {conference}
}
Journal Article: Wasif Afzal, Hugo Bruneliere, Davide Di Ruscio, Andrey Sadovykh, Silvia Mazzini, Eric Cariou, Dragos Truscan, Jordi Cabot, Abel Gómez, Jesús Gorroñogoitia, Luigi Pomante, Pavel Smrz. "The MegaM@Rt2 ECSEL project: MegaModelling at Runtime – Scalable model-based framework for continuous development and runtime validation of complex systems". In: Microprocessors and Microsystems, vol. 61, pp. 86–95, 2018, ISSN: 0141-9331.
@article{Afzal:MICPRO:2018,
title = {The MegaM@Rt2 ECSEL project: MegaModelling at Runtime \textendash Scalable model-based framework for continuous development and runtime validation of complex systems},
author = {Wasif Afzal and Hugo Bruneliere and Davide Di Ruscio and Andrey Sadovykh and Silvia Mazzini and Eric Cariou and Dragos Truscan and Jordi Cabot and Abel G\'{o}mez and Jes\'{u}s Gorro\~{n}ogoitia and Luigi Pomante and Pavel Smrz},
url = {https://abel.gomez.llana.me/wp-content/uploads/2018/06/afzal-2018-megamart2.pdf},
doi = {10.1016/j.micpro.2018.05.010},
issn = {0141-9331},
year = {2018},
date = {2018-01-01},
journal = {Microprocessors and Microsystems},
volume = {61},
pages = {86--95},
abstract = {A major challenge for the European electronic industry is to enhance productivity by ensuring quality of development, integration and maintenance while reducing the associated costs. Model-Driven Engineering (MDE) principles and techniques have already shown promising capabilities, but they still need to scale up to support real-world scenarios implied by the full deployment and use of complex electronic components and systems. Moreover, maintaining efficient traceability, integration, and communication between two fundamental system life cycle phases (design time and runtime) is another challenge requiring the scalability of MDE. This paper presents an overview of the ECSEL project entitled “MegaModelling at runtime \textendash Scalable model-based framework for continuous development and runtime validation of complex systems” (MegaM@Rt2), whose aim is to address the above-mentioned challenges facing MDE. Driven by both large and small industrial enterprises, with the support of research partners and technology providers, MegaM@Rt2 aims to deliver a framework of tools and methods for: 1) system engineering/design and continuous development, 2) related runtime analysis, and 3) global models and traceability management. Diverse industrial use cases (covering strategic domains such as aeronautics, railway, construction and telecommunications) will integrate and demonstrate the validity of the MegaM@Rt2 solution. This paper provides an overview of the MegaM@Rt2 project with respect to its approach, mission, and objectives as well as its implementation details. It further introduces the consortium and describes the work packages and a few already produced deliverables.},
keywords = {Design Time, MegaM@Rt2, Megamodelling, Model-Driven Engineering (MDE), Runtime},
pubstate = {published},
tppubtype = {article}
}
2017
Conference: Abel Gómez, Xabier Mendialdua, Gábor Bergmann, Jordi Cabot, Csaba Debreceni, Antonio Garmendia, Dimitrios S. Kolovos, Juan de Lara, Salvador Trujillo. "On the Opportunities of Scalable Modeling Technologies: An Experience Report on Wind Turbines Control Applications Development". In: Modelling Foundations and Applications: 13th European Conference, ECMFA 2017, Held as Part of STAF 2017, Marburg, Germany, July 19-20, 2017, Proceedings, Lecture Notes in Computer Science, vol. 10376, Springer International Publishing, 2017, ISBN: 978-3-319-61482-3.
@conference{Gomez:ECMFA:2017,
title = {On the Opportunities of Scalable Modeling Technologies: An Experience Report on Wind Turbines Control Applications Development},
author = {Abel G\'{o}mez and Xabier Mendialdua and G\'{a}bor Bergmann and Jordi Cabot and Csaba Debreceni and Antonio Garmendia and Dimitrios S. Kolovos and Juan de Lara and Salvador Trujillo},
editor = {Anthony Anjorin and Hu\'{a}scar Espinoza},
doi = {10.1007/978-3-319-61482-3_18},
isbn = {978-3-319-61482-3},
year = {2017},
date = {2017-06-20},
booktitle = {Modelling Foundations and Applications: 13th European Conference, ECMFA 2017, Held as Part of STAF 2017, Marburg, Germany, July 19-20, 2017, Proceedings},
volume = {10376},
pages = {300--315},
publisher = {Springer International Publishing},
series = {Lecture Notes in Computer Science},
abstract = {Scalability in modeling has many facets, including the ability to build larger models and domain specific languages (DSLs) efficiently. With the aim of tackling some of the most prominent scalability challenges in Model-based Engineering (MBE), the MONDO EU project developed the theoretical foundations and open-source implementation of a platform for scalable modeling and model management. The platform includes facilities for building large DSLs, for splitting large models into sets of smaller interrelated fragments, and enables modelers to construct and refine complex models collaboratively, among other features.},
keywords = {Experience Report, Model-Driven Engineering (MDE), MONDO, Scalability},
pubstate = {published},
tppubtype = {conference}
}
Conference: Zinovy Diskin, Abel Gómez, Jordi Cabot. "Traceability Mappings as a Fundamental Instrument in Model Transformations". In: Fundamental Approaches to Software Engineering: 20th International Conference, FASE 2017, Held as Part of ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings, Lecture Notes in Computer Science, vol. 10202, Springer, Berlin, Heidelberg, 2017, ISBN: 978-3-662-54494-5.
@conference{Diskin:FASE:2017,
title = {Traceability Mappings as a Fundamental Instrument in Model Transformations},
author = {Zinovy Diskin and Abel G\'{o}mez and Jordi Cabot },
editor = {Marieke Huisman and Julia Rubin},
doi = {10.1007/978-3-662-54494-5_14},
isbn = {978-3-662-54494-5},
year = {2017},
date = {2017-03-22},
booktitle = {Fundamental Approaches to Software Engineering: 20th International Conference, FASE 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings},
volume = {10202},
pages = {247--263},
publisher = {Springer},
address = {Berlin, Heidelberg},
series = {Lecture Notes in Computer Science},
abstract = {Technological importance of traceability mappings for model transformations is well-known, but they have often been considered as an auxiliary element generated during the transformation execution and providing accessory information. This paper argues that traceability mappings should instead be regarded as a core aspect of the transformation definition, and a key instrument in the transformation management.},
keywords = {ATL, Category Theory, Model Transformation (MT), Traceability},
pubstate = {published},
tppubtype = {conference}
}
Journal Article: Gwendal Daniel, Gerson Sunyé, Amine Benelallam, Massimo Tisi, Yoann Vernageau, Abel Gómez, Jordi Cabot. "NeoEMF: A multi-database model persistence framework for very large models". In: Science of Computer Programming, vol. 149, no. Supplement C, pp. 9–14, 2017, ISSN: 0167-6423 (Special Issue on MODELS'16).
@article{Daniel:SciCo:2017,
title = {NeoEMF: A multi-database model persistence framework for very large models},
author = {Gwendal Daniel and Gerson Suny\'{e} and Amine Benelallam and Massimo Tisi and Yoann Vernageau and Abel G\'{o}mez and Jordi Cabot},
doi = {10.1016/j.scico.2017.08.002},
issn = {0167-6423},
year = {2017},
date = {2017-01-01},
journal = {Science of Computer Programming},
volume = {149},
number = {Supplement C},
pages = {9--14},
abstract = {The growing role of Model Driven Engineering (MDE) techniques in industry has emphasized scalability of existing model persistence solutions as a major issue. Specifically, there is a need to store, query, and transform very large models in an efficient way. Several persistence solutions based on relational and NoSQL databases have been proposed to achieve scalability. However, they often rely on a single data store, which suits a specific modeling activity, but may not be optimized for other use cases. This paper presents NeoEMF, a tool that tackles this issue by providing a multi-database model persistence framework. Tool website: http://www.neoemf.com},
note = {Special Issue on MODELS'16},
keywords = {Model Persistence, NeoEMF, Scalability, Very Large Models (VLMs)},
pubstate = {published},
tppubtype = {article}
}
2016
Conference: Gwendal Daniel, Gerson Sunyé, Amine Benelallam, Massimo Tisi, Yoann Vernageau, Abel Gómez, Jordi Cabot. "NeoEMF: a Multi-database Model Persistence Framework for Very Large Models". In: Proceedings of the MoDELS 2016 Demo and Poster Sessions, co-located with ACM/IEEE MoDELS 2016, CEUR Workshop Proceedings, vol. 1725, Saint-Malo, France, 2016, ISSN: 1613-0073.
@conference{Daniel:MODELS:2016,
title = {NeoEMF: a Multi-database Model Persistence Framework for Very Large Models},
author = {Gwendal Daniel and Gerson Suny\'{e} and Amine Benelallam and Massimo Tisi and Yoann Vernageau and Abel G\'{o}mez and Jordi Cabot},
editor = {Juan de Lara and Peter J. Clarke and Mehrdad Sabetzadeh},
url = {http://ceur-ws.org/Vol-1725/demo1.pdf},
issn = {1613-0073},
year = {2016},
date = {2016-11-11},
booktitle = {Proceedings of the MoDELS 2016 Demo and Poster Sessions co-located with ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MoDELS 2016), Saint-Malo, France, October 2-7, 2016.},
volume = {1725},
pages = {1--7},
publisher = {CEUR Workshop Proceedings},
address = {Saint-Malo, France},
abstract = {The growing use of Model Driven Engineering (MDE) techniques in industry has emphasized scalability of existing model persistence solutions as a major issue. Specifically, there is a need to store, query, and transform very large models in an efficient way. Several persistence solutions based on relational and NoSQL databases have been proposed to achieve scalability. However, existing solutions often rely on a single data store, which suits a specific modeling activity, but may not be optimized for other use cases. In this article we present NeoEMF, a multi-database model persistence framework able to store very large models in key-value stores, graph databases, and wide column databases. We introduce NeoEMF core features, and present the different data stores and their applications. NeoEMF is open source and available online.},
keywords = {Model Persistence, NeoEMF, Scalability, Very Large Models (VLMs)},
pubstate = {published},
tppubtype = {conference}
}
Conference: Abel Gómez, José Merseguer. "Una herramienta para evaluar el rendimiento de aplicaciones intensivas en datos" [A tool for evaluating the performance of data-intensive applications]. In: Actas de las XXI Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2016), SISTEDES, Salamanca, Spain, 2016.
@conference{Gomez:JISBD:2016,
title = {Una herramienta para evaluar el rendimiento de aplicaciones intensivas en datos},
author = {Abel G\'{o}mez and Jos\'{e} Merseguer},
editor = {Jes\'{u}s Garc\'{i}a Molina},
url = {http://hdl.handle.net/11705/JISBD/2016/026},
year = {2016},
date = {2016-09-13},
booktitle = {Actas de las XXI Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2016)},
publisher = {SISTEDES},
address = {Salamanca, Spain},
abstract = {Las aplicaciones intensivas en datos (AID) que usan tecnolog\'{i}as de Big Data se est\'{a}n convirtiendo en una parte importante del mercado de desarrollo de software. Sin embargo, las t\'{e}cnicas --y su automatizaci\'{o}n-- para el asesoramiento de la calidad para este tipo de aplicaciones es claramente insuficiente. El proyecto DICE H2020 tiene como objetivo definir metodolog\'{i}as y crear herramientas para desarrollar y monitorizar AID mediante t\'{e}cnicas de ingenier\'{i}a dirigida por modelos. En este art\'{i}culo presentamos un componente clave del proyecto DICE: su herramienta de simulaci\'{o}n. Esta herramienta es capaz de evaluar el rendimiento de AID simulando su comportamiento mediante modelos de redes de Petri. Como complemento, existe a disposici\'{o}n un v\'{i}deo mostrando la herramienta en http://tiny.cc/z1qzay.},
keywords = {Computer Aided Design (CASE), Data-Intensive Applications (DIA), DICE, Model-Driven Engineering (MDE), Modeling and Analysis of Real Time and Embedded systems (MARTE), Petri net (PN), Simulation, UML Profiles, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:QUDOS:2016,
title = {Towards a UML Profile for Data Intensive Applications},
author = {Abel G\'{o}mez and Jos\'{e} Merseguer and Elisabetta Di Nitto and Damian A. Tamburri},
doi = {10.1145/2945408.2945412},
isbn = {978-1-4503-4411-1},
year = {2016},
date = {2016-07-21},
booktitle = {Proceedings of the 2nd International Workshop on Quality-Aware DevOps, co-located with ACM SIGSOFT International Symposium on Software Testing and Analysis 2016 (ISSTA'16)},
pages = {18--23},
publisher = {ACM},
address = {New York, NY, USA},
series = {QUDOS 2016},
abstract = {Data intensive applications that leverage Big Data technologies are rapidly gaining market trend. However, their design and quality assurance are far from satisfying software engineers' needs. In fact, a CapGemini research shows that only 13% of organizations have achieved full-scale production for their Big Data implementations. We aim at addressing an early design and a quality evaluation of data intensive applications, being our goal to help software engineers on assessing quality metrics, such as the response time of the application. We address this goal by means of a quality analysis tool-chain. At the core of the tool, we are developing a Profile that converts the Unified Modeling Language into a domain specific modeling language for quality evaluation of data intensive applications.},
note = {Saarbr\"{u}cken, Germany},
keywords = {Computer Aided Design (CASE), Data-Intensive Applications (DIA), DICE, Model-Driven Engineering (MDE), Modeling and Analysis of Real Time and Embedded systems (MARTE), UML Profiles, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:JCSD:2016,
title = {A Tool for Assessing Performance Requirements of Data-Intensive Applications},
author = {Abel G\'{o}mez and Christophe Joubert and Jos\'{e} Merseguer},
editor = {Miguel J. Hornos Barranco},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-jcsd-2016.pdf},
isbn = {978-84-16478-90-3},
year = {2016},
date = {2016-06-15},
booktitle = {Actas de las XXIV Jornadas de Concurrencia y Sistemas Distribuidos (JCSD 2016)},
pages = {159--169},
publisher = {Godel S. L.},
address = {Granada, Spain},
abstract = {Big Data is becoming a core asset for present economy and businesses, and as such, Data-Intensive Applications (DIA) that use Big Data technologies are becoming crucial products in the software development market. However, quality assurance of such applications is still an open issue. The H2020 DICE project aims to define a quality-driven framework for developing DIA based on model-driven engineering (MDE) techniques. In this paper we present a key component of the DICE Framework, the DICE Simulation Tool. The tool is able to simulate the behavior of a DIA to assess its performance using a Petri net model. To showcase its capabilities we use the Posidonia Operations case study, a real-world scenario brought from one of our industrial partners. In addition to this paper, a video demonstrating the tool is available at http://tiny.cc/z1qzay.},
keywords = {Computer Aided Design (CASE), Data-Intensive Applications (DIA), DICE, Modeling and Analysis of Real Time and Embedded systems (MARTE), Petri net (PN), Posidonia Operations, UML Profiles, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Ed-douibi:SAC:2016,
title = {EMF-REST: Generation of RESTful APIs from Models},
author = {Hamza Ed-douibi and Javier Luis C\'{a}novas Izquierdo and Abel G\'{o}mez and Massimo Tisi and Jordi Cabot},
doi = {10.1145/2851613.2851782},
isbn = {978-1-4503-3739-7},
year = {2016},
date = {2016-04-04},
booktitle = {Proceedings of the 31st Annual ACM Symposium on Applied Computing},
pages = {1446--1453},
publisher = {ACM},
address = {New York, NY, USA},
series = {SAC '16},
abstract = {In the last years, there has been an increasing interest for Model-Driven Engineering (MDE) solutions in the Web. Web-based modeling solutions can leverage on better support for distributed management (i.e., the Cloud) and collaboration. However, current modeling environments and frameworks are usually restricted to desktop-based scenarios and therefore their capabilities to move to the Web are still very limited. In this paper we present an approach to generate Web APIs out of models, thus paving the way for managing models and collaborating on them online. The approach, called EMF-REST, takes Eclipse Modeling Framework (EMF) data models as input and generates Web APIs following the REST principles and relying on well-known libraries and standards, thus facilitating its comprehension and maintainability. Also, EMF-REST integrates model and Web-specific features to provide model validation and security capabilities, respectively, to the generated API.},
note = {Pisa, Italia},
keywords = {Domain-Specific Languages (DSLs), Model-Driven Engineering (MDE), Model-Driven Web Engineering (MDWE), REST},
pubstate = {published},
tppubtype = {conference}
}
2015
@conference{Gomez:ECMFA:2015,
title = {Decentralized Model Persistence for Distributed Computing},
author = {Abel G\'{o}mez and Amine Benelallam and Massimo Tisi},
editor = {Dimitris Kolovos and Davide Di Ruscio and Nicholas Matragkas and Jes\'{u}s S\'{a}nchez Cuadrado and Istv\'{a}n R\'{a}th and Massimo Tisi},
url = {http://ceur-ws.org/Vol-1406/paper5.pdf},
issn = {1613-0073},
year = {2015},
date = {2015-07-21},
booktitle = {Proceedings of the 3rd Workshop on Scalable Model Driven Engineering part of the Software Technologies: Applications and Foundations (STAF 2015) federation of conferences},
volume = {1406},
pages = {42-51},
publisher = {CEUR Workshop Proceedings},
address = {L'Aquila, Italy},
abstract = {The necessity of manipulating very large amounts of data and the wide availability of computational resources on the Cloud is boosting the popularity of distributed computing in industry. The applicability of model-driven engineering in such scenarios is hampered today by the lack of an efficient model-persistence framework for distributed computing. In this paper we present NeoEMF/HBase, a persistence backend for the Eclipse Modeling Framework (EMF) built on top of the Apache HBase data store. Model distribution is hidden from client applications, that are transparently provided with the model elements they navigate. Access to remote model elements is decentralized, avoiding the bottleneck of a single access point. The persistence model is based on key-value stores that allow for efficient on-demand model persistence.},
keywords = {Distributed Computing, Distributed Persistence, HBase, Key-Value Stores, Model Persistence, NeoEMF},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:FASE:2015,
title = {Map-Based Transparent Persistence for Very Large Models},
author = {Abel G\'{o}mez and Massimo Tisi and Gerson Suny\'{e} and Jordi Cabot},
editor = {Alexander Egyed and Ina Schaefer},
doi = {10.1007/978-3-662-46675-9_2},
isbn = {978-3-662-46675-9},
year = {2015},
date = {2015-04-11},
booktitle = {Fundamental Approaches to Software Engineering: 18th International Conference, FASE 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11-18, 2015, Proceedings},
volume = {9033},
pages = {19--34},
publisher = {Springer Berlin Heidelberg},
series = {Lecture Notes in Computer Science},
abstract = {The progressive industrial adoption of Model-Driven Engineering (MDE) is fostering the development of large tool ecosystems like the Eclipse Modeling project. These tools are built on top of a set of base technologies that have been primarily designed for small-scale scenarios, where models are manually developed. In particular, efficient runtime manipulation for large-scale models is an under-studied problem and this is hampering the application of MDE to several industrial scenarios.
In this paper we introduce and evaluate a map-based persistence model for MDE tools. We use this model to build a transparent persistence layer for modeling tools, on top of a map-based database engine. The layer can be plugged into the Eclipse Modeling Framework, lowering execution times and memory consumption levels of other existing approaches. Empirical tests are performed based on a typical industrial scenario, model-driven reverse engineering, where very large software models originate from the analysis of massive code bases. The layer is freely distributed and can be immediately used for enhancing the scalability of any existing Eclipse Modeling tool.},
keywords = {Key-Value Stores, Model Persistence, Model-Driven Engineering (MDE), NeoEMF, Very Large Models (VLMs)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Benelallam:SEPS:2015,
title = {ATL-MR: Model Transformation on MapReduce},
author = {Amine Benelallam and Abel G\'{o}mez and Massimo Tisi},
doi = {10.1145/2837476.2837482},
isbn = {978-1-4503-3910-0},
year = {2015},
date = {2015-01-01},
booktitle = {Proceedings of the 2nd International Workshop on Software Engineering for Parallel Systems},
pages = {45--49},
publisher = {ACM},
address = {New York, NY, USA},
series = {SEPS 2015},
abstract = {The Model-Driven Engineering (MDE) paradigm has been successfully embraced for manufacturing maintainable software in several domains while decreasing costs and efforts. One of its principal concepts is rule-based Model Transformation (MT) that enables an automated processing of models for different intentions. The user-friendly syntax of MT languages is designed for allowing users to specify and execute these operations in an effortless manner. Existing MT engines, however, are incapable of accomplishing transformation operations in an acceptable time while facing complex transformations. Worse, against large amount of data, these tools crash throwing an out of memory exception. In this paper, we introduce ATL-MR, a tool to automatically distribute the execution of model transformations written in a popular MT language, ATL, on top of a well-known distributed programming model, MapReduce. We briefly present an overview of our approach, we describe the changes with respect to the standard ATL transformation engine, finally, we experimentally show the scalability of this solution. },
note = {Pittsburgh, PA, USA},
keywords = {ATL, Distributed Computing, MapReduce, Model Transformation (MT), Tool},
pubstate = {published},
tppubtype = {conference}
}
@conference{Benelallam:SLE:2015,
title = {Distributed Model-to-model Transformation with ATL on MapReduce},
author = {Amine Benelallam and Abel G\'{o}mez and Massimo Tisi and Jordi Cabot},
doi = {10.1145/2814251.2814258},
isbn = {978-1-4503-3686-4},
year = {2015},
date = {2015-01-01},
booktitle = {Proceedings of the 2015 ACM SIGPLAN International Conference on Software Language Engineering},
pages = {37--48},
publisher = {ACM},
address = {New York, NY, USA},
series = {SLE 2015},
abstract = {Efficient processing of very large models is a key requirement for the adoption of Model-Driven Engineering (MDE) in some industrial contexts. One of the central operations in MDE is rule-based model transformation (MT). It is used to specify manipulation operations over structured data coming in the form of model graphs. However, being based on computationally expensive operations like subgraph isomorphism, MT tools are facing issues on both memory occupancy and execution time while dealing with the increasing model size and complexity. One way to overcome these issues is to exploit the wide availability of distributed clusters in the Cloud for the distributed execution of MT. In this paper, we propose an approach to automatically distribute the execution of model transformations written in a popular MT language, ATL, on top of a well-known distributed programming model, MapReduce. We show how the execution semantics of ATL can be aligned with the MapReduce computation model. We describe the extensions to the ATL transformation engine to enable distribution, and we experimentally demonstrate the scalability of this solution in a reverse-engineering scenario. },
note = {Pittsburgh, PA, USA},
keywords = {ATL, Distributed Computing, Language Engineering, MapReduce, Model Transformation (MT)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Labib:SAC:2015,
title = {Enforcing Reuse and Customization in the Development of Learning Objects: A Product Line Approach},
author = {A. Ezzat Labib and M. Carmen Penad\'{e}s and Jos\'{e} H. Can\'{o}s and Abel G\'{o}mez},
doi = {10.1145/2695664.2695991},
isbn = {978-1-4503-3196-8},
year = {2015},
date = {2015-01-01},
booktitle = {Proceedings of the 30th Annual ACM Symposium on Applied Computing},
pages = {261--263},
publisher = {ACM},
address = {New York, NY, USA},
series = {SAC '15},
abstract = {The growing use of information technologies in the educational cycles has raised new requirements for the development of Interactive Learning Materials in terms of content reuse, customization, and ease of creation and efficiency of production. In practical terms, the goal is the development of tools for creating reusable, granular, durable, and interoperable learning objects, and to compose such objects into meaningful courseware pieces. Current learning object development tools require special technical skills in the instructors to exploit reuse and customization features, leading sometimes to unsatisfactory user experiences.
In this paper, we explore a new way to reuse and customization following Product Line Engineering principles and tools. We have applied product line-based document engineering tools to create the so-called Learning Object Authoring Tool (LOAT), which supports the development of learning materials following the Cisco's Reusable Information Object strategy. We describe the principles behind LOAT, outline its design, and give clues about how it may be used by instructors to create learning objects in their own disciplines.},
note = {Salamanca, Spain},
keywords = {Authoring Tool, Document Product Line (DPL), e-learning, Feature Modeling (FM), Learning Object, Software Product Lines (SPL)},
pubstate = {published},
tppubtype = {conference}
}
2014
@conference{Gomez:MODELS:2014,
title = {DPLfw: A Framework for the Product-Line-Based Generation of Variable Content Documents},
author = {Abel G\'{o}mez and Pau Mart\'{i} and M. Carmen Penad\'{e}s and Jos\'{e} H. Can\'{o}s },
editor = {Tao Yue and Benoit Combemale},
url = {http://ceur-ws.org/Vol-1255/paper2.pdf},
issn = {1613-0073},
year = {2014},
date = {2014-09-24},
booktitle = {Proceedings of the Demonstrations Track of the ACM/IEEE 17th International Conference on Model Driven Engineering Languages and Systems (MODELS 2014)},
volume = {1255},
publisher = {CEUR Workshop Proceedings},
address = {Valencia, Spain},
abstract = {Document Product Lines (DPL) is a document engineering methodology that applies product-line engineering principles to the generation of documents in high variability contexts and with high reuse of components. Instead of standalone documents, DPL promotes the definition of families of documents where the members share some common content while differ in other parts. The key for the definition is the availability of a collection of content assets which can be parameterized and instantiated at document generation time.
In this demonstration, we show the features of the DPL framework (DPLfw), the tool that supports DPL. DPLfw implements the domain engineering and application engineering stages of typical product line engineering approaches, supports different asset repositories, and generates customized documents in different output formats. We use the case study of the generation of customized emergency plans in a University campus [http://youtu.be/ueKGfmfkyI0].},
keywords = {Document Generation, Document Product Line (DPL), DPLfw, Feature Modeling (FM), Software Product Lines (SPL), Variable Data Printing (VDP)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Penades:PEGOV:2014,
title = {Product Line-Based Customization of e-Government Documents},
author = {M. Carmen Penad\'{e}s and Pau Mart\'{i} and Jos\'{e} H. Can\'{o}s and Abel G\'{o}mez},
editor = {Iv\'{a}n Cantador and Min Chi and Rosta Farzan and Robert J\"{a}schke},
url = {http://ceur-ws.org/Vol-1181/pegov2014_paper_04.pdf},
issn = {1613-0073},
year = {2014},
date = {2014-06-27},
booktitle = {Posters, Demos, Late-breaking Results and Workshop Proceedings of the 22nd Conference on User Modeling, Adaptation, and Personalization co-located with the 22nd Conference on User Modeling, Adaptation, and Personalization (UMAP2014)},
volume = {1181},
pages = {38--47},
publisher = {CEUR Workshop Proceedings},
address = {Aalborg, Denmark},
abstract = {Content personalization has been one of the major trends in recent Document Engineering research. The “one document for n users” paradigm is being replaced by the “one user, one document” model, where the content delivered to a particular user is generated by some means. This is a very promising approach for e-Government, where users increasingly demand personalized government services, including document generation. In this paper, we introduce a method for the generation of personalized documents called Document Product Lines (DPL). DPL allows generating content in domains with high variability and high levels of reuse. We describe the basic principles underlying DPL and show its application to the e-Government field using the personalized tax statement as a case study.},
keywords = {Document Generation, Document Product Line (DPL), DPLfw, Feature Modeling (FM), Personalized e-Government Services, Software Product Lines (SPL)},
pubstate = {published},
tppubtype = {conference}
}
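The e-Government entry above describes the “one user, one document” model: a family of document fragments is filtered by a user profile so that each user receives a personalized assembly. A minimal sketch of that idea follows; it is an illustration only, not the DPL/DPLfw API, and all fragment texts, field names, and profiles are invented:

```python
# Sketch of "one user, one document": each fragment carries a guard
# predicate over the user profile; generation keeps only the fragments
# whose guard holds for that user. Names and data are illustrative.

fragments = [
    {"text": "General tax instructions.", "when": lambda u: True},
    {"text": "Self-employment annex.",    "when": lambda u: u["self_employed"]},
    {"text": "Large-family deduction.",   "when": lambda u: u["children"] >= 3},
]

def generate_document(user):
    """Instantiate the document family for one user profile."""
    return "\n".join(f["text"] for f in fragments if f["when"](user))

alice = {"self_employed": True, "children": 1}
print(generate_document(alice))
```

Two users with different profiles thus receive two different documents assembled from the same reusable fragments, which is the reuse argument the paper makes.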
ConferenceJosé H. Canós, Juan Sánchez-Díaz, Vicent Orts, M. Carmen Penadés, Abel Gómez, Marcos R.S. Borges Turning emergency plans into executable artifacts ISCRAM 2014 Conference Proceedings – 11th International Conference on Information Systems for Crisis Response and Management, The Pennsylvania State University, University Park, PA, USA, 2014, ISBN: 978-0-692-21194-6. Abstract | Links | BibTeX | Tags: Digital Object Architecture (DOA), Emergency Plans Development and Improvement, Knowledge Intensive Workflow, SAGA @conference{Canos:ISCRAM:2014,
title = {Turning emergency plans into executable artifacts},
author = {Jos\'{e} H. Can\'{o}s and Juan S\'{a}nchez-D\'{i}az and Vicent Orts and M. Carmen Penad\'{e}s and Abel G\'{o}mez and Marcos R.S. Borges},
editor = {Starr Roxanne Hiltz and Mark S. Pfaff and Linda Plotnick and Patrick C. Shih},
url = {http://idl.iscram.org/files/canos-cerda/2014/367_Canos-Cerda_etal2014.pdf},
isbn = {978-0-692-21194-6},
year = {2014},
date = {2014-05-19},
booktitle = {ISCRAM 2014 Conference Proceedings \textendash 11th International Conference on Information Systems for Crisis Response and Management},
pages = {498--502},
publisher = {The Pennsylvania State University},
address = {University Park, PA, USA},
abstract = {On the way to the improvement of emergency plans, we show how a structured specification of response procedures allows transforming static plans into dynamic, executable entities that can drive the way different actors participate in crisis responses. Additionally, the execution of plans requires the definition of information access mechanisms that allow execution engines to provide an actor with all the information resources he or she needs to accomplish a response task. We describe work in progress to improve SAGA's Plan Definition Module and Plan Execution Engine to support information-rich plan execution.},
keywords = {Digital Object Architecture (DOA), Emergency Plans Development and Improvement, Knowledge Intensive Workflow, SAGA},
pubstate = {published},
tppubtype = {conference}
}
Journal ArticleAbel Gómez, M. Carmen Penadés, José H. Canós, Marcos R.S. Borges, Manuel Llavador A framework for variable content document generation with multiple actors In: Information and Software Technology, vol. 56, no. 9, pp. 1101–1121, 2014, ISSN: 0950-5849, (Special Sections from “Asia-Pacific Software Engineering Conference (APSEC), 2012” and “Software Product Line Conference (SPLC), 2012”). Abstract | Links | BibTeX | Tags: Document Generation, Document Product Line (DPL), Document Workflow, DPLfw, Feature Modeling (FM), Model-Driven Engineering (MDE), Software Product Lines (SPL), Variable Data Printing (VDP) @article{Gomez:IST:2014,
title = {A framework for variable content document generation with multiple actors},
author = {Abel G\'{o}mez and M. Carmen Penad\'{e}s and Jos\'{e} H. Can\'{o}s and Marcos R.S. Borges and Manuel Llavador},
doi = {10.1016/j.infsof.2013.12.006},
issn = {0950-5849},
year = {2014},
date = {2014-01-01},
journal = {Information and Software Technology},
volume = {56},
number = {9},
pages = {1101--1121},
abstract = {Context
Advances in customization have highlighted the need for tools supporting variable content document management and generation in many domains. Current tools allow the generation of highly customized documents that are variable in both content and layout. However, most frameworks are technology-oriented, and their use requires advanced skills in implementation-related tools, which means their use by end users (i.e. document designers) is severely limited.
Objective
Starting from past and current trends for customized document authoring, our goal is to provide a document generation alternative in which variants are specified at a high level of abstraction and content reuse can be maximized in high variability scenarios.
Method
Based on our experience in Document Engineering, we identified areas in the variable content document management and generation field open to further improvement. We first classified the primary sources of variability in document composition processes and then developed a methodology, which we called DPL \textendash based on Software Product Lines principles \textendash to support document generation in high variability scenarios.
Results
In order to validate the applicability of our methodology we implemented a tool \textendash DPLfw \textendash to carry out DPL processes. After using this in different scenarios, we compared our proposal with other state-of-the-art tools for variable content document management and generation.
Conclusion
DPLfw showed a capacity for the automatic generation of variable content documents equal to, and in some cases surpassing, that of other currently available approaches. To the best of our knowledge, DPLfw is the only framework that combines variable content and document workflow facilities, easing the generation of variable content documents in which multiple actors play different roles.},
note = {Special Sections from “Asia-Pacific Software Engineering Conference (APSEC), 2012” and “Software Product Line Conference (SPLC), 2012”},
keywords = {Document Generation, Document Product Line (DPL), Document Workflow, DPLfw, Feature Modeling (FM), Model-Driven Engineering (MDE), Software Product Lines (SPL), Variable Data Printing (VDP)},
pubstate = {published},
tppubtype = {article}
}
ConferenceAmine Benelallam, Abel Gómez, Gerson Sunyé, Massimo Tisi, David Launay Neo4EMF, A Scalable Persistence Layer for EMF Models Modelling Foundations and Applications: 10th European Conference, ECMFA 2014, Held as Part of STAF 2014, York, UK, July 21-25, 2014. Proceedings, vol. 8569, Springer International Publishing, 2014, ISBN: 978-3-319-09195-2, (York, UK). Abstract | Links | BibTeX | Tags: Graph Databases, Model Persistence, NeoEMF, Very Large Models (VLMs) @conference{Benelallam:ECMFA:2014,
title = {Neo4EMF, A Scalable Persistence Layer for EMF Models},
author = {Amine Benelallam and Abel G\'{o}mez and Gerson Suny\'{e} and Massimo Tisi and David Launay},
editor = {Jordi Cabot and Julia Rubin},
doi = {10.1007/978-3-319-09195-2_15},
isbn = {978-3-319-09195-2},
year = {2014},
date = {2014-01-01},
booktitle = {Modelling Foundations and Applications: 10th European Conference, ECMFA 2014, Held as Part of STAF 2014, York, UK, July 21-25, 2014. Proceedings},
volume = {8569},
pages = {230--241},
publisher = {Springer International Publishing},
abstract = {Several industrial contexts require software engineering methods and tools able to handle large-size artifacts. The central idea of abstraction makes model-driven engineering (MDE) a promising approach in such contexts, but current tools do not scale to very large models (VLMs): already the task of storing and accessing VLMs from a persisting support is currently inefficient. In this paper we propose a scalable persistence layer for the de-facto standard MDE framework EMF. The layer exploits the efficiency of graph databases in storing and accessing graph structures, as EMF models are. A preliminary experimentation shows that typical queries in reverse-engineering EMF models have good performance on such persistence layer, compared to file-based backends.},
note = {York, UK},
keywords = {Graph Databases, Model Persistence, NeoEMF, Very Large Models (VLMs)},
pubstate = {published},
tppubtype = {conference}
}
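The Neo4EMF entry above rests on one observation: EMF models are graphs, so model elements map naturally onto nodes and references onto labelled edges. The sketch below illustrates that mapping in miniature; it is not the actual Neo4EMF or EMF API, and the class, method, and element names are invented for illustration:

```python
# Sketch of persisting a model as a graph, in the spirit of Neo4EMF:
# model elements become nodes with attributes, cross-references become
# labelled edges, and model navigation becomes edge traversal.
# The real tool targets EMF models and a Neo4j backend.

class GraphStore:
    def __init__(self):
        self.nodes = {}   # node id -> attribute dict
        self.edges = []   # (source id, reference name, target id)

    def add_element(self, node_id, **attributes):
        self.nodes[node_id] = attributes

    def add_reference(self, source, name, target):
        self.edges.append((source, name, target))

    def outgoing(self, node_id, name):
        """Navigate a named reference, as a model query would."""
        return [t for (s, n, t) in self.edges
                if s == node_id and n == name]

store = GraphStore()
store.add_element("pkg", kind="Package", name="core")
store.add_element("cls", kind="Class", name="Document")
store.add_reference("pkg", "ownedElements", "cls")
print(store.outgoing("pkg", "ownedElements"))  # prints ['cls']
```

A graph backend makes such traversals local operations on the store, which is why the paper reports good performance for reverse-engineering queries compared to loading whole file-based resources.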
2013
|
ConferenceJosé H. Canós, M. Carmen Penadés, Marcos R.S. Borges, Abel Gómez A Product Line Approach to Customized Recipe Generation Proceedings of the 5th International Workshop on Multimedia for Cooking & Eating Activities, CEA '13 ACM, New York, NY, USA, 2013, ISBN: 978-1-4503-2392-5, (Barcelona, Spain). Abstract | Links | BibTeX | Tags: Document Generation, Document Product Line (DPL), DPLfw, Feature Modeling (FM), Recipe Generation, Software Product Lines (SPL), Variability Management @conference{Canos:CEA:2013,
title = {A Product Line Approach to Customized Recipe Generation},
author = {Jos\'{e} H. Can\'{o}s and M. Carmen Penad\'{e}s and Marcos R.S. Borges and Abel G\'{o}mez},
doi = {10.1145/2506023.2506036},
isbn = {978-1-4503-2392-5},
year = {2013},
date = {2013-10-21},
booktitle = {Proceedings of the 5th International Workshop on Multimedia for Cooking \& Eating Activities},
pages = {69--74},
publisher = {ACM},
address = {New York, NY, USA},
series = {CEA '13},
abstract = {Document Product Lines (DPL) is an approach to variable content document generation based on the definition of document families that share some common content while differing in other parts. Following Software Product Line Engineering principles, the different documents in a family are produced with a high degree of reuse of document components. In this paper, we use DPL for the development of variable content recipe documents. We describe a flexible approach to recipe generation that allows the customization of recipe content in terms of factors such as user expertise, ingredients, and even delivery format.},
note = {Barcelona, Spain},
keywords = {Document Generation, Document Product Line (DPL), DPLfw, Feature Modeling (FM), Recipe Generation, Software Product Lines (SPL), Variability Management},
pubstate = {published},
tppubtype = {conference}
}
Journal ArticleJosé H. Canós, Marcos R.S. Borges, M. Carmen Penadés, Abel Gómez, Manuel Llavador Improving emergency plans management with SAGA In: Technological Forecasting and Social Change, vol. 80, no. 9, pp. 1868 - 1876, 2013, ISSN: 0040-1625, (Planning and Foresight Methodologies in Emergency Preparedness and Management). Abstract | Links | BibTeX | Tags: Emergency Management, Emergency Plans Development and Improvement, Information Systems (IS), SAGA @article{Canos:TFSC:2013,
title = {Improving emergency plans management with SAGA},
author = {Jos\'{e} H. Can\'{o}s and Marcos R.S. Borges and M. Carmen Penad\'{e}s and Abel G\'{o}mez and Manuel Llavador},
doi = {10.1016/j.techfore.2013.02.014},
issn = {0040-1625},
year = {2013},
date = {2013-01-01},
journal = {Technological Forecasting and Social Change},
volume = {80},
number = {9},
pages = {1868 - 1876},
abstract = {Emergency plans are the tangible result of the preparedness activities of the emergency management lifecycle. In many countries, public service organizations have the legal obligation to develop and maintain emergency plans covering all possible hazards relative to their areas of operation. However, little support is provided to planners in the development and use of plans. Often, advances in software technology have not been exploited, and plans remain as text documents whose accessibility is very limited. In this paper, we advocate for the definition and implementation of plan management processes as the first step to better produce and manage emergency plans. The main contribution of our work is to raise the need for IT-enabled planning environments, either at the national or organization-specific levels, which can lead to more uniform plans that are easier to evaluate and share, with support to stakeholders other than responders, among other advantages. To illustrate our proposal, we introduce SAGA, a framework that supports the full lifecycle of emergency plan management. SAGA provides all the actors involved in plan management with a number of tools to support all the stages of the plan lifecycle. We outline the architecture of the system, and show with a case study how planning processes can benefit from a system like SAGA.},
note = {Planning and Foresight Methodologies in Emergency Preparedness and Management},
keywords = {Emergency Management, Emergency Plans Development and Improvement, Information Systems (IS), SAGA},
pubstate = {published},
tppubtype = {article}
}
2012
|
ConferenceAbel Gómez, M. Carmen Penadés, José H. Canós Generación de Documentos con Contenido Variable en DPLfw Actas de las XVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2012). Almería, septiembre de 2012., SISTEDES, 2012, ISBN: 978-84-15487-28-9. Abstract | Links | BibTeX | Tags: Darwin Information Typing Architecture (DITA), Document Product Line (DPL), DPLfw, Feature Modeling (FM), Model-Driven Engineering (MDE), Software Product Lines (SPL), Variable Data Printing (VDP) @conference{Gomez:JISBD:2012,
title = {Generaci\'{o}n de Documentos con Contenido Variable en DPLfw},
author = {Abel G\'{o}mez and M. Carmen Penad\'{e}s and Jos\'{e} H. Can\'{o}s},
editor = {Antonio Ru\'{i}z-Cort\'{e}s and Luis Iribarne},
url = {http://hdl.handle.net/11705/JISBD/2012/075},
isbn = {978-84-15487-28-9},
year = {2012},
date = {2012-09-17},
booktitle = {Actas de las XVII Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2012). Almer\'{i}a, septiembre de 2012.},
pages = {629-642},
publisher = {SISTEDES},
abstract = {Technological solutions currently exist for the generation of documents personalized in both content and appearance. However, all of them require extensive knowledge of specialized languages (XML, XSLT or XPath, among others) and do not cover domain-specific tasks such as the identification of variability in document content. In this work, we present DPLfw, a model-based framework for the generation of variable content documents. DPLfw is an implementation of the Document Product Lines approach, in which content variability is represented by means of features and document generation is supported by a product line-based process. This paper describes the architecture of DPLfw and shows its use in the generation of user documentation.},
keywords = {Darwin Information Typing Architecture (DITA), Document Product Line (DPL), DPLfw, Feature Modeling (FM), Model-Driven Engineering (MDE), Software Product Lines (SPL), Variable Data Printing (VDP)},
pubstate = {published},
tppubtype = {conference}
}
ConferenceM. Carmen Penadés, Abel Gómez, José H. Canós Deriving Document Workflows from Feature Models Proceedings of the 2012 ACM Symposium on Document Engineering, DocEng '12 ACM, New York, NY, USA, 2012, ISBN: 978-1-4503-1116-8, (Paris, France). Abstract | Links | BibTeX | Tags: Document Generation, Document Product Line (DPL), Document Workflow, DPLfw, Feature Modeling (FM), Organizational Model, Software Product Lines (SPL), Variable Data Printing (VDP) @conference{Penades:DocEng:2012,
title = {Deriving Document Workflows from Feature Models},
author = {M. Carmen Penad\'{e}s and Abel G\'{o}mez and Jos\'{e} H. Can\'{o}s},
doi = {10.1145/2361354.2361405},
isbn = {978-1-4503-1116-8},
year = {2012},
date = {2012-09-04},
booktitle = {Proceedings of the 2012 ACM Symposium on Document Engineering},
pages = {237--240},
publisher = {ACM},
address = {New York, NY, USA},
series = {DocEng '12},
abstract = {Despite increasing interest in the Document Engineering community, a formal definition of document workflow is still to come. Often, the term refers to an abstract process consisting of a set of tasks that contribute to some document contents, and techniques are being developed to support parts of these tasks rather than to generate the process itself. In most proposals, these tasks are implicit in the business processes running in an organization, lacking an explicit document workflow model that could be analysed and enacted as a coherent unit. In this paper, we propose a document-centric approach to document workflow generation. We have extended the feature-based document metamodel of the Document Product Lines approach with an organizational metamodel. For a given configuration of the feature model, we assign tasks to different members of the organization to contribute to the document contents. Moreover, the relationships between features define an ordering of the tasks, which may be refined to produce a specification of the document workflow model automatically. The generation of customized software manuals is used to illustrate the proposal.},
note = {Paris, France},
keywords = {Document Generation, Document Product Line (DPL), Document Workflow, DPLfw, Feature Modeling (FM), Organizational Model, Software Product Lines (SPL), Variable Data Printing (VDP)},
pubstate = {published},
tppubtype = {conference}
}
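The DocEng '12 entry above derives a document workflow from a feature-model configuration: each selected feature becomes an authoring task assigned to a member of the organization, and the parent-child relationships between features induce an ordering of those tasks. A minimal sketch of that derivation follows; the feature names, authors, and data layout are invented, and the real approach operates on richer feature and organizational metamodels:

```python
# Sketch: derive an ordered task list from a feature-model configuration,
# in the spirit of "Deriving Document Workflows from Feature Models".
# Parent features are visited before their children, mirroring how
# feature relationships induce a task ordering. Names are illustrative.

features = {            # feature -> parent feature (None for the root)
    "Manual": None,
    "Installation": "Manual",
    "Usage": "Manual",
    "Advanced": "Usage",
}
selection = {"Manual", "Installation", "Usage", "Advanced"}
authors = {"Installation": "alice", "Usage": "bob", "Advanced": "bob"}

def derive_workflow(features, selection, authors):
    """Order the selected features parents-first and attach the
    organizational assignment to each resulting authoring task."""
    ordered, seen = [], set()

    def visit(feature):
        if feature in seen or feature not in selection:
            return
        parent = features[feature]
        if parent is not None:
            visit(parent)          # ensure the parent task comes first
        seen.add(feature)
        ordered.append((feature, authors.get(feature, "unassigned")))

    for feature in sorted(selection):
        visit(feature)
    return ordered

for task, who in derive_workflow(features, selection, authors):
    print(f"write '{task}' section -> {who}")
```

Deselecting a feature (say, removing "Advanced" from `selection`) silently drops its task, which is how one configuration of the family yields one concrete workflow.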
ConferenceAbel Gómez, M. Carmen Penadés, José H. Canós, Marcos R.S. Borges, Manuel Llavador DPLfw: A Framework for Variable Content Document Generation Proceedings of the 16th International Software Product Line Conference - Volume 1, SPLC '12 ACM, New York, NY, USA, 2012, ISBN: 978-1-4503-1094-9, (Salvador, Brazil). Abstract | Links | BibTeX | Tags: Darwin Information Typing Architecture (DITA), Document Product Line (DPL), DPLfw, Feature Modeling (FM), Model-Driven Engineering (MDE), Software Product Lines (SPL), Variable Data Printing (VDP) @conference{Gomez:SPLC:2012,
title = {DPLfw: A Framework for Variable Content Document Generation},
author = {Abel G\'{o}mez and M. Carmen Penad\'{e}s and Jos\'{e} H. Can\'{o}s and Marcos R.S. Borges and Manuel Llavador},
doi = {10.1145/2362536.2362552},
isbn = {978-1-4503-1094-9},
year = {2012},
date = {2012-09-02},
booktitle = {Proceedings of the 16th International Software Product Line Conference - Volume 1},
pages = {96--105},
publisher = {ACM},
address = {New York, NY, USA},
series = {SPLC '12},
abstract = {Variable Data Printing solutions provide means to generate documents whose content varies according to some criteria. Since the early Mail Merge-like applications that generated letters with destination data taken from databases, different languages and frameworks have been developed with increasing levels of sophistication. Current tools allow the generation of highly customized documents that are variable not only in content, but also in layout. However, most frameworks are technology-oriented, and their use requires high skills in implementation-related tools (XML, XPATH, and others), which do not include support for domain-related tasks like identification of document content variability.
In this paper, we introduce DPLfw, a framework for variable content document generation based on Software Product Line Engineering principles. It is an implementation of the Document Product Lines (DPL) approach, which was defined with the aim of supporting variable content document generation from a domain-oriented point of view. DPL models document content variability in terms of features, and product line-like processes support the generation of documents. We define the DPLfw architecture, and illustrate its use in the definition of variable-content emergency plans.},
note = {Salvador, Brazil},
keywords = {Darwin Information Typing Architecture (DITA), Document Product Line (DPL), DPLfw, Feature Modeling (FM), Model-Driven Engineering (MDE), Software Product Lines (SPL), Variable Data Printing (VDP)},
pubstate = {published},
tppubtype = {conference}
}
ConferenceJosé H. Canós, M. Carmen Penadés, Abel Gómez, Marcos R.S. Borges SAGA: An Integrated Architecture for the Management of Advanced Emergency Plans ISCRAM 2012 Conference Proceedings – 9th International Conference on Information Systems for Crisis Response and Management, Simon Fraser University, Vancouver, Canada, 2012, ISBN: 978-0-86491-332-6. Abstract | Links | BibTeX | Tags: Emergency Management, Emergency Plans Development and Improvement, Information Systems (IS), SAGA @conference{Canos:ISCRAM:2012,
title = {SAGA: An Integrated Architecture for the Management of Advanced Emergency Plans},
author = {Jos\'{e} H. Can\'{o}s and M. Carmen Penad\'{e}s and Abel G\'{o}mez and Marcos R.S. Borges},
editor = {Leon Rothkrantz and Jozef Ristvej and Zeno Franco},
url = {http://idl.iscram.org/files/canos/2012/88_Canos_etal2012.pdf},
isbn = {978-0-86491-332-6},
year = {2012},
date = {2012-04-22},
booktitle = {ISCRAM 2012 Conference Proceedings \textendash 9th International Conference on Information Systems for Crisis Response and Management},
publisher = {Simon Fraser University},
address = {Vancouver, Canada},
abstract = {Despite the significant advances that software and hardware technologies have brought to the emergency management field, some islands remain where innovation has had little impact. Among them, emergency plan management is of particular relevance due to its key role in the direction of teams during responses. Aspects like coordination and collaboration are spread across plain text sentences, impeding automatic tool support to improve team performance. Moreover, administrative management of plans becomes a mere document management activity. In this paper, we present SAGA, an architecture that supports the full lifecycle of advanced emergency plan management. By advanced we mean plans that include new types of interaction, such as hypermedia, and advanced process definition languages to provide precise specifications of response procedures. SAGA provides all the actors involved in plan management with a number of tools supporting all the stages of the plan lifecycle, from its creation to its use in training drills or actual responses. It is intended to be instantiated in systems promoted by civil defense agencies, providing administrative support to plan management; additionally, editing tools for plan designers and tools for analysis and improvement of such plans by organizations are provided. Plan enactment facilities in emergency response are also integrated. To our knowledge, it is the very first proposal that covers all the aspects of plan management.},
keywords = {Emergency Management, Emergency Plans Development and Improvement, Information Systems (IS), SAGA},
pubstate = {published},
tppubtype = {conference}
}
@article{Cabello:IJIIDS:2012,
title = {SPL variability management, cardinality and types: an MDA approach},
author = {Mar\'{i}a Eugenia Cabello and Isidro Ramos and Jorge Rafael Guti\'{e}rrez and Abel G\'{o}mez and Rogelio Lim\'{o}n},
doi = {10.1504/IJIIDS.2012.045848},
issn = {1751-5866},
year = {2012},
date = {2012-03-14},
journal = {International Journal of Intelligent Information and Database Systems (IJIIDS)},
volume = {6},
number = {2},
pages = {129--153},
abstract = {This paper presents a baseline-oriented modelling (BOM) approach to develop families of software products. BOM is a generic solution implemented as a framework that automatically generates software applications using executable architectural models by means of software product line (SPL) techniques. In order to cope with the variability problem, BOM considers its cardinality and type and implements two solutions: the BOM-EAGER and the BOM-LAZY approaches. BOM has been designed following the model-driven architecture (MDA) standard: all the SPL software artefacts are models, and model transformations enact the SPL production plan.},
keywords = {Expert Systems, Feature Modeling (FM), Intelligent Information, Metamodels, Model Transformation (MT), Model-Driven Architecture (MDA), Models, Query/View/Transformation (QVT), Software Product Line Production Plan, Software Product Lines (SPL), Variability Management},
pubstate = {published},
tppubtype = {article}
}
2011
@conference{Gomez:ISD:2010,
title = {Automatic Tool Support for Cardinality-Based Feature Modeling with Model Constraints for Information Systems Development},
author = {Abel G\'{o}mez and Isidro Ramos},
editor = {Jaroslav Pokorny and Vaclav Repa and Karel Richta and Wita Wojtkowski and Henry Linger and Chris Barry and Michael Lang},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-isd-2010.pdf},
doi = {10.1007/978-1-4419-9790-6_22},
isbn = {978-1-4419-9790-6},
year = {2011},
date = {2011-09-01},
booktitle = {Information Systems Development: Business Systems and Services: Modeling and Development},
pages = {271--284},
publisher = {Springer New York},
address = {New York, NY},
abstract = {Feature Modeling is a technique that uses diagrams to characterize the variability of software product lines. The arrival of metamodeling frameworks in the Model-Driven Engineering field (MDE) has provided the necessary background to exploit these diagrams (called feature models) in information systems development processes. However, these frameworks have some limitations when they must deal with software artifacts at several abstraction layers. This paper presents a prototype that allows the developers to define cardinality-based feature models with complex model constraints. The prototype uses model transformations to build Domain Variability Models (DVM) that can be instantiated. This proposal permits us to take advantage of existing tools to validate model instances and finally to automatically generate code. Moreover, DVMs can play a key role in complex MDE processes automating the use of feature models in software product lines.},
note = {Prague, Czech Republic},
keywords = {Feature Modeling (FM), Model-Driven Engineering (MDE), Object Constraint Language (OCL), Query/View/Transformation (QVT), Software Product Lines (SPL)},
pubstate = {published},
tppubtype = {conference}
}
@article{Mora:IEEELatAm:2011,
title = {Software Generic Measurement Framework Based on MDA},
author = {Beatriz Mora and F\'{e}lix Garc\'{i}a and Francisco Ruiz and Mario Piattini and Artur Boronat and Abel G\'{o}mez and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
doi = {10.1109/TLA.2011.5876432},
issn = {1548-0992},
year = {2011},
date = {2011-03-01},
journal = {IEEE Latin America Transactions},
volume = {9},
number = {1},
pages = {864--871},
abstract = {Currently, in order to obtain high quality software products it is necessary to carry out good software process management in which measurement is a fundamental factor. Due to the great diversity of entities involved in software measurement, a consistent framework to integrate the different entities in the measurement process is required. In this paper the Software Measurement Framework (SMF) is presented, which supports the measurement of any type of software entity through the metamodels which depict them. In this framework, any software entity in any domain could be measured with a common Software Measurement metamodel and by means of QVT transformations. This work explains the three fundamental elements of the Software Measurement Framework (conceptual architecture, technological aspects and method). These elements have all been adapted to the MDE paradigm and to MDA technology, taking advantage of their benefits within the field of software measurement. Furthermore, an example which illustrates the framework's application to a concrete domain is shown.},
keywords = {Framework for the Modeling and Evaluation of Software Processes (FMESP), Measurement, Model-Driven Architecture (MDA), MOMENT, Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {article}
}
2010
@conference{Navarro:ISD:2009,
title = {MORPHEUS: A Supporting Tool for MDD},
author = {Elena Navarro and Abel G\'{o}mez and Patricio Letelier and Isidro Ramos},
editor = {William Wei Song and Shenghua Xu and Changxuan Wan and Yuansheng Zhong and Wita Wojtkowski and Gregory Wojtkowski and Henry Linger},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/navarro-isd-2009.pdf},
doi = {10.1007/978-1-4419-7355-9_22},
isbn = {978-1-4419-7355-9},
year = {2010},
date = {2010-10-30},
booktitle = {Information Systems Development: Asian Experiences},
pages = {255--267},
publisher = {Springer New York},
address = {New York, NY, USA},
abstract = {The model-driven development (MDD) approach is gaining more and more attention from both practitioners and academics because of its positive influence in terms of reliability and productivity in the software development process. ATRIUM is one of the current proposals following the MDD principles, as the development is driven by models, and a tool, MORPHEUS, supports both its activities and models. This tool provides facilities for modelling, metamodelling, and analysis and integrates an engine to execute transformations. In this work, this tool is presented describing both its architecture and its capabilities.},
note = {Nanchang, China},
keywords = {Model-Driven Development (MDD), Model-Driven Engineering (MDE), MORPHEUS, Requirements Engineering (RE), Software Architectures},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:SPLC:2010,
title = {BOM-Lazy: A Variability-Driven Framework for Software Applications Production Using Model Transformation Techniques},
author = {Abel G\'{o}mez and Mar\'{i}a Eugenia Cabello and Isidro Ramos},
editor = {Goetz Botterweck and Stan Jarzabek and Tomoji Kishi and Jaejoon Lee and Steve Livengood},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-splc-2010.pdf},
isbn = {978-1-86220-274-0},
year = {2010},
date = {2010-09-13},
booktitle = {Software Product Lines - 14th International Conference, SPLC 2010, Jeju Island, South Korea, September 13-17, 2010. Workshop Proceedings (Volume 2 : Workshops, Industrial Track, Doctoral Symposium, Demonstrations and Tools)},
pages = {139--146},
publisher = {Lancaster University},
address = {Lancaster, United Kingdom},
crossref = {DBLP:conf/splc/2010w},
abstract = {This paper presents Baseline Oriented Modeling\textendashLazy (BOM\textendashLazy): an approach to develop applications in a domain, Expert Systems, by means of Software Product Lines and model transformation techniques. A domain analysis has been done on the variability of Expert Systems that perform diagnostic tasks in order to determine the general and individual features (i.e., common and variant features) of these systems. The variability of our Software Product Line is managed by means of models and model transformations; and the production plan is automatically generated and driven by the variability model and the core assets (which take part in the reference architecture) of the domain, in order to produce the base architecture of the Software Product Line.},
keywords = {BOM-Lazy, Expert Systems, Feature Modeling (FM), Model Transformation (MT), Query/View/Transformation (QVT), Software Architectures, Software Product Lines (SPL), Variability Management},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:VaMoS:2010,
title = {Cardinality-Based Feature Modeling and Model-Driven Engineering: Fitting them Together},
author = {Abel G\'{o}mez and Isidro Ramos},
editor = {David Benavides and Don Batory and Paul Gr\"{u}nbacher},
url = {http://www.wi-inf.uni-duisburg-essen.de/FGFrank/download/icb/ICBReportNo37.pdf},
issn = {1860-2770},
year = {2010},
date = {2010-01-01},
booktitle = {Fourth International Workshop on Variability Modelling of Software-intensive Systems \textendash Proceedings},
number = {37},
publisher = {ICB Research Reports},
address = {Essen, Germany},
organization = {Institut f\"{u}r Informatik und Wirtschaftsinformatik (ICB)},
series = {VaMoS 2010},
abstract = {Feature Modeling is a technique which uses a specific visual notation to characterize the variability of product lines by means of diagrams. In this sense, the arrival of metamodeling frameworks in the Model-Driven Engineering field has provided the necessary background to exploit these diagrams (called feature models) in complex software development processes. However, these frameworks (such as the Eclipse Modeling Framework) have some limitations when they must deal with software artifacts at several abstraction layers. This paper presents a prototype that allows the developers to define cardinality-based feature models with constraints. These models are automatically translated to Domain Variability Models (DVM) by means of model-to-model transformations. Thus, such models can be instantiated, and each different instantiation is a configuration of the feature model. This approach allows us to take advantage of existing generative programming tools, query languages and validation formalisms; and, what is more, DVMs can play a key role in MDE processes as they can be used as inputs in complex model transformations.},
note = {Linz, Austria},
keywords = {Feature Modeling (FM), Model-Driven Architecture (MDA), Model-Driven Engineering (MDE), Object Constraint Language (OCL), Query/View/Transformation (QVT), Software Product Lines (SPL), Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {conference}
}
2009
@conference{Gomez:DSDM:2009,
title = {BOM\textendashLazy: gesti\'{o}n de la variabilidad en el desarrollo de Sistemas Expertos mediante t\'{e}cnicas de MDA},
author = {Mar\'{i}a G\'{o}mez and Abel G\'{o}mez and Mar\'{i}a Eugenia Cabello and Isidro Ramos},
editor = {Orlando Avila-Garc\'{i}a and Vicente Pelechano and Jos\'{e} Ra\'{u}l Romero},
url = {https://www.sistedes.es/files/actas-talleres-JISBD/Vol-3/No-2/DSDM09.pdf},
issn = {1988\textendash3455},
year = {2009},
date = {2009-09-08},
booktitle = {Actas del VI Taller sobre Desarrollo de Software Dirigido por Modelos (DSDM 2009), junto a XIV Jornadas de Ingenier\'{i}a de Software y Bases de Datos (JISBD 2009)},
volume = {3},
number = {9},
pages = {91--100},
publisher = {SISTEDES},
abstract = {Este documento presenta BOM\textendashLazy, una aproximaci\'{o}n para desarrollar Sistemas Expertos mediante la utilizaci\'{o}n de t\'{e}cnicas de Desarrollo de Software Dirigido por Modelos y L\'{i}neas de Producto Software. Se ha realizado un estudio sobre la variabilidad de los Sistemas Expertos para determinar las caracter\'{i}sticas generales y particulares de dicho dominio. La variabilidad de tal dominio se gestiona mediante una transformaci\'{o}n de modelos que permite obtener autom\'{a}ticamente diferentes arquitecturas base a partir de la arquitectura gen\'{e}rica de la L\'{i}nea de Productos Software. },
note = {San Sebasti\'{a}n, Spain},
keywords = {BOM-Lazy, Expert Systems, Feature Modeling (FM), Model Transformation (MT), Model-Driven Architecture (MDA), Query/View/Transformation (QVT), Software Architectures, Software Product Lines (SPL), Variability Management},
pubstate = {published},
tppubtype = {conference}
}
@conference{Cabello:ACIIDS:2009,
title = {Baseline-Oriented Modeling: An MDA Approach Based on Software Product Lines for the Expert Systems Development},
author = {Mar\'{i}a Eugenia Cabello and Isidro Ramos and Abel G\'{o}mez and Rogelio Lim\'{o}n},
editor = {Ngoc Thanh Nguyen and Huynh Phan Nguyen and Adam Grzech},
doi = {10.1109/ACIIDS.2009.15},
isbn = {978-0-7695-3580-7},
year = {2009},
date = {2009-04-01},
booktitle = {Intelligent Information and Database Systems, 2009. ACIIDS 2009. First Asian Conference on},
pages = {208--213},
publisher = {IEEE Computer Society},
abstract = {This paper presents our baseline oriented modeling (BOM) approach. BOM is a framework that automatically generates software applications as PRISMA architectural models using model transformations and software product line techniques. We follow the model-driven architecture initiative building domain models which are automatically transformed into platform independent models, and then compiled to an executable application (i.e. platform specific models). In order to illustrate BOM, we focus on a specific domain: the diagnostic expert systems.},
note = {Dong Hoi, Vietnam},
keywords = {BOM-Lazy, Expert Systems, Feature Modeling (FM), Model Transformation (MT), Query/View/Transformation (QVT), Software Product Lines (SPL)},
pubstate = {published},
tppubtype = {conference}
}
2008
@article{Gomez:IEEELatAm:2008,
title = {JISBD2007-03: Biological Data Processing using Model Driven Engineering},
author = {Abel G\'{o}mez and Artur Boronat and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos and Claudia T\"{a}ubner and Silke Eckstein},
doi = {10.1109/TLA.2008.4815285},
issn = {1548-0992},
year = {2008},
date = {2008-08-01},
journal = {IEEE Latin America Transactions},
volume = {6},
number = {4},
pages = {324--331},
abstract = {This paper shows how model-driven software development (MDSD) can be applied in the bioinformatics field since biological data structures can be easily expressed by means of models. The existence of several heterogeneous data sources is usual in the bioinformatics context. In order to validate the information stored in these data sources, several formalisms and simulation tools have been adopted. The process of importing data from the source databases and introducing it in the simulation tools is usually done by hand. This work describes how to overcome this drawback by applying MDSD techniques (e.g. model transformations). Such techniques allow us to automate the data migration process between source databases and simulation tools, making the transformation process independent of the data persistence format, obtaining more modular tools and generating traceability information automatically.},
keywords = {Bioinformatics, Data Migration, Intergenomics, Model Driven Software Development (MDSD), Petri net (PN), Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {article}
}
@article{Mora:IEEELatAm:2008,
title = {JISBD2007-08: Software generic measurement framework based on MDA},
author = {Beatriz Mora and F\'{e}lix Garc\'{i}a and Francisco Ruiz and Mario Piattini and Artur Boronat and Abel G\'{o}mez and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
doi = {10.1109/TLA.2008.4815290},
issn = {1548-0992},
year = {2008},
date = {2008-08-01},
journal = {IEEE Latin America Transactions},
volume = {6},
number = {4},
pages = {363--370},
abstract = {Currently, in order to obtain high quality software products it is necessary to carry out good software process management in which measurement is a fundamental factor. Due to the great diversity of entities involved in software measurement, a consistent framework to integrate the different entities in the measurement process is required. In this paper the software measurement framework (SMF) is presented, which supports the measurement of any type of software entity through the metamodels which depict them. In this framework, any software entity in any domain could be measured with a common software measurement metamodel and by means of QVT transformations. This work explains the three fundamental elements of the software measurement framework (conceptual architecture, technological aspects and method). These elements have all been adapted to the MDE paradigm and to MDA technology, taking advantage of their benefits within the field of software measurement. Furthermore, an example which illustrates the framework's application to a concrete domain is shown.},
keywords = {Framework for the Modeling and Evaluation of Software Processes (FMESP), Measurement, Model-Driven Architecture (MDA), MOMENT, Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {article}
}
@conference{Gomez:JNB:2008,
title = {Biological Data Transformation in Pathway Simulation},
author = {Abel G\'{o}mez and Artur Boronat and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-jnb-2008.pdf},
year = {2008},
date = {2008-02-15},
booktitle = {Actas de las VIII Jornadas Nacionales de Bioinform\'{a}tica (JNB 2008)},
publisher = {Red Tem\'{a}tica Nacional de Bioinform\'{a}tica},
address = {Valencia, Spain},
abstract = {This work shows how Model-Driven Software Development (MDSD) can be applied in the bioinformatics field since biological data structures can be easily expressed by means of models. The existence of several heterogeneous data sources is usual in the bioinformatics context. In order to validate the information stored in these data sources, several formalisms and simulation tools have been adopted. The process of importing data from the source databases and introducing it in the simulation tools is usually done by hand. This work describes how to overcome this drawback by applying MDSD techniques (e.g. model transformations). Such techniques allow us to automate the data migration process between source databases and simulation tools, making the transformation process independent of the data persistence format, obtaining more modular tools and generating traceability information automatically.},
keywords = {Bioinformatics, Data Migration, Intergenomics, Petri net (PN), Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Mora:ICEIS:2008,
title = {Software Measurement by Using QVT Transformations in an MDA Context},
author = {Beatriz Mora and F\'{e}lix Garc\'{i}a and Francisco Ruiz and Mario Piattini and Artur Boronat and Abel G\'{o}mez and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
editor = {Jos\'{e} Cordeiro and Joaquim Filipe},
doi = {10.5220/0001677901170124},
isbn = {978-989-8111-36-4},
year = {2008},
date = {2008-01-01},
booktitle = {Proceedings of the Tenth International Conference on Enterprise Information Systems (ICEIS 2008)},
volume = {1},
pages = {117--124},
publisher = {SciTePress},
organization = {INSTICC},
abstract = {At present the objective of obtaining quality software products has led to the necessity of carrying out good software process management, in which measurement is a fundamental factor. Due to the great diversity of entities involved in software measurement, a consistent framework is necessary to integrate the different entities in the measurement process. In this work a Software Measurement Framework (SMF) is presented to measure any type of software entity. In this framework, any software entity in any domain could be measured with a common Software Measurement metamodel and QVT transformations. This work explains the three fundamental elements of the Software Measurement Framework (conceptual architecture, technological aspects and method). These elements have all been adapted to the MDE paradigm and to MDA technology, taking advantage of their benefits within the field of software measurement. Furthermore, an example which illustrates the framework's application to a concrete domain is shown.},
note = {Barcelona, Spain},
keywords = {Framework for the Modeling and Evaluation of Software Processes (FMESP), Measurement, Model-Driven Architecture (MDA), MOMENT, Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {conference}
}
2007
@conference{Gomez:ATEM:2007,
title = {Biological Data Migration Using a Model-Driven Approach},
author = {Abel G\'{o}mez and Jos\'{e} \'{A}. Cars\'{i} and Artur Boronat and Isidro Ramos and Claudia T\"{a}ubner and Silke Eckstein},
editor = {Jean-Marie Favre and Dragan Gasevic and Ralf L\"{a}mmel and Andreas Winter},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-atem-2007.pdf},
issn = {0931-9972},
year = {2007},
date = {2007-09-30},
booktitle = {Proceedings of the 4th International Workshop on Language Engineering (ateM 2007)},
number = {4/2007},
pages = {150--164},
publisher = {Institut f\"{u}r Informatik},
organization = {Johannes Gutenberg-Universit\"{a}t Mainz},
series = {Mainzer Informatik-Berichte},
abstract = {This paper shows how Model-Driven Software Development (MDSD) can be applied in the bioinformatics field since biological data structures can be easily expressed by means of models. The existence of several heterogeneous data sources is usual in the bioinformatics context. In order to validate the information stored in these data sources, several formalisms and simulation tools have been adopted. The process of importing data from the source databases and introducing it in the simulation tools is usually done by hand. This work describes how to overcome this drawback by applying MDSD techniques (e.g. model transformations). Such techniques allow us to automate the data migration process between source databases and simulation tools, making the transformation process independent of the data persistence format, obtaining more modular tools and generating traceability information automatically. },
note = {Nashville, TN, USA},
keywords = {Bioinformatics, Data Migration, Intergenomics, Model Driven Software Development (MDSD), Model Transformation (MT), Model-Driven Engineering (MDE), Petri net (PN), Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:JISBD:demo:2007,
title = {MOMENT CASE: Un prototipo de herramienta CASE},
author = {Abel G\'{o}mez and Artur Boronat and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
editor = {Xavier Franch},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-jisbd-demo-2007.pdf
https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-jisbd-poster-2007.pdf},
isbn = {978-84-9732-595-0},
year = {2007},
date = {2007-09-11},
booktitle = {Actas de las XII Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2007), Zaragoza, Spain, September 11-14, 2007.},
pages = {389--390},
publisher = {Thomson Editorial},
abstract = {MOMENT CASE es un prototipo que mediante un proceso de desarrollo de software dirigido por modelos permite generar el c\'{o}digo SQL necesario para la creaci\'{o}n de una base de datos de un sistema de informaci\'{o}n, partiendo de la especificaci\'{o}n de \'{e}ste mediante un diagrama de clases UML, y mediante transformaciones de modelos sucesivas. La herramienta proporciona adem\'{a}s capacidades de trazabilidad y generaci\'{o}n autom\'{a}tica de documentaci\'{o}n.
Como motor para las transformaciones emplea la herramienta MOMENT (http://moment.dsic.upv.es.), que usa como back-end un potente sistema de reescritura de t\'{e}rminos. MOMENT CASE constituye un caso de estudio en el que convergen un marco formal de gesti\'{o}n de modelos y una herramienta de modelado industrial dando soporte a est\'{a}ndares abiertos como UML.
},
keywords = {Computer Aided Design (CASE), DocBook, Maude, Model-Driven Engineering (MDE), MOMENT, Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Mora:JISBD:2007,
title = {Marco de Trabajo basado en MDA para la medici\'{o}n Gen\'{e}rica del Software},
author = {Beatriz Mora and F\'{e}lix Garc\'{i}a and Francisco Ruiz and Mario Piattini and Artur Boronat and Abel G\'{o}mez and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
editor = {Xavier Franch},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/mora-jisbd-2007.pdf},
isbn = {978-84-9732-595-0},
year = {2007},
date = {2007-09-11},
booktitle = {Actas de las XII Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2007), Zaragoza, Spain, September 11-14, 2007.},
pages = {211--220},
publisher = {Thomson Editorial},
abstract = {Actualmente, con el objetivo de obtener productos software de calidad es necesario llevar a cabo una buena gesti\'{o}n de los procesos software donde la medici\'{o}n de los procesos se convierte en un factor fundamental. Debido a la gran variedad de entidades candidatas para la medici\'{o}n, se considera necesario un marco consistente para integrar la medici\'{o}n de los distintos tipos de entidades. En este trabajo se presenta la propuesta de un entorno gen\'{e}rico para la medici\'{o}n de cualquier entidad software a partir de los metamodelos que las representan. Partiendo de un metamodelo com\'{u}n de medici\'{o}n, y mediante transformaciones QVT, se puede llevar a cabo la medici\'{o}n de un modelo de dominio cualquiera. En el trabajo se explica como se ha llevado a cabo la propuesta: por un lado, se ha trabajado con la herramienta MOMENT, que proporciona el soporte necesario para la gesti\'{o}n autom\'{a}tica de modelos de acuerdo a MDE y a la arquitectura MDA, por otro lado, se ha adaptado FMESP a MDA. FMESP es un marco de trabajo para la integraci\'{o}n del modelado y de la medici\'{o}n de procesos software, que sirve de base conceptual y tecnol\'{o}gica para su mejora. Adem\'{a}s, se muestran las etapas a seguir para conseguir la medici\'{o}n gen\'{e}rica basada en MDA, y un caso de ejemplo en el dominio de bases de datos relacionales.},
keywords = {Framework for the Modeling and Evaluation of Software Processes (FMESP), Measurement, Model-Driven Architecture (MDA), MOMENT, Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gomez:JISBD:2007,
title = {Recuperaci\'{o}n y procesado de datos biol\'{o}gicos mediante Ingenier\'{i}a Dirigida por Modelos},
author = {Abel G\'{o}mez and Artur Boronat and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos and Claudia T\"{a}ubner and Silke Eckstein},
editor = {Xavier Franch},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-jisbd-2007.pdf},
isbn = {978-84-9732-595-0},
year = {2007},
date = {2007-09-11},
booktitle = {Actas de las XII Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2007), Zaragoza, Spain, September 11-14, 2007.},
pages = {275--284},
publisher = {Thomson Editorial},
abstract = {Este art\'{i}culo muestra c\'{o}mo el proceso de desarrollo de software dirigido por modelos (DSDM) es aplicable al campo de la bioinform\'{a}tica ya que la estructura de los datos biol\'{o}gicos se puede expresar mediante modelos de forma muy natural. En el contexto de la bioinform\'{a}tica es com\'{u}n la existencia de fuentes de datos (rellenadas de forma manual) heterog\'{e}neas. Con el objetivo de validar la informaci\'{o}n de estas fuentes de datos, se han adaptado diversos formalismos y herramientas de simulaci\'{o}n. El proceso de introducci\'{o}n de datos ---obtenidos de estas bases de datos--- en las herramientas de validaci\'{o}n se realiza tradicionalmente de forma manual. Este trabajo describe c\'{o}mo se ha resuelto este problema siguiendo una metodolog\'{i}a de DSDM empleando transformaciones de modelos. Esto permite automatizar el proceso de migraci\'{o}n de datos, obtener herramientas modulares, aislar el proceso de transformaci\'{o}n de datos de los formatos de persistencia de estos, y disponer de informaci\'{o}n de trazabilidad.},
keywords = {Bioinformatics, Data Migration, Intergenomics, Model Driven Software Development (MDSD), Model-Driven Development (MDD), Query/View/Transformation (QVT)},
pubstate = {published},
tppubtype = {conference}
}
@conference{Boronat:WRLA:2006,
title = {MOMENT-OCL: Algebraic Specifications of OCL 2.0 within the Eclipse Modeling Framework},
author = {Artur Boronat and Joaqu\'{i}n Oriente and Abel G\'{o}mez and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
doi = {10.1016/j.entcs.2007.06.018},
issn = {1571-0661},
year = {2007},
date = {2007-07-28},
booktitle = {Proceedings of the 6th International Workshop on Rewriting Logic and its Applications (WRLA 2006)},
volume = {176},
number = {4},
pages = {233--247},
publisher = {Elsevier},
chapter = {Rewriting Logic Systems},
series = {Electronic Notes in Theoretical Computer Science},
abstract = {Model-Driven Development is a field in Software Engineering that, for several years, has been representing software artifacts as models in order to improve productivity, quality, and economy. Models provide a more abstract description of a software artifact than the final code of the application. Interest in this field has grown in software development companies such as the Model-Driven Architecture (MDA), supported by OMG, and the Software Factories, supported by Microsoft, ensuring a model-driven technology stock for the near future.
Model-Driven Development has evolved to the Model-Driven Engineering field, where not only design and code generation tasks are involved, but also traceability, model management, meta-modeling issues, model interchange and persistence, etc. To fulfill these requirements, model transformations and model queries are relevant issues that must be addressed. In the MDA context, they are handled from an open-standard point of view. The standard Meta-Object Facilities (MOF) provides a way to define meta-models. The standard proposal Query/Views/Transformations (QVT) indicates how to provide support for both transformations and queries. In QVT, while new languages are provided for model transformation, the Object Constraint Language (OCL) remains the best choice for queries.
OCL is a textual language that is defined as a standard “add-on” to the UML standard. It is used to define constraints and queries on UML models, allowing the definition of more precise and more useful models. It can also be used to provide support for meta-modeling (MOF-based and Domain Specific Meta-modeling), model transformation, Aspect-Oriented Modeling, support for model testing and simulation, ontology development and validation for the Semantic Web, among others. Despite its many advantages, while there is wide acceptance for UML design in CASE tools, OCL lacks a well-suited technological support.
In this demonstration, we present the MOMENT-OCL tool, which integrates an algebraic specification of the operational semantics of part of the OCL 2.0 standard into the Eclipse Modeling Framework (EMF). EMF is a modeling environment that is plugged into the Eclipse platform and that provides a sort of implementation of the MOF. EMF enables the automatic import of software artifacts from heterogeneous data sources: UML models, relational schemas, and XML schemas. In MOMENT-OCL, OCL queries and invariants can be executed over instances of EMF models in Maude. An interesting feature of this algebraic specification of the OCL 2.0 is the use of parameterization to reuse the OCL specification for any metamodel/model and the simulation of higher-order functions for the sake of the reuse of collection operator definitions.},
note = {Vienna, Austria},
keywords = {Algebraic Specifications, Maude, Model-Driven Development (MDD), MOMENT, Object Constraint Language (OCL)},
pubstate = {published},
tppubtype = {conference}
}
2006
@conference{Gomez:DYNAMICA:2006,
title = {MOMENT: una herramienta de Gesti\'{o}n de Modelos aplicada a la Ingenier\'{i}a Dirigida por Modelos},
author = {Abel G\'{o}mez and Artur Boronat and Pascual Queralt and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
editor = {Jennifer P\'{e}rez and Manuel Llavador and Crist\'{o}bal Costa and Nour Ali},
url = {http://issi.dsic.upv.es/projects/DYNAMICA/jornadas06/actas/actasDYNAMICA06.pdf},
isbn = {84-690-2623-2},
year = {2006},
date = {2006-11-23},
booktitle = {Actas de las V Jornadas de Trabajo DYNAMICA},
pages = {141--142},
publisher = {Universitat Polit\`{e}cnica de Val\`{e}ncia},
address = {Valencia, Spain},
organization = {Ingenier\'{i}a del Software y Sistemas de Informaci\'{o}n research group},
abstract = {La Ingenier\'{i}a Dirigida por Modelos es un campo en la Ingenier\'{i}a del Software que, durante a\~{n}os, ha representado los artefactos software como modelos con el objetivo de incrementar la productividad, calidad, y reducir los gastos en el proceso de desarrollo de software. Los modelos proporcionan una descripci\'{o}n m\'{a}s abstracta de un artefacto software que el c\'{o}digo final de la aplicaci\'{o}n. Las compa\~{n}\'{i}as de desarrollo de software han aumentado su inter\'{e}s en este campo. Como ejemplo encontramos las aproximaciones Model Driven Architecture, apoyada por la OMG, as\'{i} como las Software Factories, apoyadas en este caso por Microsoft.
El Desarrollo Dirigido por Modelos ha evolucionado al campo de la Ingenier\'{i}a Dirigida por Modelos. En \'{e}l, no s\'{o}lo las tareas de dise\~{n}o y generaci\'{o}n de c\'{o}digo est\'{a}n involucradas, sino que tambi\'{e}n se incluyen las capacidades de trazabilidad, gesti\'{o}n de modelos, tareas de meta-modelado, intercambio y persistencia de modelos, etc. Para poder abordar estas tareas, las operaciones entre modelos, transformaciones, y consultas sobre ellos son problemas relevantes que deben ser resueltos. En el contexto de MDA se abordan desde el punto de vista de los est\'{a}ndares abiertos. En este caso, el est\'{a}ndar Meta Object Facility (MOF) proporciona un mecanismo para definir metamodelos. Por su parte, el est\'{a}ndar Query/Views/Transformations (QVT) indica c\'{o}mo proporcionar soporte tanto para transformaciones como para consultas. A diferencia de otros lenguajes nuevos, QVT se apoya en el ya existente lenguaje Object Constraint Language (OCL) para realizar las consultas sobre los artefactos software. Adem\'{a}s, dentro de la ingenier\'{i}a dirigida por modelos se ha propuesto una nueva disciplina denominada Gesti\'{o}n de Modelos. \'{E}sta considera los modelos y las correspondencias entre ellos como entidades de primer orden, proporcionando un conjunto de operadores independientes de metamodelo y basados en teor\'{i}a de conjuntos para tratar con ellos (Merge, Cross, Diff, ModelGen, etc.). Estos operadores proporcionan una soluci\'{o}n reutilizable y componible para las tareas descritas anteriormente.
En esta demo presentamos la herramienta MOMENT, que da soporte a todas estas aproximaciones surgidas dentro de la Ingenier\'{i}a por modelos. MOMENT proporciona un soporte algebraico a los operadores de gesti\'{o}n de modelos, as\'{i} como a las tareas de transformaci\'{o}n y consulta de modelos mediante un eficiente sistema de reescritura de t\'{e}rminos \textemdashMaude\textemdash y desde un entorno de modelado industrial \textemdashEclipse Modeling Framework (EMF)\textemdash. EMF puede ser visto como una implementaci\'{o}n del est\'{a}ndar MOF, y permite la importaci\'{o}n autom\'{a}tica de artefactos software desde or\'{i}genes de datos heterog\'{e}neos: modelos UML, esquemas relacionales, esquemas XML, etc. En este sentido MOMENT aprovecha las capacidades de modularidad y parametrizaci\'{o}n de Maude para proporcionar un entorno de gesti\'{o}n, transformaci\'{o}n y consulta de modelos de forma gen\'{e}rica e independiente de metamodelo.
},
keywords = {Algebraic Specifications, Maude, Model Management, Model Transformation (MT), Model-Driven Engineering (MDE), MOMENT},
pubstate = {published},
tppubtype = {conference}
}
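The abstract above describes Model Management as a set of metamodel-independent, set-theory-based operators (Merge, Cross, Diff, ModelGen). As a rough illustration of what such operators compute, here is a minimal, hypothetical Python sketch over models represented as plain dictionaries of element identifiers; this is only an analogy, not MOMENT's actual algebraic implementation or API.

```python
# Hypothetical sketch of set-based model-management operators.
# A "model" is simplified to a dict: element-id -> properties.

def merge(model_a, model_b):
    """Union of two models; on an id clash, properties from A win."""
    result = dict(model_b)
    result.update(model_a)
    return result

def diff(model_a, model_b):
    """Elements present in A but absent from B."""
    return {k: v for k, v in model_a.items() if k not in model_b}

def cross(model_a, model_b):
    """Elements common to both models (intersection by id)."""
    return {k: model_a[k] for k in model_a if k in model_b}

# Toy example: a UML-like model and a relational-like model.
uml = {"Person": {"kind": "class"}, "name": {"kind": "attr"}}
db  = {"Person": {"kind": "table"}, "age": {"kind": "column"}}

print(sorted(merge(uml, db)))  # ['Person', 'age', 'name']
print(sorted(diff(uml, db)))   # ['name']
print(sorted(cross(uml, db)))  # ['Person']
```

Because the operators are ordinary set operations on element identifiers, they compose freely, which is the reusability property the abstract emphasizes.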
@conference{DBLP:conf/jisbd/GomezBHCR06,
title = {Definici\'{o}n de operaciones complejas con un lenguaje espec\'{i}fico de dominio en Gesti\'{o}n de Modelos},
author = {Abel G\'{o}mez and Artur Boronat and Luis Hoyos and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
editor = {Jos\'{e} Riquelme and Pere Botella},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-jisbd-2006.pdf},
isbn = {84-95999-99-4},
year = {2006},
date = {2006-10-03},
booktitle = {XI Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2006), Octubre 3-6, 2006, Sitges, Barcelona, Spain.},
pages = {215--224},
publisher = {CIMNE},
address = {Barcelona, Spain},
abstract = {La Ingenier\'{i}a dirigida por Modelos permite incrementar la productividad en el proceso de desarrollo software, obteniendo herramientas m\'{a}s interoperables y sencillas de mantener mediante t\'{e}cnicas que elevan el nivel de abstracci\'{o}n. En esta direcci\'{o}n ha aparecido la disciplina «Gesti\'{o}n de Modelos», que proporciona un conjunto de operadores gen\'{e}ricos basados en teor\'{i}a de conjuntos para tratar con modelos. Esta aproximaci\'{o}n muestra su potencia en las capacidades de composicionalidad de los operadores que proporciona. Este art\'{i}culo describe c\'{o}mo una herramienta del marco de la Gesti\'{o}n de Modelos proporciona soporte a la definici\'{o}n de operadores complejos mediante un lenguaje espec\'{i}fico de dominio.},
note = {Sitges, Barcelona, Spain},
keywords = {Domain-Specific Languages (DSLs), Maude, Model Management, Model-Driven Engineering (MDE), MOMENT},
pubstate = {published},
tppubtype = {conference}
}
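The entry above describes defining complex model-management operators by composing generic ones through a domain-specific language. A hypothetical sketch of that idea as plain function pipelining in Python (the paper's actual DSL syntax is not reproduced here):

```python
# Hypothetical sketch: building a "complex" operator by composing
# simpler unary model operators, in the spirit of the DSL described
# above. A "model" is simplified to a dict: element-id -> value.

def compose(*ops):
    """Left-to-right composition of unary model operators."""
    def composed(model):
        for op in ops:
            model = op(model)
        return model
    return composed

# Two illustrative primitive operators (names are made up).
drop_private = lambda m: {k: v for k, v in m.items()
                          if not k.startswith("_")}
upper_ids    = lambda m: {k.upper(): v for k, v in m.items()}

# A complex operator defined purely by composition.
normalize = compose(drop_private, upper_ids)
print(normalize({"_tmp": 1, "Person": 2}))  # {'PERSON': 2}
```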
@conference{Boronat:ECMDA-FA:2006,
title = {An Algebraic Specification of Generic OCL Queries Within the Eclipse Modeling Framework},
author = {Artur Boronat and Joaqu\'{i}n Oriente and Abel G\'{o}mez and Isidro Ramos and Jos\'{e} \'{A}. Cars\'{i}},
editor = {Arend Rensink and Jos Warmer},
doi = {10.1007/11787044_24},
isbn = {978-3-540-35910-4},
year = {2006},
date = {2006-07-10},
booktitle = {Model Driven Architecture -- Foundations and Applications: Second European Conference, ECMDA-FA 2006, Bilbao, Spain, July 10-13, 2006. Proceedings},
volume = {4066},
pages = {316--330},
publisher = {Springer},
address = {Berlin, Heidelberg},
series = {Lecture Notes in Computer Science},
abstract = {In the Model-Driven Architecture initiative, software artefacts are represented by means of models that can be manipulated. Such manipulations can be performed by means of transformations and queries. The standard Query/Views/Transformations and the standard language OCL are becoming suitable languages for these purposes. This paper presents an algebraic specification of the operational semantics of part of the OCL 2.0 standard, focusing on queries. This algebraic specification of OCL can be used within the Eclipse Modeling Framework to represent models in an algebraic setting and to perform queries or transformations over software artefacts that can be represented as models: model instances, models, metamodels, etc. In addition, a prototype for executing such OCL queries and invariants over EMF models is presented. This prototype provides a compiler of the OCL standard language that targets an algebraic specification of OCL, which runs on the term rewriting system Maude.},
keywords = {Algebraic Specifications, Maude, Model-Driven Architecture (MDA), MOMENT, Object Constraint Language (OCL)},
pubstate = {published},
tppubtype = {conference}
}
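The paper above gives an algebraic semantics for OCL 2.0 queries and compiles them to Maude. To make concrete what such queries compute, here is a minimal Python sketch of the semantics of two standard OCL collection operations, `select` and `collect`; this only illustrates the query semantics, not the paper's compiler or its Maude specification.

```python
# Hypothetical sketch of two OCL 2.0 collection operations as
# comprehensions. OCL's  c->select(x | p(x))  keeps the elements
# satisfying p;  c->collect(x | e(x))  maps e over the collection.

def ocl_select(collection, predicate):
    """OCL: collection->select(x | predicate(x))"""
    return [x for x in collection if predicate(x)]

def ocl_collect(collection, expr):
    """OCL: collection->collect(x | expr(x))"""
    return [expr(x) for x in collection]

# Toy model: instances of a Person class as dicts.
persons = [{"name": "Ada", "age": 36}, {"name": "Bob", "age": 17}]

# OCL: Person.allInstances()->select(p | p.age >= 18)->collect(p | p.name)
adults = ocl_select(persons, lambda p: p["age"] >= 18)
names  = ocl_collect(adults, lambda p: p["name"])
print(names)  # ['Ada']
```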
2005
@conference{Gomez:DYNAMICA:2005,
title = {Integraci\'{o}n de un sistema de reescritura de t\'{e}rminos en una herramienta de desarrollo software industrial},
author = {Abel G\'{o}mez and Artur Boronat and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos},
url = {https://abel.gomez.llana.me/wp-content/uploads/2017/11/gomez-dynamica-2005.pdf},
year = {2005},
date = {2005-11-18},
booktitle = {Actas de las IV Jornadas de Trabajo DYNAMICA},
pages = {87--99},
address = {Archena, Murcia, Espa\~{n}a},
abstract = {Los m\'{e}todos formales proporcionan buenas propiedades para abordar problemas de Ingenier\'{i}a del Software: validaci\'{o}n de sistemas, integraci\'{o}n de artefactos software, etc. En este sentido, diversas han sido las aproximaciones formales para la resoluci\'{o}n de problemas en Ingenier\'{i}a de Modelos, por ejemplo, mediante teor\'{i}a de grafos, o reescritura de t\'{e}rminos. En esta \'{u}ltima aproximaci\'{o}n encontramos Maude: un potente sistema basado en l\'{o}gica ecuacional y l\'{o}gica de reescritura. A pesar de todo esto, debido a prejuicios o malas experiencias, las herramientas industriales no suelen apoyarse en estos m\'{e}todos, abordando la resoluci\'{o}n de los problemas de forma ad-hoc. En este contexto se ha desarrollado un conjunto de herramientas que integran el sistema formal Maude en un entorno de desarrollo industrial como es Eclipse. Este art\'{i}culo muestra las caracter\'{i}sticas de estas herramientas y las posibilidades que ofrecen al usuario y futuros desarrolladores.},
keywords = {Application Programming Interface (API), Maude, MOMENT},
pubstate = {published},
tppubtype = {conference}
}
@conference{Boronat:JISBD:2005,
title = {Del m\'{e}todo formal a la aplicaci\'{o}n industrial en Gesti\'{o}n de Modelos: Maude aplicado a Eclipse Modeling Framework},
author = {Artur Boronat and Jos\'{e} Iborra and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos and Abel G\'{o}mez},
editor = {Jos\'{e} Ambrosio Toval \'{A}lvarez and Juan Hern\'{a}ndez N\'{u}\~{n}ez},
isbn = {84-9732-434-X},
year = {2005},
date = {2005-09-14},
booktitle = {Actas de las X Jornadas de Ingenier\'{i}a del Software y Bases de Datos (JISBD 2005), September 14-16, 2005, Granada, Spain},
pages = {253--258},
publisher = {Thomson},
abstract = {Los m\'{e}todos formales proporcionan buenas propiedades para abordar problemas en Ingenier\'{i}a del Software. Sin embargo, en muchos casos no se suelen aplicar en un \'{a}mbito industrial debido a prejuicios o malas experiencias. En este art\'{i}culo, se presenta un caso de \'{e}xito de la aplicaci\'{o}n de especificaciones algebraicas en un entorno industrial de modelado para dar soporte a la Gesti\'{o}n de Modelos. Esta disciplina es una nueva tendencia dentro de la Ingenier\'{i}a de Modelos que trata a los modelos como ciudadanos de primer orden y que proporciona una serie de operadores gen\'{e}ricos para manipularlos. Se ha especificado algebraicamente un conjunto de operadores de este tipo utilizando el lenguaje Maude. Estos operadores se utilizan de forma visual desde Eclipse Modeling Framework (EMF). En este art\'{i}culo se presenta el soporte que se ofrece para la interoperabilidad entre Maude y EMF en una herramienta de gesti\'{o}n de modelos.},
keywords = {Algebraic Specifications, Maude, Model Management, MOMENT},
pubstate = {published},
tppubtype = {conference}
}
@conference{Boronat:DSDM:2005,
title = {Utilizaci\'{o}n de Maude desde Eclipse Modeling Framework para la Gesti\'{o}n de Modelos},
author = {Artur Boronat and Jos\'{e} Iborra and Jos\'{e} \'{A}. Cars\'{i} and Isidro Ramos and Abel G\'{o}mez},
editor = {Antonio Est\'{e}vez and Vicente Pelechano and Antonio Vallecillo},
url = {http://ceur-ws.org/Vol-157/paper05.pdf},
issn = {1613-0073},
year = {2005},
date = {2005-09-13},
booktitle = {Actas del II Taller sobre Desarrollo Dirigido por Modelos. MDA y Aplicaciones. (DSDM '05). Granada, Espa\~{n}a, Septiembre 13, 2005.},
volume = {157},
publisher = {CEUR Workshop Proceedings},
address = {Granada, Spain},
abstract = {Los m\'{e}todos formales proporcionan buenas propiedades para abordar problemas en Ingenier\'{i}a del Software. Sin embargo, en muchos casos no se suelen aplicar en un \'{a}mbito industrial debido a prejuicios o malas experiencias. En este art\'{i}culo, se presenta un caso de \'{e}xito de la aplicaci\'{o}n de especificaciones algebraicas en un entorno industrial de modelado para dar soporte a la Gesti\'{o}n de Modelos. Esta disciplina es una nueva tendencia dentro de la Ingenier\'{i}a de Modelos que trata a los modelos como ciudadanos de primer orden y que proporciona una serie de operadores gen\'{e}ricos para manipularlos. Se ha especificado algebraicamente un conjunto de operadores de este tipo utilizando el lenguaje Maude. Estos operadores se utilizan de forma visual desde Eclipse Modeling Framework (EMF). En este art\'{i}culo se presenta el soporte que se ofrece para la interoperabilidad entre Maude y EMF en una herramienta de gesti\'{o}n de modelos. },
keywords = {Algebraic Specifications, Maude, Model Management, MOMENT},
pubstate = {published},
tppubtype = {conference}
}