% 2022
@article{Bernardi:AUSE:2022,
title = {DICE simulation: a tool for software performance assessment at the design stage},
author = {Simona Bernardi and Abel G\'{o}mez and Jos\'{e} Merseguer and Diego Perez-Palacin and Jos\'{e} I. Requeno},
url = {https://rdcu.be/cJ2Wt},
doi = {10.1007/s10515-022-00335-z},
issn = {1573-7535},
year = {2022},
date = {2022-03-28},
urldate = {2022-03-28},
journal = {Automated Software Engineering},
volume = {29},
pages = {36},
abstract = {In recent years, we have seen many performance fiascos in the deployment of new systems, such as the US health insurance web. This paper describes the functionality and architecture, as well as success stories, of a tool that helps address these types of issues. The tool allows assessing software designs regarding quality, in particular performance and reliability. Starting from a UML design with quality annotations, the tool applies model-transformation techniques to yield analyzable models. Such models are then leveraged by the tool to compute quality metrics. Finally, quality results, over the design, are presented to the engineer, in terms of the problem domain. Hence, the tool is an asset for the software engineer to evaluate system quality through software designs. While leveraging the Eclipse platform, the tool uses UML and the MARTE, DAM and DICE profiles for the system design and the quality modeling.},
keywords = {Data-Intensive Applications (DIA), DICE, Model-Driven Engineering (MDE), performance evaluation tools, software performance, Unified Modeling Language (UML)},
pubstate = {published},
tppubtype = {article}
}
% 2018
@conference{Gomez:SAM:2018,
title = {Enabling Performance Modeling for the Masses: Initial Experiences},
author = {Abel G\'{o}mez and Connie U. Smith and Amy Spellmann and Jordi Cabot},
editor = {Ferhat Khendek and Reinhard Gotzhein},
doi = {10.1007/978-3-030-01042-3_7},
isbn = {978-3-030-01042-3},
year = {2018},
date = {2018-09-26},
booktitle = {System Analysis and Modeling. Languages, Methods, and Tools for Systems Engineering},
volume = {11150},
pages = {105--126},
publisher = {Springer International Publishing},
address = {Cham},
series = {Lecture Notes in Computer Science},
abstract = {Performance problems such as sluggish response time or low throughput are especially annoying, frustrating and noticeable to users. Fixing performance problems after they occur results in unplanned expenses and time. Our vision is an MDE-intensive software development paradigm for complex systems in which software designers can evaluate performance early in development, when the analysis can have the greatest impact. We seek to empower designers to do the analysis themselves by automating the creation of performance models out of standard design models. Such performance models can be automatically solved, providing results meaningful to them. In our vision, this automation can be enabled by using model-to-model transformations: First, designers create UML design models embellished with the Modeling and Analysis of Real Time and Embedded systems (MARTE) design specifications; and secondly, such models are transformed to automatically solvable performance models by using QVT. This paper reports on our first experiences when implementing these two initial activities.},
keywords = {Model-Driven Engineering (MDE), Query/View/Transformation (QVT), S-PMIF+, software performance},
pubstate = {published},
tppubtype = {conference}
}
@conference{Smith:WOSPC:2018,
title = {Challenges in Automating Performance Tool Support},
author = {Connie U. Smith and Vittorio Cortellessa and Abel G\'{o}mez and Samuel Kounev and Catalina Llad\'{o} and Murray Woodside},
url = {https://abel.gomez.llana.me/wp-content/uploads/2018/06/smith-2018-wospc.pdf},
doi = {10.1145/3185768.3186410},
isbn = {978-1-4503-5629-9},
year = {2018},
date = {2018-01-01},
booktitle = {Companion of the 2018 ACM/SPEC International Conference on Performance Engineering},
pages = {175--176},
publisher = {ACM},
address = {Berlin, Germany},
series = {ICPE '18},
abstract = {Research and development (R\&D) of new tools for performance analysis faces many challenges from immaturity and lack of documentation of supporting tools and infrastructure, incompatibility of tools, lack of access to realistic case studies and performance parameters for them, validation of results, time required versus benefit of results, subsequent maintenance, and many, many others. Yet tool development is an essential part of practical R\&D. The panelists relay experiences in developing tools, discuss what needs improvement, opportunities in developing R\&D tools, and offer advice for researchers. After introductory remarks from each panelist, there will be a discussion session with the audience.},
keywords = {modeling tools, performance evaluation tools, software performance},
pubstate = {published},
tppubtype = {conference}
}