Publications
L. Carnevali, M. Paolieri, R. Reali, L. Scommegna, E. Vicario
Cost-Effective Software Rejuvenation Combining Time-Based and Inspection-Based Policies
IEEE Transactions on Emerging Topics in Computing, 2024
Software rejuvenation is a proactive maintenance technique that counteracts software aging by restarting a system, making selection of rejuvenation times critical to improve reliability without incurring excessive downtime costs. Various stochastic models of Software Aging and Rejuvenation (SAR) have been developed, mostly having an underlying stochastic process in the class of Continuous Time Markov Chains (CTMCs), Semi-Markov Processes (SMPs), and Markov Regenerative Processes (MRGPs) under the enabling restriction, requiring that at most one general (GEN), i.e., non-Exponential, timer be enabled in each state. We present a SAR model with an underlying MRGP under the bounded regeneration restriction, allowing for multiple GEN timers to be concurrently enabled in each state. This expressivity gain not only supports more accurate fitting of duration distributions from observed statistics, but also enables the definition of mixed rejuvenation strategies combining time-based and inspection-based policies, where the time to the next inspection or rejuvenation depends on the outcomes of diagnostic tests. Experimental results show that replacing GEN timers with Exponential timers with the same mean (to satisfy the enabling restriction) yields inaccurate rejuvenation policies, and that mixed rejuvenation outperforms time-based rejuvenation in maximizing reliability, though at the cost of an acceptable decrease in availability.

@article{carnevali2024cost,
  author = {Carnevali, Laura and Paolieri, Marco and Reali, Riccardo and Scommegna, Leonardo and Vicario, Enrico},
  title = {Cost-Effective Software Rejuvenation Combining Time-Based and Inspection-Based Policies},
  journal = {IEEE Transactions on Emerging Topics in Computing},
  year = {2024},
  pages = {1-16},
  doi = {10.1109/TETC.2024.3475214}
}

L. Scommegna, M. Becattini, G. Fontani, L. Paroli, E. Vicario
Quantitative evaluation of software rejuvenation of a pool of service replicas
International Workshop on Software Aging and Rejuvenation (WoSAR), 2024
Cloud-based systems require the management of large volumes of requests while maintaining specific levels of availability and performance. Each service is thus replicated into a pool of identical replicas. This allows for load distribution among the pool of replicas and a greater degree of fault tolerance compared to a single instance of the service that stands as a single point of failure. The high availability and scalability requirements, coupled with the phenomenon of software aging, have made the replica-based approach pervasive in modern online services. In such configurations, the unavailability of a single replica, due to scheduled maintenance or unexpected failures, does not imply the unavailability of the whole system but rather an increase in the load of the remaining replicas. This identifies a performability problem in which the system can tolerate a certain number of offline replicas in the pool. However, once a certain threshold is exceeded, the resulting high workload on the remaining online replicas could degrade the performance of the system, potentially leading to a failure in meeting the non-functional requirements. In this work, we study the problem of aging in a pool of service replicas. We characterize two inspection-based rejuvenation strategies that could be implemented in this context, which we identify as uncoordinated and coordinated rejuvenation. We represent them through the formalism of Stochastic Time Petri Nets (STPN) and, through steady-state analysis, we conduct a performability evaluation of both models as the frequency of inspections and the pool size vary.

@inproceedings{scommegna2024quantitative,
  author = {Scommegna, Leonardo and Becattini, Marco and Fontani, Giovanni and Paroli, Leonardo and Vicario, Enrico},
  title = {Quantitative evaluation of software rejuvenation of a pool of service replicas},
  booktitle = {2024 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)},
  year = {2024},
  organization = {IEEE}
}

R. Verdecchia, L. Scommegna, B. Picano, M. Becattini, E. Vicario
Network Digital Twins: A Systematic Review
IEEE Access, 2024
Network management is becoming more complex due to various factors. The growth of IoT increases the number of nodes to control. The combination of Edge and Fog Computing with distributed algorithms makes network synchronization challenging. Softwarized technologies simplify network management but create integration issues with legacy networks. Even in industrial settings where drones and mobile robots are used, proper network management is crucial yet challenging. In this context, digital twins can be used to replicate the structure and behavior of the physical network and at the same time can be used to successfully manage the complexity and heterogeneity of current networks. Despite the rapid growth of interest in the topic, a comprehensive overview of Network Digital Twin research is currently missing. To address this gap, in this paper, we present a systematic review of the Network Digital Twin literature. From the analysis of 138 primary studies, various insights emerge. The Network Digital Twin is a particularly recent concept that has been explored in the literature since 2017 and is experiencing a steady increase to this day. The vast majority of the studies propose solutions to optimize network performance, but there are also many oriented towards other goals such as security and functional suitability. The three most recurrent application domains, as self-reported in the primary studies, are those of smart industry, edge computing, and vehicular. The main research topics aim at network optimization, support for offloading, resource allocation, and floor monitoring, but also support in the implementation of machine learning algorithms such as federated learning. As a conclusion, the Network Digital Twin proves to be a promising emerging field both for academics and practitioners.

@article{verdecchia2024network,
  author = {Verdecchia, Roberto and Scommegna, Leonardo and Picano, Benedetta and Becattini, Marco and Vicario, Enrico},
  title = {Network Digital Twins: A Systematic Review},
  journal = {IEEE Access},
  year = {2024}
}

M. Becattini, L. Carnevali, G. Fontani, L. Paroli, L. Scommegna, M. Masoumi, I. de Miguel, F. Brasca
Dynamic MEC resource management for URLLC in Industry X.0 scenarios: a quantitative approach based on digital twin networks
IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd), 2024
The use of innovative technologies in Industry X.0 scenarios, including, but not limited to, Augmented Reality/Virtual Reality (AR/VR), autonomous robotics, and advanced security systems, requires applicative interconnections between a large number of IoT machines and devices. These interconnections must support Ultra-Reliable and Low Latency Communications (URLLC) to optimize the usage and performance of devices related to these new technologies. Notably, the concepts of low latency and reliability are inherently linked; from a device perspective, any service exceeding specific response time thresholds is deemed unresponsive, and thus unreliable. In this paper, we present an innovative approach to quantitatively evaluate reliability in URLLC settings, leveraging the use of Digital Twin Networks (DTN), with a specific focus on Mobile Edge Computing (MEC) and its application to Industry X.0 scenarios. Results obtained so far show the potential of this approach to confer better request handling capabilities on the MEC, by providing a near real-time re-configuration ability within the MEC itself.

@inproceedings{becattini2024dynamic,
  author = {Becattini, Marco and Carnevali, Laura and Fontani, Giovanni and Paroli, Leonardo and Scommegna, Leonardo and Masoumi, Maryam and de Miguel, Ignacio and Brasca, Fabrizio},
  title = {Dynamic MEC resource management for URLLC in Industry X.0 scenarios: a quantitative approach based on digital twin networks},
  booktitle = {IEEE International Workshop on Metrology for Industry 4.0 \& IoT (MetroInd)},
  publisher = {IEEE},
  year = {2024}
}

K. Maggi, R. Verdecchia, L. Scommegna, E. Vicario
CLAIM: a Lightweight Approach to Identify Microservices in Dockerized Environments
International Conference on Evaluation and Assessment in Software Engineering (EASE), 2024
Background: Over the past decade, microservices have surged in popularity within software engineering. From a research viewpoint, mining studies are frequently employed to assess the evolution of diverse microservice properties. Despite the growing need, a validated static method to swiftly identify microservices seems to be currently missing in the literature. Aims: We present CLAIM, a lightweight static approach that analyzes configuration files to identify microservices in Dockerized environments, specifically designed with mining studies in mind. Method: To validate CLAIM, we conduct an empirical experiment comprising 20 repositories, 160 microservices, and 13k commits. A priori and manually defined ground truths are used to evaluate CLAIM's microservice identification effectiveness and efficiency. Results: CLAIM detects microservices with an accuracy of 82.0%, reports a median execution time of 61ms per commit, and requires in the worst case scenario 125.5s to analyze the history of a repository comprising 1509 commits. With respect to its closest competitor, CLAIM shines most in terms of false positive reduction (-40%). Conclusions: While not able to reconstruct a microservice architecture in its entirety, CLAIM is an effective and efficient option to swiftly identify microservices in Dockerized environments, and seems especially fitted for software evolution mining studies.

@inproceedings{maggi2023claim,
  author = {Maggi, Kevin and Verdecchia, Roberto and Scommegna, Leonardo and Vicario, Enrico},
  title = {CLAIM: a Lightweight Approach to Identify Microservices in Dockerized Environments},
  booktitle = {International Conference on Evaluation and Assessment in Software Engineering (EASE)},
  publisher = {ACM},
  year = {2024}
}
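As an informal aside, the kind of static configuration-file scan that CLAIM builds on can be pictured with a minimal sketch along the following lines. This is not the CLAIM implementation: the compose file names considered, the find_services helper, and the reliance on PyYAML are assumptions made purely for illustration.

```python
# Illustrative sketch (not the CLAIM tool): enumerate candidate microservices in a
# repository by reading the "services" section of Docker Compose files.
import sys
from pathlib import Path

import yaml  # assumes PyYAML is installed

COMPOSE_NAMES = {"docker-compose.yml", "docker-compose.yaml", "compose.yml", "compose.yaml"}

def find_services(repo_root: str) -> dict[str, list[str]]:
    """Map each Compose file found under repo_root to the service names it declares."""
    services_by_file = {}
    for path in Path(repo_root).rglob("*"):
        if path.is_file() and path.name in COMPOSE_NAMES:
            try:
                spec = yaml.safe_load(path.read_text()) or {}
            except yaml.YAMLError:
                continue  # skip malformed files rather than failing the whole scan
            if isinstance(spec, dict):
                services_by_file[str(path)] = sorted((spec.get("services") or {}).keys())
    return services_by_file

if __name__ == "__main__":
    for compose_file, services in find_services(sys.argv[1]).items():
        print(f"{compose_file}: {len(services)} candidate microservices -> {services}")
```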
B. Picano, R. Reali, L. Scommegna, E. Vicario
Elastic Autoscaling for Distributed Workflows in MEC Networks
Workshop on Cloud Computing Project and Initiatives (CCPI), 2024
With the recent advancements in computing technologies, new paradigms have emerged enabling users to access a large variety of distributed resources, overcoming several limitations of localized applications and information storage. Among these paradigms, Mobile Edge Computing (MEC) places storage and computing capabilities at the edge of the network, significantly decreasing congestion and service response times, at the cost of limited capacities. Within this context, the emergence of novel computationally intensive services has triggered the necessity to design algorithms that adaptively scale resources, achieving solutions tailored to traffic demand. In this paper, we present a preliminary scaling method to determine the resource provisioning of complex workflows of web services that are distributed on a MEC infrastructure, with the intent of improving the distribution of the end-to-end response time of the workflow. The method is designed to run compositionally, exploiting a structured hierarchical workflow representation, enabling efficient top-down determination of the resource provisioning. The method is also formalized to act considering the inherent limitations and complexities of a MEC network landscape. In so doing, we demonstrate the applicability of the approach on two synthetic application scenarios, confirming the validity of the proposed elastic scheme in optimizing resource management within a resource-constrained MEC network.

@inproceedings{Picano2024Elastic,
  author = {Picano, Benedetta and Reali, Riccardo and Scommegna, Leonardo and Vicario, Enrico},
  title = {Elastic Autoscaling for Distributed Workflows in MEC Networks},
  booktitle = {Advanced Information Networking and Applications},
  publisher = {Springer Nature Switzerland},
  year = {2024},
  pages = {151--160},
  isbn = {978-3-031-57931-8}
}

A. Botta, R. Canonico, A. Navarro, G. Stanco, G. Ventre, A. Buonocunto, A. Fresa, E. Gentile, L. Scommegna, E. Vicario
Edge to Cloud Network Function Offloading in the ADAPTO Framework
Workshop on Cloud Computing Project and Initiatives (CCPI), 2024
As telcos increasingly adopt cloud-native solutions, classic resource management problems within cloud environments have surfaced. While considerable attention has been directed toward the conventional challenges of dynamically scaling resources to adapt to variable workloads, the 5G promises of Ultra-Reliable Low Latency Communication (URLLC) remain far from being realized. To address this challenge, the current trend leans toward relocating network functions closer to the edge, following the paradigm of Mobile Edge Computing (MEC), or exploring hybrid approaches. The adoption of a hybrid cloud architecture emerges as a solution to alleviate the problem of the lack of resources at the edge by offloading network functions and workload from the Edge Cloud (EC) to the Central Cloud (CC) when edge resources reach their capacity limits. This paper focuses on the dynamic task offloading of network functions from ECs to CCs within cloud architectures in the ADAPTO framework.

@inproceedings{Botta2024Edge,
  author = {Botta, Alessio and Canonico, Roberto and Navarro, Annalisa and Stanco, Giovanni and Ventre, Giorgio and Buonocunto, Antonio and Fresa, Antonio and Gentile, Vincenzo and Scommegna, Leonardo and Vicario, Enrico},
  title = {Edge to Cloud Network Function Offloading in the ADAPTO Framework},
  booktitle = {Advanced Information Networking and Applications},
  publisher = {Springer Nature Switzerland},
  year = {2024},
  pages = {69--78},
  isbn = {978-3-031-57931-8}
}

L. Carnevali, S. Cerboni, B. Picano, L. Scommegna, E. Vicario
An observation metamodel for dependability tools
European Dependable Computing Conference (EDCC), 2024
FaultFlow is a library for modeling and evaluation of dependability of component-based systems. It represents the duration to occurrence and propagation of faults across the hierarchy of components through non-Markovian distributions, facilitating the fitting of observed data and design assumptions. Additionally, FaultFlow can be extended to simulate the system behavior and generate synthetic time series encoding occurrences of faults and failures and results of diagnostic tests. Time series can in turn be employed to train and test data-driven methods aimed at various tasks, notably failure prediction. As a first step in this direction, we define a flexible and extensible observation metamodel for FaultFlow, representing type and time of observations of the system behavior, and facilitating definition of monitoring policies.

@inproceedings{carnevali2024observation,
  author = {Carnevali, Laura and Cerboni, Stefania and Picano, Benedetta and Scommegna, Leonardo and Vicario, Enrico},
  title = {An observation metamodel for dependability tools},
  booktitle = {European Dependable Computing Conference (EDCC)},
  publisher = {Springer},
  year = {2024}
}

R. Verdecchia, L. Scommegna, E. Vicario, T. Pecorella
Designing a Future-Proof Reference Architecture for Network Digital Twins
Post-proceedings of the European Conference on Software Architecture, 2024
As the complexity, distribution, and heterogeneity of networks continue to grow, how to architect and monitor these networking environments is becoming an increasingly critical open issue. Digital twins, which can replicate the structure and behavior of a physical network, are seen as a potential solution to address the problem. While reference architectures for digital twins exist in other fields, a comprehensive reference architecture for the networking context has yet to be developed. This paper discusses the need for such a reference architecture and outlines the key elements necessary for its design. We present the findings of a preliminary survey that explores the need for a network digital twin reference architecture, the crucial information it should include, and practical insights into its design. The survey results confirm that existing standards are inadequate for modeling network digital twins, outlining the necessity of a new reference architecture. We then articulate our position on the need for a reference architecture for network digital twins, focusing on three main aspects, namely: (i) digital twins of what, (ii) for what, and (iii) how to deploy them. We then proceed to delineate the fundamental obstacles that a reference architecture must confront, in tandem with the essential characteristics it needs to embody to successfully navigate these challenges. As a conclusion, we present our vision for the reference architecture and outline the main research steps we plan to take to address this open problem. Our ultimate goal is to tightly collaborate both with the networking and digital twin software architecture communities to jointly establish a sound network digital twin architecture of the future.

@inproceedings{verdecchia2024designing,
  author = {Verdecchia, Roberto and Scommegna, Leonardo and Pecorella, Tommaso and Vicario, Enrico},
  title = {Designing a Future-Proof Reference Architecture for Network Digital Twins},
  booktitle = {Post-proceedings of the European Conference on Software Architecture},
  publisher = {Springer},
  year = {2024}
}

R. Verdecchia, K. Maggi, L. Scommegna, E. Vicario
Technical Debt in Microservices: A Mixed-Method Case Study
Post-proceedings of the European Conference on Software Architecture, 2024
Background: Despite the rising interest of both academia and industry in microservice-based architectures and technical debt, the landscape remains uncharted when it comes to exploring the technical debt evolution in software systems built on this architecture. Aims: This study aims to unravel how technical debt evolves in software-intensive systems that utilize microservice architecture, focusing on (i) the patterns of its evolution, and (ii) the correlation between technical debt and the number of microservices. Method: We employ a mixed-method case study on an application with 13 microservices, 977 commits, and 38k lines of code. Our approach combines repository mining, automated code analysis, and manual inspection. The findings are discussed with the lead developer in a semi-structured interview, followed by a reflexive thematic analysis. Results: Despite periods of no TD growth, TD generally increases over time. TD variations can occur irrespective of microservice count or commit activity. TD and microservice numbers are often correlated. Adding or removing a microservice impacts TD similarly, regardless of existing microservice count. Conclusions: Developers must be cautious about the potential technical debt they might introduce, irrespective of the development activity conducted or the number of microservices involved. Maintaining steady technical debt during prolonged periods of time is possible, but growth, particularly during innovative phases, may be unavoidable. While monitoring technical debt is the key to start managing it, technical debt code analysis tools must be used wisely, as their output always necessitates also a qualitative system understanding to gain the complete picture.

@inproceedings{verdecchia2024technical,
  author = {Verdecchia, Roberto and Maggi, Kevin and Scommegna, Leonardo and Vicario, Enrico},
  title = {Technical Debt in Microservices: A Mixed-Method Case Study},
  booktitle = {Post-proceedings of the European Conference on Software Architecture},
  publisher = {Springer},
  year = {2024}
}

N. Bertocci, L. Carnevali, L. Scommegna, E. Vicario
Efficient derivation of optimal signal schedules for multimodal intersections
Simulation Modelling Practice and Theory, 2024
Tramways decrease time, cost, and environmental impact of urban transport, while requiring multimodal intersections where trams arriving with nominal periodic timetables may have right of way over road vehicles. Quantitative evaluation of stochastic models enables early exploration and online adaptation of design choices, identifying operational parameters that mitigate impact on road transport performance. We present an efficient analytical approach for offline scheduling of traffic signals at multimodal intersections among road traffic flows and tram lines with right of way, minimizing the maximum expected percentage of queued vehicles of each flow with respect to sequence and duration of phases. To this end, we compute the expected queue size over time of each vehicle flow through a compositional approach, decoupling analyses of tram and road traffic. On the one hand, we define microscopic models of tram traffic, capturing periodic tram departures, bounded delays, and travel times with general (i.e., non-Exponential) distribution with bounded support, open to represent arrival and travel processes estimated from operational data. On the other hand, we define macroscopic models of road transport flows as finite-capacity vacation queues, with general vacation times determined by the transient probability that the intersection is available for vehicles, efficiently evaluating the exact expected queue size over time. We show that the distribution of the expected queue size of each flow at multiples of the hyper-period, resulting from temporization of nominal tram arrivals and vehicle traffic signals, reaches a steady state within a few hyper-periods. Therefore, transient analysis starting from this steady-state distribution and lasting for the hyper-period duration turns out to be sufficient to characterize road transport behavior over time intervals of arbitrary duration. We implemented the proposed approach in the novel OMNIBUS Java library, and we compared it against Simulation of Urban MObility (SUMO). Experimental results on case studies of real complexity with time-varying parameters show the approach effectiveness at identifying optimal traffic signal schedules, notably exploring in a few minutes hundreds of schedules that would require tens of hours in SUMO.

@article{bertocci2024efficient,
  author = {Bertocci, Nicola and Carnevali, Laura and Scommegna, Leonardo and Vicario, Enrico},
  title = {Efficient derivation of optimal signal schedules for multimodal intersections},
  journal = {Simulation Modelling Practice and Theory},
  pages = {102912},
  year = {2024},
  publisher = {Elsevier}
}

L. Scommegna, R. Verdecchia, E. Vicario
Unveiling Faulty User Sequences: A Model-based Approach to Test Three-Tier Software Architectures
Journal of Systems and Software, 2024
Context: When testing three-tiered architectures, strategies often rely on superficial information, e.g., black-box input. However, the correct behavior of software-intensive systems based on such an architectural pattern also depends on the logic hidden behind the interface. Verifying the response process is thus often complex and requires ad-hoc strategies. Objective: We propose an approach to identify faults hidden behind the presentation layer. The model-based approach uses an architectural abstraction called managed component Data Flow Graph (mcDFG). The mcDFG is aware of the interactions between all layers of the architecture and guides the generation of tests based on different mcDFG coverage criteria to identify faults in the business logic. Method: To evaluate the approach viability, we consider a three-tiered web application and 32 faults. The fault detection capability is assessed by comparing a set of test suites created by following our method and a set of test suites developed by utilizing traditional testing strategies. Results: The collected data show that the proposed model-based approach is a viable option to identify faults hidden in the logic layer, as it can outperform standard strategies based solely on the presentation layer while keeping the number of test cases and number of interactions per test case low.

@article{scommegna2024unveiling,
  author = {Scommegna, Leonardo and Verdecchia, Roberto and Vicario, Enrico},
  title = {Unveiling faulty user sequences: A model-based approach to test three-tier software architectures},
  journal = {Journal of Systems and Software},
  pages = {112015},
  year = {2024},
  publisher = {Elsevier}
}

L. Carnevali, M. Paolieri, B. Picano, R. Reali, L. Scommegna, E. Vicario
A Quantitative Approach to Coordinated Scaling of Resources in Complex Cloud Computing Workflows
European Workshop on Performance Engineering (EPEW), 2023
Resource scaling is widely employed in cloud computing to adapt system operation to internal (i.e., application) and external (i.e., environment) changes. We present a quantitative approach for coordinated vertical scaling of resources in cloud computing workflows, aimed at satisfying an agreed Service Level Objective (SLO) by improving the workflow end-to-end (e2e) response time distribution. Workflows consist of IaaS services running on dedicated clusters, statically reserved before execution. Services are composed through sequence, choice/merge, and balanced split/join blocks, and have generally distributed (i.e., non-Markovian) durations possibly over bounded supports, facilitating fitting of analytical distributions from observed data. Resource allocation is performed through an efficient heuristic guided by the mean makespans of sub-workflows. The heuristic performs a top-down visit of the hierarchy of services, and it exploits an efficient compositional method to derive the response time distribution and the mean makespan of each sub-workflow. Experimental results on a workflow with high concurrency degree appear promising for the feasibility and effectiveness of the approach.

@inproceedings{carnevali2023quantitative,
  author = {Carnevali, Laura and Paolieri, Marco and Picano, Benedetta and Reali, Riccardo and Scommegna, Leonardo and Vicario, Enrico},
  title = {A Quantitative Approach to Coordinated Scaling of Resources in Complex Cloud Computing Workflows},
  booktitle = {European Workshop on Performance Engineering},
  pages = {309--324},
  year = {2023},
  organization = {Springer}
}

R. Verdecchia, L. Scommegna, E. Vicario, T. Pecorella
Network Digital Twins: Towards a Future Proof Reference Architecture
International Workshop on Digital Twin Architecture (TwinArch), 2023
With the ever-growing popularization of complex, distributed, and heterogeneous networks, how to architect and monitor networking environments is becoming a crucial open problem. In this context, digital twins can be used to mimic the structure and behavior of a physical network. Albeit digital twin reference architectures exist for other domains, to date, no comprehensive reference architecture for digital twins in the networking context has yet been established. In this position paper, we discuss the current need for a network digital twin reference architecture, and describe the essential elements in the road ahead to design it. We open the paper with the results of a preliminary survey we conducted to investigate the need for the reference architecture, the key information it should convey, and more practical insights on how to design it. Among other results, the survey corroborated that current standards are not best fitted to model network digital twins, and that a new reference architecture is needed. Following, we document our position on the need for a reference architecture for network digital twins. Our discussion is outlined as three main facets, namely: (i) digital twins of what, (ii) for what, and (iii) how to deploy them. As a conclusion, we outline our vision on the reference architecture, and the main research steps we plan to undertake to tackle the problem. As an end goal, we intend to reach out to both the networking and digital twin software architecture communities, towards the joint establishment of a future-proof digital twin network architecture.

@inproceedings{verdecchia2023towards,
  author = {Verdecchia, Roberto and Scommegna, Leonardo and Vicario, Enrico and Pecorella, Tommaso},
  title = {Network Digital Twins: Towards a Future Proof Reference Architecture},
  booktitle = {International Workshop on Digital Twin Architecture (TwinArch)},
  publisher = {Springer},
  year = {2023}
}

R. Verdecchia, K. Maggi, L. Scommegna, E. Vicario
Tracing the Footsteps of Technical Debt in Microservices: A Preliminary Case Study
International Workshop on Quality in Software Architecture (QUALIFIER), 2023
Background: Albeit the growing academic and industrial interest in microservice architectures and technical debt, to date no study has aimed to investigate the evolution characteristics of technical debt in software-intensive systems based on such architecture. Aims: The goal of this study is to understand how technical debt evolves in microservice-based software-intensive systems, in terms of (i) evolution trends, and (ii) relation between technical debt and number of microservices. Method: We adopt a case study based on an application comprising 13 microservices, 977 commits, and 38k lines of code. The research method is based on repository mining and automated source code analysis complemented via manual code inspection. Results: While long periods of development without TD increase are observed, TD overall increases in time. TD variations can happen regardless of the number of microservices and development activity considered in a commit. TD and number of microservices are strongly correlated, albeit not always. Adding (or removing) a microservice has a similar impact on TD regardless of the number of microservices already present in a software-intensive system. Conclusions: Adherence to microservice architecture principles might keep technical debt compartmentalized within microservices and hence more manageable. Developers should pay keen attention to the technical debt they may introduce, regardless of the number of microservices they touch with a commit and the development activity they carry out. Keeping technical debt constant during the evolution of a microservice-based architecture is possible, but the growth of technical debt while a software-intensive system becomes bigger and more complex might be inevitable.

@inproceedings{verdecchia2023tracing,
  author = {Verdecchia, Roberto and Maggi, Kevin and Scommegna, Leonardo and Vicario, Enrico},
  title = {Tracing the Footsteps of Technical Debt in Microservices: A Preliminary Case Study},
  booktitle = {International Workshop on Quality in Software Architecture (QUALIFIER)},
  publisher = {Springer},
  year = {2023}
}

B. Picano, L. Scommegna, E. Vicario, R. Fantacci
Echo State Learning for User Trajectory Prediction to Minimize Online Game Breaks in 6G Terahertz Networks
Journal of Sensor and Actuator Networks, 2023
Mobile online gaming is constantly growing in popularity and is expected to be one of the most important applications of upcoming sixth generation networks. Nevertheless, it remains challenging for game providers to support it, mainly due to its intrinsic and ever-stricter need for service continuity in the presence of user mobility. In this regard, this paper proposes a machine learning strategy to forecast user channel conditions, aiming at guaranteeing a seamless service whenever a user is involved in a handover, i.e., moving from the coverage area of one base station towards another. In particular, the proposed channel condition prediction approach involves the exploitation of an echo state network, an efficient class of recurrent neural network, that is empowered with a genetic algorithm to perform parameter optimization. The echo state network is applied to improve user decisions regarding the selection of the serving base station, avoiding game breaks as much as possible to lower game lag time. The validity of the proposed framework is confirmed by simulations in comparison to the long short-term memory approach and another alternative method, aimed at thoroughly testing the accuracy of the learning module in forecasting user trajectories and in reducing game breaks or lag time, with a focus on a sixth generation network application scenario.

@article{picano2023echo,
  author = {Picano, Benedetta and Scommegna, Leonardo and Vicario, Enrico and Fantacci, Romano},
  title = {Echo State Learning for User Trajectory Prediction to Minimize Online Game Breaks in 6G Terahertz Networks},
  journal = {Journal of Sensor and Actuator Networks},
  volume = {12},
  number = {4},
  pages = {58},
  year = {2023},
  publisher = {MDPI}
}
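As an informal illustration of the kind of predictor this paper builds on, a minimal echo state network can be sketched in a few lines of Python/NumPy: a fixed random reservoir is driven by the input sequence and only the linear readout is trained. This is not the authors' implementation; the dimensions, hyper-parameters, and toy trajectory below are assumptions chosen only to make the sketch runnable (the paper additionally tunes parameters with a genetic algorithm, which is omitted here).

```python
# Minimal echo state network sketch (illustrative only, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 2, 200                      # hypothetical sizes (e.g., 2D positions)
spectral_radius, leak, ridge = 0.9, 0.3, 1e-6

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))   # scale for the echo state property

def run_reservoir(inputs):
    """Collect leaky-integrated reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy data: predict the next 2D position from the current one.
T = 500
trajectory = np.column_stack([np.sin(np.linspace(0, 20, T + 1)),
                              np.cos(np.linspace(0, 15, T + 1))])
X = run_reservoir(trajectory[:-1])
Y = trajectory[1:]

# Ridge-regression readout: only these weights are trained.
W_out = Y.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n_res))
pred = X @ W_out.T
print("training MSE:", float(np.mean((pred - Y) ** 2)))
```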
L. Carnevali, M. Paolieri, R. Reali, L. Scommegna, E. Vicario
A Markov Regenerative Model of Software Rejuvenation Beyond the Enabling Restriction
International Workshop on Software Aging and Rejuvenation (WoSAR), 2022
Software rejuvenation is a proactive maintenance technique that counteracts software aging by restarting a system or some of its components. We present a non-Markovian model of software rejuvenation where the underlying stochastic process is a Markov Regenerative Process (MRGP) beyond the enabling restriction, i.e., beyond the restriction of having at most one general (GEN, i.e., non-exponential) timer enabled in each state. The use of multiple concurrent GEN timers allows more accurate fitting of duration distributions from observed statistics (e.g., mean and variance), as well as better model expressiveness, enabling the formulation of mixed rejuvenation strategies that combine time-triggered and event-triggered rejuvenation. We leverage the functions for regenerative analysis based on stochastic state classes of the ORIS tool (through its SIRIO library) to evaluate this class of models and to select the rejuvenation period achieving an optimal tradeoff between two steady-state metrics, availability and undetected failure probability. We also show that, when GEN timers are replaced by exponential timers with the same mean (to satisfy the enabling restriction), transient and steady-state measures are affected, resulting in inaccurate rejuvenation policies.

@inproceedings{carnevali2022markov,
  author = {Carnevali, Laura and Paolieri, Marco and Reali, Riccardo and Scommegna, Leonardo and Vicario, Enrico},
  title = {A Markov Regenerative Model of Software Rejuvenation Beyond the Enabling Restriction},
  booktitle = {2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)},
  pages = {138--145},
  year = {2022},
  organization = {IEEE}
}

L. Carnevali, M. Paolieri, R. Reali, L. Scommegna, F. Tammaro, E. Vicario
Using the ORIS Tool and the SIRIO Library for Model-Driven Engineering of Quantitative Analytics
European Workshop on Performance Engineering (EPEW), 2022
We present a Model-Driven Engineering (MDE) approach to quantitative evaluation of stochastic models through the ORIS tool and the SIRIO library. As an example, the approach is applied to the case of a tramway line with a reduced number of passengers to contain the spread of infection during a pandemic. Specifically, we provide a meta-model for this scenario, where, at each stop, only a certain number of people can ride the tram depending on the current tram capacity, the length of the queue of people waiting at the stop, and the number of passengers on the tram. Then, the ORIS tool and the SIRIO library are used as a software platform to derive a Stochastic Time Petri Net (STPN) representation for each tramway stop and to perform its regenerative transient analysis to obtain quantitative measures of interest, such as the expected number of people waiting at each stop and the expected number of tram passengers over time. Experimental results show that the approach facilitates exploration of the space of design choices, providing insight about the effects of parameter changes on quantitative measures of interest and allowing balanced queue sizes at different stops.

@inproceedings{carnevali2022using,
  author = {Carnevali, Laura and Paolieri, Marco and Reali, Riccardo and Scommegna, Leonardo and Tammaro, Federico and Vicario, Enrico},
  title = {Using the ORIS Tool and the SIRIO Library for Model-Driven Engineering of Quantitative Analytics},
  booktitle = {European Workshop on Performance Engineering},
  pages = {200--215},
  year = {2022},
  organization = {Springer}
}

J. Parri, S. Sampietro, L. Scommegna, E. Vicario
Evaluation of software aging in component-based web applications subject to soft errors over time
International Workshop on Software Aging and Rejuvenation (WoSAR), 2021
Modern Web Applications rely on architectures usually designed with modular software components whose behaviour is shaped over fundamental principles and characteristics of the HTTP protocol. Dependency Injection frameworks support designers and developers in the automated management of the component lifecycle, binding components to predefined scopes and thus delegating to an outer and independent participant the responsibility for the creation, destruction, and inter-dependency definition of runtime instances. In this way, different scope configurations implicitly act as different software micro-rejuvenation policies, emphasising the importance of choices in the assignment of component scopes; while supporting stateful behaviour through data-retention mechanisms, wider scopes may largely expose in-memory components to software aging processes. We report a practical experience illustrating how the memory maintained in the business logic of a Web Application may give rise to aging processes affecting the runtime behaviour of a stateful web application, and we show how this threat is countered by micro-rejuvenation at the component level, implemented by the container under different assignment strategies for component scopes. To this end, we propose an accelerated testing approach relying on a fault injection process that executes an event-driven simulation of soft errors arising over time. Experimentation on an exemplary web application implemented on the Java Enterprise Edition stack shows how manifestation, correction, and propagation of errors are conditioned by the different scopes assigned to components by the software developer.

@inproceedings{parri2021evaluation,
  author = {Parri, Jacopo and Sampietro, Samuele and Scommegna, Leonardo and Vicario, Enrico},
  title = {Evaluation of software aging in component-based web applications subject to soft errors over time},
  booktitle = {2021 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)},
  pages = {25--32},
  year = {2021},
  organization = {IEEE}
}

S. Capobianco, L. Scommegna, S. Marinai
Historical handwritten document segmentation by using a weighted loss
Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR), 2018
In this work we propose a deep architecture to identify text and non-text regions in historical handwritten documents. In particular, we adopt the U-net architecture in combination with a suitable weighted loss function in order to put more emphasis on the most critical areas. We define a weighted map to balance the pixel frequency among classes and to guide the training with local prior rules. In the experiments we evaluate the performance of the U-net architecture and of the weighted training on a benchmark dataset. We obtain good results on global metrics, improving global and local classification scores.

@inproceedings{capobianco2018historical,
  author = {Capobianco, Samuele and Scommegna, Leonardo and Marinai, Simone},
  title = {Historical handwritten document segmentation by using a weighted loss},
  booktitle = {Artificial Neural Networks in Pattern Recognition: 8th IAPR TC3 Workshop, ANNPR 2018, Siena, Italy, September 19--21, 2018, Proceedings 8},
  pages = {395--406},
  year = {2018},
  organization = {Springer}
}
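For readers unfamiliar with the idea of a weighted segmentation loss, the following is a minimal sketch of a pixel-weighted cross-entropy of the kind described above. It is not the paper's implementation: the inverse-frequency class balancing, the emphasis map, and the toy data are assumptions made only for illustration.

```python
# Illustrative sketch (not the paper's code): pixel-wise weighted cross-entropy
# for binary text / non-text segmentation.
import numpy as np

def weighted_cross_entropy(probs, labels, emphasis=None, eps=1e-7):
    """probs: (H, W) predicted text probability; labels: (H, W) ground truth in {0, 1}."""
    # Inverse-frequency class balancing computed from the ground truth.
    freq_text = labels.mean()
    class_weights = np.where(labels == 1,
                             1.0 / (freq_text + eps),
                             1.0 / (1.0 - freq_text + eps))
    weights = class_weights if emphasis is None else class_weights * emphasis
    ce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    return float((weights * ce).sum() / weights.sum())

# Toy example: a 64x64 page crop where roughly 10% of pixels are text.
rng = np.random.default_rng(1)
labels = (rng.random((64, 64)) < 0.1).astype(float)
probs = np.clip(labels * 0.8 + rng.normal(0, 0.1, labels.shape), 0.01, 0.99)
emphasis = 1.0 + labels   # hypothetical local rule: double the weight on text pixels
print("weighted CE:", weighted_cross_entropy(probs, labels, emphasis))
```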