Recent years have seen growing interest in migrating applications to Cloud/Fog/Edge computing and Internet-of-Things environments. However, due to their high complexity, Cloud/Fog/Edge and Internet-of-Things infrastructures need advanced components for supporting applications and advanced management techniques for increasing efficiency. Adaptivity and autonomous learning abilities are extremely useful for configuring and dynamically adapting these infrastructures to the changing needs of users, as well as for creating adaptable applications. This self-adaptation ability is increasingly essential, especially for non-expert managers and for application designers and developers with limited competence in the tools needed to achieve it. Artificial intelligence is a set of techniques that can greatly improve both the creation of applications and the management of these infrastructures. This talk will discuss the use of artificial intelligence in supporting the creation of applications on cloud/fog/edge and IoT infrastructures, as well as its use in the various aspects of infrastructure management.
Vincenzo Piuri received his Ph.D. in computer engineering from the Polytechnic of Milan, Italy (1989). He has been Full Professor in computer engineering at the University of Milan, Italy, since 2000. He was previously Associate Professor at the Polytechnic of Milan, Italy, Visiting Professor at the University of Texas at Austin, USA, and visiting researcher at George Mason University, USA.
His main research interests are: artificial intelligence, computational intelligence, intelligent systems, machine learning, pattern analysis and recognition, signal and image processing, biometrics, intelligent measurement systems, industrial applications, digital processing architectures, fault tolerance, cloud computing infrastructures, and internet-of-things. Original results have been published in 400+ papers in international journals, proceedings of international conferences, books, and book chapters.
He is Fellow of the IEEE, Distinguished Scientist of ACM, and Senior Member of INNS. He is IEEE Region 8 Director (2023-24), and has been IEEE Vice President for Technical Activities (2015), IEEE Director, President of the IEEE Systems Council, President of the IEEE Computational Intelligence Society, Vice President for Education of the IEEE Biometrics Council, Vice President for Publications of the IEEE Instrumentation and Measurement Society and the IEEE Systems Council, and Vice President for Membership of the IEEE Computational Intelligence Society.
He has been Editor-in-Chief of the IEEE Systems Journal (2013-19). He is Associate Editor of the IEEE Transactions on Cloud Computing and has been Associate Editor of the IEEE Transactions on Computers, the IEEE Transactions on Neural Networks, the IEEE Transactions on Instrumentation and Measurement, and IEEE Access.
He received the IEEE Instrumentation and Measurement Society Technical Award (2002), the IEEE TAB Hall of Honor (2019), and the Rudolf Kalman Professor Title of the Obuda University, Hungary. He is Honorary Professor at: Obuda University, Hungary; Guangdong University of Petrochemical Technology, China; Northeastern University, China; Muroran Institute of Technology, Japan; Amity University, India; Galgotias University, India; Chandigarh University, India; and BIHER, India.
Vast amounts of healthcare data are now being collected from a wide variety of sources including sensors and mobile phone apps. There is the potential to exploit this to create a more personalised, real-time view of a person’s health, especially if it can be combined with other forms of relevant data, including electronic patient records. However, realising this potential is not easy – it is a multi-disciplinary challenge requiring a holistic approach to engineering that combines sensor design, networking, machine learning, statistics, data engineering, psychology, visualisation, and privacy.
In this talk we will discuss promising current directions in digital healthcare, drawing examples from our experience in building data analytics platforms to store, share and analyse heterogeneous data from large, distributed digital healthcare studies. These directions include real-time analytics, decentralised machine learning, edge computing and trusted research environments. We will also discuss the way in which the design of healthcare services needs to change if they are to take full advantage of the opportunities that data science and AI offer to improve patient outcomes, and healthcare system efficiency.
Professor Paul Watson FREng FBCS CEng is Director of the UK’s National Innovation Centre for Data, Professor of Computer Science at Newcastle University, and a Fellow of the Alan Turing Institute. He began his career at Manchester University before moving to industry to design parallel database servers. In 1995 he joined Newcastle University, where his research and teaching have focused on scalable data engineering. Professor Watson is a Fellow of the Royal Academy of Engineering, a Fellow of the British Computer Society, and a Chartered Engineer. He received the 2014 Microsoft Jim Gray eScience Award.
6G is being developed with the goal of catalyzing a trend that started with 5G: wireless infrastructures transforming diverse economic (vertical) sectors. 5G is transforming manufacturing, production, and utility and transportation management, and is enhancing our ability to combat climate change. All of this will be amplified by 6G; but this is not all: in addition, consumer applications will be put at the center of attention, transforming our professional, private, and social/citizen lives.
6G aims at equipping applications with advanced features.
These application features will enforce a tight coupling of the digital/virtual and physical worlds in more facets of our lives than today. Application and service delivery needs to be conducted in accordance with pressing societal demands: utmost environmental sustainability, digital inclusion (reducing the digital divide), and trustworthiness.
These aspects pose requirements on our infrastructure. Massive twinning requires the transfer of vast amounts of information in order to maintain synchronization between the digital and physical worlds. Cobots necessitate support for ultra-low latency and ultra-high reliability in the most cost-efficient manner. XR requires the transfer of large volumes of information in real time, in conjunction with low latency. Sustainability calls for measures that will maximize reliance on renewable energy and reduce the volume of “permanent” hardware infrastructure. Digital inclusion calls for the highest degree of cost effectiveness, which can be achieved through adaptability and flexibility. Finally, trustworthiness places requirements not only on the non-functional (performance) behaviour of our infrastructure, but also on its functional elements, e.g., data handling, the purpose of data processing, and others.
A wave of multidisciplinary technology innovations and evolutions is needed to satisfy these requirements from their various angles (financial, social). The network will be transformed into an infrastructure that exposes and offers services “beyond communications”, e.g., computing resources, data, insights, and predictions; moreover, higher levels of flexibility will become the norm, in line with the goals of sustainability and digital inclusion. In parallel, further spectrum and novel techniques will bring about radio advances in capacity and coverage; moreover, in certain of these bands, “joint communication and sensing” (JCAS) is expected to introduce novel capabilities. The management and orchestration of resources and services will be multi-domain and will call for exposure interfaces and protocols. Last, but by no means least, AI (Artificial Intelligence) will proliferate, e.g., it will be present in service functionality or applied to managing the networks; this will call for advanced means of connecting, supporting, and, above all, governing the hosted AI functions.
The talk will discuss the anticipated applications, elaborate on the societal demands, and present the multidisciplinary technology challenges that will need to be addressed to realize a most exciting era in which our economies can flourish and our societies can prosper.
Prof. Panagiotis Demestichas is a Professor at the University of Piraeus, Department of Digital Systems, School of ICT, Greece. Currently, he focuses on the development of systems for WINGS ICT Solutions (www.wings-ict-solutions.eu), its spin-out ditto (https://ditto-gr.eu/), and Incelligent (www.incelligent.net). WINGS focuses on advanced solutions, leveraging IoT / 5G / AI / AR, for the environment (air quality), utilities and infrastructure (water, energy, gas, transportation, construction), production and manufacturing (aquaculture, agriculture and food safety, logistics, and Industry 4.0), and the service sectors (health, security). Incelligent focuses on products for telecommunication infrastructure, banking, and sectors of digital government. His research interests include B5G / 6G, the cloud-to-extreme-edge continuum, IoT solutions, big-data management, artificial intelligence, and orchestration / diagnostics / intent-oriented mechanisms. He holds a Diploma and a Ph.D. in Electrical Engineering from the National Technical University of Athens (NTUA). He holds patents, has published numerous articles and research papers, and is a member of the Association for Computing Machinery (ACM) and a Senior Member of the IEEE.
Cancer immunotherapy has been among the most promising breakthroughs in oncology, particularly in the case of immune checkpoint inhibitors; however, the effective response rate remains quite low, at only about 20-30%. In this presentation, I will talk about our recent work with molecular modeling and machine learning, which solves one mystery behind this low response rate. We found that patients with a certain HLA genotype (HLA-B44) have consistently higher survival rates, while patients with another type (HLA-B15) have much poorer survival rates. Large-scale molecular dynamics simulations further reveal that the HLA-B15 proteins associated with poorer therapeutic outcomes have structural appendages that close over the cancer neoantigens with much less flexibility. HLA typing might thus serve as a useful biomarker in future immunotherapy. Two effective neoantigens, Neil3 and Myole, are further identified for bladder cancer. The same techniques have also been applied to the design and development of HIV and T1D vaccines, which have been of great interest in recent years. With a combined in silico and in vivo approach, we studied the HLA-peptide-TCR interactions of multiple clonotypes specific for a well-defined HIV-1 epitope, and found that effective and ineffective clonotypes bind to the terminal portions of the HLA-peptide through similar salt bridges, but their hydrophobic side-chain packings can be very different, which accounts for the major part of the differences among these clonotypes. Meanwhile, a new x-autoantigen from a dual expressor (X-cell) has been identified for T1D patients, which shows a super-potent binding affinity to HLA-DQ8, the main risk allele for T1D. Newly designed autoantigen mutants might serve as HLA-DQ8 blockers or vaccine candidates.
Professor Ruhong Zhou, AAAS Fellow and APS Fellow, is currently Qiushi Chair Professor; Dean of the College of Life Sciences; Dean of the Shanghai Institute for Advanced Study; and Director of the Institute of Quantitative Biology at Zhejiang University, and an Adjunct Professor in the Department of Chemistry at Columbia University. Before that, he was a Distinguished Research Staff Scientist and Head of the Soft Matter Science Department at IBM Research. His main research interests are quantitative biology; machine learning and deep learning in biology; biophysics; and the bio-nano interface (nanomedicine). Dr. Zhou has authored and co-authored more than 300 journal publications (including 32 in Science, Nature, Cell, Nature subjournals, and PNAS) with 24,000+ total citations (Google h-index 81), filed 32 international patents, and delivered 300+ invited talks at major conferences and universities worldwide. He was part of the IBM Blue Gene team that won the 2009 National Medal of Technology and Innovation (presented by President Obama). He has won the IBM Outstanding Technical Achievement Award (the highest technical award within IBM; 10 times), the IBM Outstanding Innovation Award (twice), and many IBM Research Division Awards. He also won the American Chemical Society DEC Award on Computational Chemistry. He was elected an AAAS Fellow (American Association for the Advancement of Science) and an APS Fellow (American Physical Society) in 2011. He received his PhD in Biophysical Chemistry from Columbia University, and his MS in Condensed Matter Physics and BS in Physics from Zhejiang University.
In this presentation, we will share our recent experience with designing and implementing a practical system for training massive-scale language models with billions of parameters.
Deep learning is advancing rapidly, and with it, the size of foundation models is increasing exponentially. However, training these models requires significant GPU resources and power, which can be unaffordable for many academic and industrial research teams. Even for AI teams in large companies, resources are limited, and purchasing and maintaining these devices can be prohibitively expensive. For instance, training a GPT-3 model requires thousands of high-performance A100 GPUs running continuously for three months. Our system, STRONGHOLD, addresses this challenge by dynamically offloading model weights to CPU RAM or other secondary storage and loading them back when needed, minimizing GPU memory requirements. STRONGHOLD also allows data movement and on-GPU computation to overlap, hiding the extra overhead introduced by the offloading mechanism. Compared to state-of-the-art offloading-based solutions, STRONGHOLD improves the trainable model size by 1.9x to 6.5x on a 32GB V100 GPU, with a 1.2x to 3.7x improvement in training throughput. We have successfully deployed STRONGHOLD in production to support large-scale DNN training.
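As a rough illustration of the offloading idea described above (not the actual STRONGHOLD implementation), the following pure-Python sketch keeps all layer weights in a simulated CPU store and prefetches the next layer's weights on a background thread while the current layer computes, so transfer and computation overlap. All names (`cpu_store`, `prefetch`, `train_step`) are invented for this example; a real system would use pinned host memory and CUDA streams.

```python
import threading
from queue import Queue

# Toy model: each "layer" is a weight vector kept in (simulated) CPU RAM.
# Only one layer at a time is ever resident on the (simulated) GPU.
cpu_store = {i: [float(i)] * 4 for i in range(6)}   # offloaded weights

def forward(x, weights):
    # Stand-in for on-GPU computation: a weighted sum.
    return x + sum(weights)

def train_step(x):
    prefetched = Queue(maxsize=1)

    def prefetch(layer_id):
        # Runs on a background thread so the "host-to-device" copy
        # overlaps with computation on the previous layer.
        prefetched.put((layer_id, list(cpu_store[layer_id])))

    threading.Thread(target=prefetch, args=(0,)).start()
    for i in range(6):
        layer_id, weights = prefetched.get()   # wait for the copy to land
        if i + 1 < 6:
            threading.Thread(target=prefetch, args=(i + 1,)).start()
        x = forward(x, weights)                # compute while the next copy runs
    return x

print(train_step(0.0))  # sum over layers of 4*i = 4*(0+1+...+5) = 60.0
```

Because at most one prefetch is outstanding, the queue preserves layer order while still hiding the copy latency behind the compute of the previous layer.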
Professor Jie Xu is Chair of Computing at the University of Leeds, Director of the UK White Rose Grid e-Science Centre, involving the three White Rose Universities of Leeds, Sheffield and York, and Head of the Distributed Systems and Services (DSS) Theme at Leeds. Xu has worked in the field of distributed computing systems for over thirty-five years, engaging closely with industrial leaders such as Alibaba, BAE Systems, JLR, and Rolls-Royce. He received a PhD in Computing Science from the University of Newcastle upon Tyne and was Professor of Distributed Systems at the University of Durham before joining Leeds in 2003.
Professor Xu is an executive member of UKCRC (UK Computing Research Committee) and a Turing Fellow in AI and Data Science. He has served as an academic expert for numerous governments and industries, such as Singapore IDA, Lenovo, UK EPSRC, and UK DTI (InnovateUK). In addition, he has extensive editorial experience, having served as an editor for IEEE Distributed Systems from 2000 to 2005, and currently acting as an associate editor of IEEE Transactions on Parallel and Distributed Systems and ACM Computing Surveys. Professor Xu is a Steering Committee member for several prestigious IEEE conferences, such as SRDS, ISORC, HASE, SOSE, JCC, and CISOSE, as well as serving on the executive board of IEEE TC on BIS. He has also been a General Chair/PC Chair for various IEEE international conferences. With over 300 academic publications, including papers in top-ranked IEEE and ACM Transactions, Professor Xu has received international research prizes, such as the BCS/AT&T Brendan Murphy Prize, and led or co-led more than 20 research projects worth over £25M. He is also the co-founder of two university spin-outs that specialize in data analytics and AI software for optimizing data center performance and in co-simulation and digital twins.
Self-sovereign identity (SSI) is a paradigm shift in how digital identities are controlled and shared by individuals and organizations. SSI promises to empower users with greater privacy, security, and control over their personal data while creating opportunities for new business models and services.
In this talk, the technical components of SSI, such as decentralized identifiers, verifiable credentials, and digital wallets, will be discussed. The way in which these components work together to create a secure, decentralized, and user-centric identity ecosystem will be explored. The privacy implications of SSI will also be examined, including how personal data control is enhanced and selective disclosure of personal information is enabled. Furthermore, the use of SSI in different application domains, for example, education, IoT, and 6G, will be showcased. The importance of collaboration among stakeholders, including industry, policymakers, and the wider community, to realize the full potential of SSI while ensuring privacy, security, and user control will be emphasized.
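To make the selective-disclosure idea concrete, here is a minimal sketch, not a real SSI implementation: the issuer signs salted hashes of the claims, so the holder can reveal a chosen subset and a verifier can check it without ever seeing the rest. A production system would use asymmetric signatures bound to decentralized identifiers rather than the HMAC stand-in used here, and all function names are illustrative.

```python
import hashlib, hmac, json, os

ISSUER_KEY = b"issuer-secret"   # stand-in for an issuer signing key; real SSI
                                # uses asymmetric keys resolvable via a DID

def issue(claims):
    # Commit to each claim with a fresh salt so undisclosed claims stay hidden.
    salts = {k: os.urandom(16).hex() for k in claims}
    digests = {k: hashlib.sha256((salts[k] + ":" + k + "=" + v).encode()).hexdigest()
               for k, v in claims.items()}
    signature = hmac.new(ISSUER_KEY, json.dumps(digests, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {"digests": digests, "signature": signature}, salts

def present(claims, salts, reveal):
    # The holder discloses only the selected claims, together with their salts.
    return {k: (claims[k], salts[k]) for k in reveal}

def verify(credential, presentation):
    # Check the issuer's signature over the full digest list...
    expected = hmac.new(ISSUER_KEY,
                        json.dumps(credential["digests"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False
    # ...then check each disclosed claim against its committed digest.
    return all(hashlib.sha256((salt + ":" + k + "=" + v).encode()).hexdigest()
               == credential["digests"][k]
               for k, (v, salt) in presentation.items())

claims = {"name": "Alice", "degree": "MSc", "birthdate": "1990-01-01"}
cred, salts = issue(claims)
proof = present(claims, salts, reveal=["degree"])   # birthdate stays private
print(verify(cred, proof))   # True
```

The verifier learns that "degree=MSc" was attested by the issuer, while the undisclosed claims remain hidden behind their salted hashes.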
Professor Axel Küpper is a highly accomplished computer science professor with over 30 years of experience in distributed systems and mobile networks. He is currently the head of the chair for Service-centric Networking at Technische Universität Berlin (TU Berlin), where he also leads a team at T-Labs, a public-private partnership between Deutsche Telekom AG and TU Berlin. Before joining TU Berlin, Küpper served as an assistant professor at Ludwig-Maximilians-Universität in Munich, where he earned his post-doctoral degree (Habilitation). He holds a degree in computer science and a Ph.D. from RWTH Aachen University.
Küpper’s research has spanned various areas, with his early work focused on context-aware applications and location-based services. In recent years, he has focused his research on cloud computing, service-oriented architectures, web technologies, and decentralized systems and applications. Küpper is a strong advocate for decentralized technologies, including distributed ledger technologies, token economies, self-sovereign identity, emerging blockchain-based applications, blockchain analytics, and decentralized online social networks. He has been involved in over 50 industry and public-sector projects, either at an operational or management level. Küpper has also authored or co-authored more than 150 peer-reviewed conference and journal papers and has served as technical chair, general chair, and steering committee member for numerous conferences, including IEEE COMPSAC, IEEE CISOSE, IEEE DAPPS, and BRAINS. In addition, Küpper is the co-founder of a startup company specializing in location-based tracking platforms.
Byzantine fault-tolerant consensus algorithms serve as the bedrock for decentralized applications (dApps) and distributed ledgers, providing the necessary foundations for their correctness and continuity, even in the face of hostile and arbitrary failure conditions. However, more than ever, today’s dApps demand scalability, which traditional consensus algorithms often struggle to achieve. Moreover, emerging applications present new requirements that were previously overlooked in consensus designs.
In this talk, we first provide an introductory overview of consensus, highlighting its significance for decentralized systems. We then address the need for scalability by introducing PrestigeBFT, a new algorithm that incorporates a reputation mechanism for each consensus-seeking node. Our design demonstrates an impressive 5X performance improvement over contemporaries.
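PrestigeBFT's actual reputation mechanism is not detailed in this abstract; the toy sketch below only illustrates the general idea of reputation-weighted leader selection, with invented names and scoring rules.

```python
# Nodes gain reputation for correctly driving consensus rounds and lose it
# for faults; the next view's leader is the highest-reputation node.
class ReputationTable:
    def __init__(self, nodes):
        self.score = {n: 1.0 for n in nodes}

    def record(self, node, ok):
        # Reward good behaviour; penalise faults more strongly so a
        # misbehaving node is quickly passed over for leadership.
        self.score[node] *= 1.1 if ok else 0.5

    def next_leader(self):
        # Deterministic tie-break by node id keeps replicas in agreement.
        return max(sorted(self.score), key=lambda n: self.score[n])

table = ReputationTable(["n1", "n2", "n3"])
table.record("n1", ok=False)      # n1 misbehaves and is penalised
table.record("n2", ok=True)
print(table.next_leader())        # n2
```

The key property a real protocol must add is that all correct replicas compute the same scores from the same evidence, so they agree on the leader without extra coordination.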
Furthermore, we introduce VGuard, a novel distributed ledger and consensus algorithm, explicitly designed to support dynamic membership changes. This feature is particularly vital for dApps catering to vehicular networking or low-orbit satellite scenarios, where the set of agreement-seeking nodes experiences frequent fluctuations. VGuard’s peak throughput is up to 22X higher than that of popular contemporaries.
Professor Hans-Arno Jacobsen holds the Jeffrey Skoll Chair in Computer Networking and Innovation at the Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, where he is a professor of Computer Engineering and Computer Science. His pioneering research lies at the intersection of distributed systems, data management, and data science, with a particular focus on blockchains, (complex) event processing, and cyber-physical systems. Most recently, he has become interested in quantum computing, where he is working on applications in molecular property prediction (computational chemistry) and quantum machine learning. Arno is a Fellow of the IEEE.
Supply-chain finance comprises a set of software tools, often referred to as supply-chain accounts-receivable pledge financing, used to optimise cash flow among suppliers and buyers in order to create a cooperative enterprise from which all parties involved can profit. The participants develop trust through this method; fraudsters, however, exploit it to obtain illegal advantages. Graph neural networks (GNNs) have seen extensive application in fraud detection, and we first tackle this problem by proposing a new GNN-based method. However, the camouflaged behaviours of fraudsters lead to a high heterophilic ratio in fraud graphs, whereas GNN-based methods typically follow the homophilic assumption or increase the homophilic ratio through attention and filtering mechanisms. This leads to the attenuation of fraudulent information in the message-passing process and hampers fraud-detection performance. To tackle this issue, we develop a novel feature-separated graph neural network for fraud detection on graphs with heterophily. An aggregation method is designed to aggregate separated features from neighbours, with an edge-classification module that predicts the similarity between connected nodes. Extensive experiments show that fraud can be detected effectively.
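The feature-separated aggregation idea can be illustrated with a toy sketch (not the authors' implementation): a stand-in edge "classifier", here a cosine-similarity threshold in place of a learned module, splits a node's neighbours into similar and dissimilar sets, which are summarized separately so the heterophilic signal is not averaged away.

```python
import math

def cosine(a, b):
    # Cosine similarity, standing in for a learned edge-classification module.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def mean(vectors, dim):
    if not vectors:
        return [0.0] * dim
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def aggregate(node_feat, neighbour_feats, threshold=0.5):
    # Separate homophilic (similar) from heterophilic (dissimilar) neighbours.
    similar = [f for f in neighbour_feats if cosine(node_feat, f) >= threshold]
    dissimilar = [f for f in neighbour_feats if cosine(node_feat, f) < threshold]
    # Concatenate self, homophilic and heterophilic summaries so signal from
    # camouflaged (dissimilar) fraud neighbours is preserved, not diluted.
    d = len(node_feat)
    return node_feat + mean(similar, d) + mean(dissimilar, d)

h = aggregate([1.0, 0.0], [[0.9, 0.1], [-1.0, 0.2]])
print(len(h))   # 6: self + similar mean + dissimilar mean
```

In a plain message-passing layer, the two neighbours above would be averaged into one message and largely cancel out; separating them keeps both signals available to the downstream classifier.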
Dr Kuo-Ming Chao obtained his MSc and PhD degrees from Sunderland University, UK. He is currently affiliated with the University of Roehampton, Bournemouth University, and the National Engineering Laboratory for E-Commerce Technologies (NELECT) at Fudan University. Prior to that, he worked at Coventry University and at the Engineering Design Centre at Newcastle-upon-Tyne University, and between 2007 and 2008 he joined the British Telecom Research Lab as a short-term research fellow. His research interests include intelligent agents, service-oriented computing, cloud/fog computing, and machine learning, as well as their applications in areas such as fintech, e-business, advanced manufacturing, and energy-efficiency management. He has over 200 refereed publications in books, journals, and conference proceedings. He has been actively involved in conferences and workshops as a programme/general/steering chair and as a programme committee member. Currently, he chairs the IEEE Technical Community on Business Informatics and Systems. He is a co-founder and managing editor of the Springer journal Service-Oriented Computing and Applications, which promotes service-oriented computing, and is a member of the editorial boards of several international journals. In addition, he is involved in many EU-funded projects as a coordinator or work-package leader.
Meaningful applications in research, the life sciences, and health require a big-data infrastructure in which organizations and users can operate on top of knowledge graphs that link together many large heterogeneous data sources, describe entities and relationships, and quantify uncertainty. In this talk, three examples of large knowledge graphs produced by Elsevier will be given; through downstream applications, these can serve the scientific research community with queries on scientific impact, funding, and other scientometric and bibliometric aspects, as well as scientists, researchers, and professionals in health, the life sciences, and engineering. State-of-the-art research conducted in close collaboration with several academic institutions on complex query answering over large knowledge graphs will also be presented, with emphasis on addressing missing links and efficiently answering complex queries over incomplete graphs. Throughout the talk, examples will be given of how these technologies serve the scientific communities in real use cases, being utilized in high-throughput Web applications and platforms.
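As a toy illustration of the kind of query involved (using invented triples, unrelated to Elsevier's actual graphs), the following sketch answers a two-hop query by traversal; in the research described above, embedding-based link prediction would additionally rank candidate answers when edges are missing from the graph.

```python
# A tiny knowledge graph as a set of (head, relation, tail) triples.
triples = {("paper1", "authored_by", "alice"),
           ("paper2", "authored_by", "alice"),
           ("alice", "affiliated_with", "uniX")}

def neighbours(entity, relation):
    # All tails reachable from `entity` via `relation`.
    return {t for (h, r, t) in triples if h == entity and r == relation}

def two_hop(start, r1, r2):
    # e.g. "institutions of the authors of `start`" -- a simple complex query.
    return {inst for author in neighbours(start, r1)
                 for inst in neighbours(author, r2)}

print(two_hop("paper1", "authored_by", "affiliated_with"))   # {'uniX'}
```

Pure traversal returns an empty set whenever an intermediate edge is absent, which is exactly the incompleteness problem the embedding-based approaches mentioned in the talk are designed to address.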
Dr. George Tsatsaronis is Vice President of Data Science in the Operations division of Elsevier, Amsterdam, The Netherlands. Prior to joining Elsevier in 2016, he worked in academia for more than 10 years, conducting research and teaching in machine learning, natural language processing, and bioinformatics at universities in Greece, Norway, and Germany. He has published more than 60 scientific articles in high-impact peer-reviewed journals and conference proceedings across various areas of artificial intelligence, primarily natural language processing and text mining. At Elsevier, Dr. Tsatsaronis is responsible for the design, implementation, deployment, and quality assurance of several of Elsevier’s machine learning solutions and capabilities.
In the Internet of Things (IoT), the availability of devices, reliability of communication, Quality of Service (QoS), and security are all essential for the proper functioning of applications. Over time, the state of devices and of the overall network may deteriorate. This is due to the challenging and failure-prone nature of IoT, which consists of a huge number of heterogeneous things that are resource-constrained in terms of memory, communication, energy, and computational capability. To ensure robustness, monitoring the network state and the performance and functioning of nodes and links is crucial, especially for critical applications. Safety-critical applications, such as a distributed fire- or burglar-alarm system, require that all sensor nodes be up and functional. Monitoring is also important for measuring the trust level of nodes when collaboration is needed.
In this talk, we will introduce the concepts above. We will first introduce the Internet of Things, its challenges, and the monitoring concept, and present the research motivations and objectives for monitoring. After surveying state-of-the-art research on monitoring, we will present our theoretical solutions for monitoring the IoT: heuristic, exact, and distributed/dynamic solutions. We will finish the talk with some future directions.
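One classic sub-problem behind such monitoring solutions can be sketched with a simple greedy heuristic (an illustration only, not the speaker's algorithms): choose monitor nodes so that every link is observed by at least one of its endpoints, i.e., a vertex-cover formulation.

```python
def place_monitors(links):
    # Greedy vertex-cover heuristic: repeatedly pick the node incident to
    # the most still-uncovered links until every link has a monitor endpoint.
    uncovered = set(links)
    monitors = set()
    while uncovered:
        degree = {}
        for u, v in uncovered:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        # Deterministic tie-break by node id.
        best = max(sorted(degree), key=lambda n: degree[n])
        monitors.add(best)
        uncovered = {(u, v) for (u, v) in uncovered if best not in (u, v)}
    return monitors

links = [("a", "b"), ("b", "c"), ("b", "d"), ("d", "e")]
print(sorted(place_monitors(links)))   # ['b', 'd']
```

Exact solutions to this problem are NP-hard in general, which is one reason the heuristic, exact, and distributed variants mentioned in the talk are studied separately.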
Professor Abderrahim Benslimane has been a Full Professor of Computer Science at Avignon University, France, since 2001. He is Vice Dean of the Faculty of Sciences and Technology. He was nominated as an IEEE VTS Distinguished Lecturer in 2020 and in 2022, and holds the French award for doctoral supervision and research for 2022-2025. He was an International Expert at the French Ministry of Foreign and European Affairs (2012-2016) and served as coordinator of the Faculty of Engineering and head of the Research Center in Informatics at the French University in Egypt.
He has been appointed IEEE ComSoc Steering Chair of the Multimedia TC for 2022-2024 and is past Chair of the ComSoc Technical Committee on Communications and Information Security (2017-2019). Currently, he is Editor-in-Chief of the Inderscience International Journal of Multimedia Intelligence and Security (IJMIS) and a member of the editorial boards of the IEEE Internet of Things Journal, IEEE Transactions on Multimedia, IEEE Wireless Communications Magazine, IEEE Systems Journal, Elsevier Ad Hoc Networks, and the Springer Wireless Networks journal.
He is a co-founder of IEEE WiMob and has served as its General Chair/Steering Chair since 2005. He served as General Chair of IEEE CNS 2020, Executive Forum Co-Chair at IEEE Globecom 2020, Program Vice-Chair of IEEE TrustCom 2020 and iThings 2020, and symposium co-chair/leader at many IEEE international conferences, such as ICC, Globecom, iCoST, MOWNET, AINA, and VTC.
He has more than 280 refereed international publications (books, conference proceedings, and journal and conference papers) and has edited more than 20 special issues. He has supervised more than 20 Ph.D. theses and more than 42 M.Sc. research theses.
Data is becoming ever more important in scientific research. The Fourth Paradigm is a concept that focuses on how science can be advanced by sharing data. Several activities have been started at regional and national levels to accelerate open science globally.
The European Open Science Cloud is a typical infrastructure organized under the Horizon Europe funding scheme. Similar projects aim to develop shared services for managing and sharing research data in Africa, Australia, Malaysia, South Korea, and Japan. This talk introduces the recent movement around research infrastructure, followed by our current development in Japan.
Professor Kazu Yamaji received his Ph.D. in Systems and Information Engineering from Toyohashi University of Technology, Japan, in 2000. He is currently a professor and the director of the Research Center for Open Science and Data Platform at the National Institute of Informatics (NII), Japan. His primary research interests include modeling and developing a trusted e-science space for sharing and reusing research materials.
Digital and data-intensive science requires infrastructure covering a wide variety of components. Described as the “Web of FAIR data and related services,” the European Open Science Cloud is rapidly becoming Europe’s IT infrastructure and is being implemented to support millions of researchers in all aspects of their research life cycle, including discovering and accessing resources, computing and analysing data, and publishing in a collaborative manner. This presentation will focus on the efforts being made to converge different views and local architectures, while highlighting similarities to and differences from the ongoing efforts of the European Data Spaces infrastructure.
Natalia Manola is the CEO of OpenAIRE AMKE (www.openaire.eu), a non-profit pan-European e-Infrastructure supporting scholarly communication and open science in Europe. Natalia holds a Physics degree from the University of Athens and an MS in Electrical and Computing Engineering from the University of Wisconsin at Madison, and worked for several years as a software engineer and architect in the commercial bioinformatics sector. She has expertise in open science policies and implementation, having served on the EOSC Executive Board (2019-20) and on the Open Science Policy Platform (2016-17), an EC high-level advisory group providing advice on the development and implementation of open science policy in Europe. Her research interests include e-Infrastructure development and management, scientific data management, data curation and validation, and AI-driven research analytics.
The “software crisis,” identified in the late 1960s, has been answered with new programming languages (ALGOL 68, PL/I, Pascal, SIMULA, Java, C++, Python) and new programming paradigms such as object orientation and service orientation. All these proposals essentially focus on the implementation perspective, not on the perspective of the system to be built, the problem to be solved, the persons involved, etc. Attempts to adjust this bias include modeling techniques such as (many versions of) automata, Petri nets, statecharts, UML, BPMN, MSC, EPC, etc. But modeling never became THE general starting point for developing software-embedded systems.
The rise of cyber-physical systems calls for comprehensive modeling infrastructures that bridge the gap from informal ideas to formal descriptions of the integrated behavior of institutions, stakeholders, and software. We suggest such a modeling infrastructure, including universal composition and refinement of modules, representation of real-world items and data, and locally confined dynamic steps.
Professor Wolfgang Reisig is a professor emeritus of the Computer Science Institute of Humboldt-Universitaet zu Berlin, Germany. He served as a visiting professor at Hamburg University, a project manager at Gesellschaft fuer Mathematik und Datenverarbeitung (GMD), and a professor at the Technical University of Munich. Prof. Reisig was a senior researcher at the International Computer Science Institute (ICSI) in Berkeley, California, in 1997, held the “Lady Davis Visiting Professorship” at the Technion, Haifa (Israel) and the Beta Chair of the Technical University of Eindhoven, and twice received an IBM Faculty Award for his contributions to cross-organizational business processes and the analysis of service models. He was the speaker of a PhD school on Service-Oriented Architectures from 2010 to 2017. Prof. Reisig is a member of the European Academy of Sciences, Academia Europaea. He has published and edited numerous books and articles on Petri net theory and applications. He has been a member of the Petri Net Conference Steering Committee since 1982 and is a co-editor of the journal “Software and Systems Modeling.”