IEEE CISOSE 2023

IEEE CISOSE 2023 – Tutorial on Testing and Automation for Intelligent Chatbot Systems

Speaker: Jerry Gao, Ph.D., Professor, San Jose State University, USA

Athens, Greece, 7/17/2023

Jerry Gao, Professor, Computer Engineering Department and Applied Data Science Department, San Jose State University

Background:
According to a recent market analysis by Polaris Market Research (at
https://www.polarismarketresearch.com/industry-analysis/automation-testing-market), the global automation testing market was valued at USD 20.70 billion in 2021 and is expected to grow at a CAGR of 19.0% during the forecast period. With the fast advance of machine learning models and AI technologies, more and more smart-machine-based intelligent systems, applications, and chatbots are being developed for real-life deployment. Before these intelligent systems are deployed, it is critical for intelligent system testers, quality assurance engineers, and the younger generation of engineers to understand the issues, challenges, and needs, as well as the state-of-the-art AI testing tools and solutions, in testing and quality assurance for modern intelligent systems, smart mobile apps, and smart machines (smart robots, driverless AVs, and intelligent UAVs). With the great popularity of ChatGPT in the business market, many people have started to pay attention to the quality of AI application systems and their deployment.

Why are AI quality testing and automation important?
Current and future intelligent systems are developed through big-data training and powered by AI and machine learning models. These intelligent features and functions bring new issues and challenges to today’s software testing and quality assurance teams, for the following reasons:

  1. It is hard to establish quality test requirements.
    • Lack of well-defined quality testing and assurance programs and standards.
    • Most current QA engineers and testers are not well trained in AI and machine learning.
    • Lack of well-defined AI system requirements analysis and modeling methods.
  2. It is not easy to define adequate test sets.
    • Lack of well-defined test models and methods to help testers and QA engineers define and select adequate test sets, because most AI-powered system functions (or components) are trained on big data using diverse machine learning models.
    • Most existing software testing methods were developed for conventional software, without considering the special features and needs of intelligent systems.
  3. What is adequate test coverage for intelligent system functions and features, and when should testing stop?
    • Lack of well-defined adequate test coverage criteria, quality assurance programs, and standards for intelligent systems.
    • The continuous learning features of modern intelligent systems bring new challenges and needs.
  4. How can large-scale test results be validated automatically? (See the sketch after this list.)
    • AI-based functions may introduce uncertainty into system results.
    • The high diversity of test results and system outputs brings new challenges and needs in test automation.
  5. It is hard to find test tools that support rich-media inputs and outputs.
    • AI-powered intelligent systems usually accept multi-modal inputs in text, image, audio, and video.
    • Current software testing tools and solutions do not support rich-media input data or rich-media output validation.
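To make the validation challenge in item 4 concrete, the sketch below checks a batch of outputs against a metamorphic relation (paraphrased questions should yield equivalent answers), so no per-case expected result is needed. It is a minimal, hypothetical illustration: `ask` stands in for the chatbot under test and `equivalent` for a real semantic-similarity check; neither comes from any tool covered in the tutorial.

```python
# A minimal sketch of automated test-result validation via a metamorphic
# relation. Everything here is a hypothetical placeholder: `ask` stands in
# for the chatbot under test, `equivalent` for a real semantic check.

def ask(question: str) -> str:
    """Stand-in for the chatbot under test; replace with a real client call."""
    return "Answer about: " + question.lower().rstrip("?")

def equivalent(a: str, b: str) -> bool:
    """Crude token-overlap similarity; a real harness would use a semantic
    similarity model (e.g., sentence embeddings) instead."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1) >= 0.5

# Metamorphic relation: paraphrased questions should yield equivalent answers,
# so large batches of outputs can be checked without a per-case oracle.
paraphrase_pairs = [
    ("What is your refund policy?", "Can you explain your refund policy?"),
    ("How do I reset my password?", "What are the steps to reset my password?"),
]

violations = [(q1, q2) for q1, q2 in paraphrase_pairs
              if not equivalent(ask(q1), ask(q2))]
print(f"{len(violations)} metamorphic violations out of {len(paraphrase_pairs)}")
```

Even in this toy run, the crude overlap check flags one of the two pairs, which hints at why automatically validating highly diverse outputs (items 4 and 5 above) is genuinely hard; a production harness would substitute a real chatbot client and an embedding-based similarity model.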

Who should attend this tutorial?
Test engineers, quality assurance engineers, and managers who are responsible for quality testing and assurance of modern intelligent systems and AI-powered smart mobile and online applications, such as smart chatbot systems, as well as researchers and students who are interested in AI system testing and quality assurance.
What will you learn from this tutorial? What does it cover?

Table of contents (outline):

  • Introduction
  • What to test for intelligent chatbot systems?
  • Quality testing process and methods
  • AI test modeling for intelligent chatbot systems
  • Test generation and data augmentation for intelligent chatbot systems
  • Test result validation for intelligent chatbot systems
  • Test automation for intelligent chatbot systems
  • Quality evaluation metrics and test coverage
Short Bio

Dr. Jerry Gao is a professor in the Computer Engineering Department and the Applied Data Science Department at San Jose State University. His research interests include Smart Machine Cloud Computing and AI, Smart Cities, Green Energy Cloud and AI Services, AI Test Automation, and Big Data Cyber Systems and Intelligence. He has published three technical books and over 300 publications in IEEE/ACM journals, magazines, and international conferences. His research work has received more than 8,200 citations on Google Scholar and over 300,000 reads on ResearchGate. Since 2020, Dr. Gao has served as the chair of the steering committee board for the IEEE International Congress on Intelligent Service-Oriented Systems Engineering (IEEE CISOSE) and of the steering committee board for the IEEE Smart World Congress. He has over 25 years of academic research and teaching experience and over 10 years of industry work and management experience in software engineering and IT application development.

Dr. Gao and his group have published over 12 research papers on AI testing and automation for modern intelligent systems. Since 2019, Dr. Gao has worked with Dr. Hong Zhu to establish the IEEE AITest international conference, which has been delivered successfully every year from 2019 to 2023.

Over the last 10 years, Dr. Gao has been a key organizer of several IEEE international conferences and workshops, including IEEE CISOSE 2021-2023, IEEE AITest 2021, IEEE BigDataService 2020, IEEE Smart World Congress 2017, IEEE Smart City Innovation 2017, SEKE 2010-2011, IEEE MobileCloud 2013, and IEEE SOSE 2010-2011.

Jerry Gao’s Google Scholar: https://scholar.google.com/citations?user=vMi9grgAAAAJ&hl=en

Jerry Gao’s ResearchGate: https://www.researchgate.net/profile/Jerry-Gao

IEEE CISOSE 2023 – Tutorial on Domain Generalization

Speaker: Christos Diou, Ph.D., Assistant Professor, Harokopio University of Athens, Greece

Athens, Greece, 7/17/2023

Christos Diou, Ph.D., Assistant Professor, Dept. of Informatics and Telematics, Harokopio University of Athens

Abstract:
Statistical machine learning assumes that both training and unseen (test) data samples are independent and identically distributed (i.i.d.). Violation of this fundamental assumption can lead to poor generalization, i.e., a significant gap in model effectiveness between the training and test sets. Unfortunately, this problem is prevalent in practice, where the i.i.d. assumption often does not hold, hindering the adoption and use of machine learning models in real-world applications. Domain generalization methods attribute poor generalization to the fact that models learn spurious, domain-specific features instead of class-specific characteristics. When the model is applied to different, unseen domains in the test set, it fails to work as expected. This tutorial aims to provide a brief introduction to the domain generalization (DG) problem. We will define DG, provide an overview of the most important recent methods proposed in the literature, and explore commonly used datasets and benchmarks, including recently proposed benchmarks in the healthcare setting. Through this tutorial, researchers will gain the introductory context and knowledge needed to start working on this important, emerging problem.
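For readers new to the area, the setup described above can be stated formally in the notation commonly used in the DG literature. This is standard background added for context, not material taken from the tutorial itself:

```latex
% Training data come from K observed source domains with distinct distributions:
\[
  S \;=\; \bigcup_{k=1}^{K} \bigl\{ (x_i^{(k)}, y_i^{(k)}) \bigr\}_{i=1}^{n_k},
  \qquad (x_i^{(k)}, y_i^{(k)}) \sim P_k .
\]
% Domain generalization seeks a predictor f that performs well on an unseen
% target domain P_T, where P_T differs from every training distribution P_k:
\[
  \min_{f} \; \mathbb{E}_{(x,y) \sim P_T} \bigl[ \ell(f(x), y) \bigr],
\]
% whereas empirical risk minimization only controls the pooled training risk,
% which is why spurious, domain-specific features can dominate.
```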

More information and slides for this tutorial are available at https://bds-dgtutorial.github.io/

Short Bio

Dr. Christos Diou is an Assistant Professor of Artificial Intelligence and Machine Learning at the Department of Informatics and Telematics, Harokopio University of Athens. He received his Diploma in Electrical and Computer Engineering and his PhD in Analysis of Multimedia with Machine Learning from the Aristotle University of Thessaloniki. He has co-authored over 80 publications in international scientific journals and conferences and is the co-inventor of one patent. His recent research interests include robust machine learning algorithms that generalize well, the interpretability of machine learning models, and the development of machine learning models for the estimation of causal effects from observational data. He has over 15 years of experience participating in and leading European and national research projects, focusing on applications of artificial intelligence in healthcare.

IEEE CISOSE 2023 – Tutorial on Enriching Development Platforms for Function as a Service Frameworks

Speaker: George Kousiouris, Ph.D., Assistant Professor, Harokopio University of Athens, Greece

Athens, Greece, 7/17/2023

George Kousiouris, Assistant Professor, Dept. of Informatics and Telematics, Harokopio University of Athens

Abstract:
Function as a Service (FaaS) has enabled modular development as well as cloud-native implementation of applications, abstracting away issues such as elasticity and systems management through its serverless approach. Breaking applications down into functions can lead to more fine-grained development, while on-demand execution can significantly reduce costs. However, it also makes it harder to manage the entire application, group functions into higher-level building blocks, and create large workflows with the needed combinatorial logic.
In this talk we will introduce the H2020 PHYSICS FaaS Design Environment and present various issues around function and workflow development, links to internal DevOps processes, and the workflow and function concurrency overheads that appear. The use of visual flow-based programming techniques will be presented as a means to enhance the process of FaaS application creation. Pattern-based development will be discussed, along with a set of indicative pattern prototypes for the FaaS domain and beyond. Challenges arising from the use of asynchronous APIs and increasingly distributed function execution, in combination with FaaS limitations, will be identified. The investigation of performance issues will be highlighted, along with means to annotate functions for easier cloud/edge trade-offs and placement decisions, and to embed other semantics and annotations needed for a given function. Different function orchestration approaches, as well as their benefits and drawbacks, will be analyzed. Finally, applications of this process in various scenarios, such as smart agriculture and e-health cases, will be discussed.
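As background on the programming model the talk builds on, here is a minimal function sketch written in Apache OpenWhisk’s Python action convention (a generic illustration only: the parameter names and the irrigation logic are invented, and the PHYSICS design environment composes such functions through visual flows rather than raw code):

```python
# A minimal FaaS handler in Apache OpenWhisk's Python action convention.
# The parameter names and thresholding logic are invented for illustration.

def main(args: dict) -> dict:
    """Stateless handler: input arrives as a dict, output is returned as a dict.

    The platform may run many instances concurrently and tear them down
    between invocations, so no state is kept outside this call.
    """
    reading = float(args.get("soil_moisture", 0.0))   # e.g., a smart-agriculture sensor value
    threshold = float(args.get("threshold", 30.0))
    return {"irrigate": reading < threshold, "reading": reading}
```

Real applications chain many such stateless handlers into workflows, which is precisely where the grouping, orchestration, and concurrency-overhead questions raised in the abstract appear.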

Short Bio

Dr. George Kousiouris is an Assistant Professor at the Department of Informatics and Telematics of Harokopio University of Athens. He received his Dipl. Eng. in Electrical and Computer Engineering from the University of Patras, Greece, in 2005 and his Ph.D. in Cloud Computing from the Telecommunications Laboratory of the Dept. of Electrical and Computer Engineering of the National Technical University of Athens in 2012. He has participated in numerous EU-funded projects, such as H2020 PHYSICS, H2020 BigDataStack, H2020 CloudPerfect, H2020 SLALOM, FP7 COSMOS, FP7 ARTIST, FP7 OPTIMIS, and FP7 IRMOS, as well as national projects. His interests are mainly in cloud services, performance evaluation and benchmarking, Service Level Agreements, IoT platforms, and service-oriented architectures.

IEEE CISOSE 2023 – Tutorial on The Datamorphic Testing Methodology: Principles, Tools and Applications to Machine Learning

Speaker: Prof. Hong Zhu, Oxford Brookes University, Oxford, UK

Athens, Greece, 7/17/2023

Hong Zhu, Professor, School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford

Abstract:
The datamorphic testing methodology regards software testing as an engineering process in which a test system is developed, maintained, evolved, and operated to achieve software testing purposes. It defines a software test system as consisting of a set of test entities and test morphisms, where the former are the objects, data, documents, etc., created, used, and managed during the testing process, while the latter are the operators and transformers on the test entities. Typical examples of test morphisms include test case generators, data augmentations (which are called datamorphisms in datamorphic testing and play a significant role), test oracles (which are called metamorphisms), test adequacy metrics, etc.

One of the most important principles of the datamorphic testing methodology is that a test system should be explicitly defined and implemented (especially when testing is complicated and expensive), so that testing is of high quality and conducted effectively and efficiently, and so that the testing resources represented and embodied in test systems can be reused and evolved efficiently as valuable assets. Research has demonstrated that when such a test system is implemented and maintained effectively, test automation can be achieved at three different abstraction levels. At the activity level, testing actions can be performed by invoking test morphisms. At the strategy level, test strategies can be formally defined as algorithms with test entities and test morphisms as parameters and applied by invoking the corresponding algorithms. At the process level, the activities and the application of strategies can be recorded to form test scripts, which can be edited and replayed. Since such test scripts are at a higher level of abstraction than traditional test scripts, they are more reusable and less fragile to modifications of the software under test.

The principles of the datamorphic testing methodology have been applied to a number of testing problems for machine learning applications, including confirmatory testing of ML models, exploratory testing of ML classifiers, and scenario-based functional testing to improve ML model performance. The tutorial will consist of three parts: (1) the principles and basic concepts of the datamorphic testing methodology, (2) the automated testing tool and test environment Morphy that supports the methodology, and (3) applications to machine learning models.
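To ground the terminology, the sketch below expresses the core concepts (test entities, a datamorphism, a metamorphism, and a strategy parameterized by both) in Python. It is an illustration of the ideas only; Morphy itself is a Java tool, and every name here is invented rather than taken from its API.

```python
# Illustrative Python sketch of datamorphic testing concepts
# (Morphy itself is a Java tool; all names here are invented for illustration).

# -- Test entities: seed test cases for a toy sentiment classifier under test.
seeds = ["the service was excellent", "the food arrived cold"]

def classify(text: str) -> str:
    """Stand-in for the ML model under test."""
    return "positive" if "excellent" in text else "negative"

# -- A datamorphism: derives new test cases from existing ones.
def append_noise(text: str) -> str:
    return text + " ... !!!"

# -- A metamorphism: an oracle relating outputs on original and mutated inputs.
def label_preserved(original: str, mutated: str) -> bool:
    return classify(original) == classify(mutated)

# -- Strategy level: a test strategy is an algorithm taking test entities and
#    test morphisms as parameters, here a simple "mutate and check" loop.
def mutate_and_check(entities, datamorphism, metamorphism):
    return [e for e in entities if not metamorphism(e, datamorphism(e))]

violations = mutate_and_check(seeds, append_noise, label_preserved)
print(f"{len(violations)} metamorphism violations found")
```

The last two lines show the strategy level of automation: swapping in a different datamorphism or metamorphism reuses the same `mutate_and_check` strategy unchanged.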

Short Bio

Dr. Hong Zhu is a professor of computer science at Oxford Brookes University, Oxford, UK, where he chairs the Cloud Computing and Cybersecurity Research Group. He obtained his BSc, MSc, and PhD degrees in Computer Science from Nanjing University, China, in 1982, 1984, and 1987, respectively. He was a faculty member of Nanjing University from 1987 to 1998. He joined Oxford Brookes University in November 1998 as a senior lecturer in computing and became a professor in October 2004. His research interests are in the area of software development methodologies, including software engineering for cloud computing and for intelligent systems, formal methods, software design, programming languages and automated tools, and software modelling and testing. He has published 2 books and more than 190 research papers in journals and international conferences. He is a senior member of the IEEE and a member of the British Computer Society and the ACM.