
Tutorials

Jerry Gao, Professor

San Jose State University, USA
Computer Engineering Department and Applied Data Science Department, San Jose State University
Director of the Research Center of Smart Technology and Systems
Co-Founder and CTO of ALPS-Touchtone, Inc.

Dr. Jerry Gao is a professor at San Jose State University in the Computer Engineering Department and the Applied Data Science Department. His current research interests include Smart Machine Cloud Computing and AI, Smart Cities, Green Energy Cloud and AI Services, AI Test Automation, and Big Data Cyber Systems and Intelligence. He has published three technical books, including the first book on object-oriented software testing (1998) and Testing and Quality Assurance for Component-Based Software, the first book on testing component-based software systems, along with 320 publications in IEEE/ACM journals, magazines, and international conferences. His research work has received over 88K citations on Google Scholar and over 330K reads on ResearchGate. Since 2020, Dr. Gao has served as the chair of the steering committee board for the IEEE International Congress on Intelligent Service-Oriented Systems Engineering (IEEE CISOSE) and of the steering committee board for the IEEE Smart World Congress. He has over 25 years of academic research and teaching experience and over 10 years of industry development and management experience in software engineering and IT applications.

Testing and Automation for Intelligent Computer Vision and Applications

Background:
According to a recent market analysis by Global Market Insights (GMI), the global automation testing market is anticipated to exceed USD 80 billion by 2032. With rapid advances in machine learning models and AI technologies, more and more intelligent systems and applications, including smart computer vision systems, are being developed for real-world deployment.

Before these intelligent systems are deployed, it is critical for intelligent system testers, quality assurance engineers, and the next generation of engineers to understand the issues, challenges, and needs, as well as the state-of-the-art AI testing tools and solutions, in testing and quality assurance for modern intelligent systems, smart mobile apps, and smart machines (smart robots, driverless autonomous vehicles, and intelligent UAVs). With the surge of interest in ChatGPT in the business market, many people have begun to pay attention to the quality of AI application systems and their deployment.

Why is quality AI testing and automation of computer vision important?
Today, many intelligent computer vision systems are trained on computer vision big data and built with data-driven computer vision models. There are two types of computer vision data: a) object-oriented computer vision photos and b) document-based images. Testing engineers and quality assurance staff encounter many challenges in the testing and automation of computer vision systems and applications, because intelligent features and AI-powered functions raise new issues for testing and quality assurance, as outlined below:

How to establish test requirements, validation models, and quality assurance standards for computer vision systems/applications?
• Lack of well-defined quality testing and analysis models and quality requirement specification approaches.
• Lack of well-defined quality assurance standards for computer vision system analysis and modeling methods.

Where are the cost-effective quality validation methods for computer vision systems/applications?
• Current software validation methods do not adequately support computer vision systems because they were not designed to address the demands and needs of such systems.
• There is a lack of well-defined quality validation methods for computer vision systems.

High costs to define and generate adequate test sets for computer vision
• Lack of well-defined test models and methods to help testers and QA engineers define and select adequate test sets, because most AI-powered system functions (or components) are trained on big data using diverse machine learning models (one common, low-cost response is sketched after this list).
• Most existing software system test methods were developed for conventional software without considering the special features and needs of intelligent systems.
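
To make the test-set challenge concrete: one common, low-cost way to enlarge a test set is to derive new tests from a small pool of seed images via transformations that should preserve the expected label, and flag any prediction change (a metamorphic-style consistency check). The Python sketch below is illustrative only; predict is a hypothetical stand-in for a trained vision model, and the assumption that a horizontal flip or a small brightness shift preserves the class is part of the example, not a general rule.

import numpy as np

# Hypothetical stand-in for a trained computer vision model: maps an
# H x W x 3 uint8 image to a class label. A toy brightness classifier
# is used here so the sketch runs end to end.
def predict(image):
    return int(image.mean() > 127)

# Transformations assumed (for this example) to preserve the expected label.
def flip_horizontal(image):
    return image[:, ::-1, :]

def shift_brightness(image, delta=5):
    return np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def consistency_failures(seed_images, transforms):
    """Report seeds whose prediction changes under a supposedly label-preserving transform."""
    failures = []
    for i, image in enumerate(seed_images):
        base = predict(image)
        for transform in transforms:
            if predict(transform(image)) != base:
                failures.append((i, transform.__name__))
    return failures

rng = np.random.default_rng(0)
seeds = [rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8) for _ in range(20)]
print(consistency_failures(seeds, [flip_horizontal, shift_brightness]))

Each reported pair identifies a seed image and the transformation that exposed an inconsistency; such cases can then be added to the test set or triaged as potential model defects.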

Hong Zhu, Professor

School of Engineering, Computing and Mathematics
Oxford Brookes University, UK

Dr. Hong Zhu is a professor of computer science at Oxford Brookes University, Oxford, UK, where he chairs the Cloud Computing and Cybersecurity Research Group. He obtained his BSc, MSc, and PhD degrees in Computer Science from Nanjing University, China, in 1982, 1984, and 1987, respectively. He was a faculty member of Nanjing University from 1987 to 1998 and joined Oxford Brookes University in November 1998. His research interests are in software development methodologies, including software engineering for cloud-native applications and intelligent systems, software design, programming languages and automated tools, and software modelling and testing. He has published two books and more than 200 research papers in journals and international conferences. He is a senior member of the IEEE and a member of the British Computer Society and the ACM.

Datamorphic Testing: Principles, Tools and Applications to Machine Learning

The datamorphic testing methodology regards software testing as a systems engineering process in which a test system is developed, maintained, evolved, and operated to achieve software testing purposes. It defines a software test system as a set of test entities and test morphisms: the former are the objects, data, documents, etc. created, used, and managed during the testing process, while the latter are the operators and transformers on the test entities. Typical examples of test morphisms include test case generators, data augmentations (called datamorphisms), test oracles (called metamorphisms), and test adequacy metrics. One of the most important principles of the datamorphic testing methodology is that a test system should be explicitly defined and implemented, especially when testing is complicated and expensive. The principles of the methodology have been applied to a number of testing problems for machine learning applications, including confirmatory testing of ML models such as face recognition and object identification in autonomous vehicle perception, exploratory testing of ML classifiers such as evaluating robustness and adversarial attacks, and scenario-based functional testing for improving ML model performance.
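
As a concrete, deliberately simplified illustration of these concepts, the following Python sketch renders test entities, datamorphisms, and metamorphisms as plain data and functions. Morphy itself is implemented in Java, so nothing below reflects its actual API; every name and signature is an assumption made for the example.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TestCase:
    """A test entity: an input plus the observed output of the system under test."""
    input: object
    output: object = None

# A datamorphism transforms existing test cases into new ones (data augmentation).
Datamorphism = Callable[[TestCase], TestCase]

# A metamorphism serves as a test oracle: a predicate over related test cases.
Metamorphism = Callable[[TestCase, TestCase], bool]

def execute(system: Callable, case: TestCase) -> None:
    case.output = system(case.input)

def mutant_strategy(system: Callable,
                    seeds: List[TestCase],
                    datamorphisms: List[Datamorphism],
                    metamorphisms: List[Metamorphism]) -> List[Tuple]:
    """A simple test strategy: apply each datamorphism to each seed test
    case and check every metamorphism on the (seed, mutant) pair."""
    failures = []
    for seed in seeds:
        execute(system, seed)
        for d in datamorphisms:
            mutant = d(seed)
            execute(system, mutant)
            for m in metamorphisms:
                if not m(seed, mutant):
                    failures.append((seed.input, d.__name__, m.__name__))
    return failures

# Usage sketch: check that a toy tokenizer ignores surrounding whitespace.
def pad_whitespace(tc: TestCase) -> TestCase:        # a datamorphism
    return TestCase(input="  " + tc.input + "  ")

def same_output(a: TestCase, b: TestCase) -> bool:   # a metamorphism
    return a.output == b.output

seeds = [TestCase("hello world"), TestCase("a b c")]
print(mutant_strategy(str.split, seeds, [pad_whitespace], [same_output]))  # -> []

Here the test system is explicit: the seeds, the datamorphism, the metamorphism, and the strategy are all first-class artefacts that can be maintained and reused as the software under test evolves.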


In this tutorial, with examples of testing machine learning applications, we will learn how to develop such a test system so that testing is of high quality and is conducted effectively and efficiently. In particular, we will learn how to represent and implement testing resources in test systems with the support of Morphy, an automated datamorphic testing tool. We will also demonstrate how to achieve test automation with Morphy at three levels of abstraction. At the activity level, testing actions can be performed by invoking test morphisms. At the strategy level, test strategies can be formally defined as algorithms with test entities and test morphisms as parameters and applied by invoking the corresponding algorithms. At the process level, the activities and the applications of strategies can be recorded to form test scripts, which can be edited and replayed. Since such test scripts are at a higher level of abstraction than traditional test scripts, they are more reusable and less fragile with respect to modifications of the software under test.
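
As a rough illustration of the process level, a recorded script can be represented as a sequence of named operations with their arguments, which can later be edited and replayed against the registered morphisms and strategies. This is only a sketch of the idea; the class and method names below are invented for the example and do not correspond to Morphy's API.

from typing import Any, Callable, Dict, List, Tuple

class TestProcess:
    """Records test activities as a replayable script (illustrative only)."""

    def __init__(self, operations: Dict[str, Callable[..., Any]]):
        self.operations = operations            # registered test morphisms and strategies
        self.script: List[Tuple[str, tuple]] = []

    def perform(self, name: str, *args) -> Any:
        # Activity level: invoke a named test morphism and record the step.
        self.script.append((name, args))
        return self.operations[name](*args)

    def replay(self, script: List[Tuple[str, tuple]]) -> None:
        # Process level: re-run a recorded (and possibly edited) script.
        for name, args in script:
            self.operations[name](*args)

Because the script refers to operations by name rather than to concrete details of the software under test, a recorded process can be adapted to a modified system by updating the registered operations.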


The tutorial will consist of three parts: (1) the principles and basic concepts of the datamorphic testing methodology, (2) Morphy, the automated testing tool and test environment that supports the methodology, and (3) its applications to machine learning models.