Fundamentals of AI Testing
Fundamentals of AI Testing is an introductory course on Artificial Intelligence (AI). It gives you a broad insight into the methods used to test AI-based solutions and into how AI-based solutions can be used to test other IT systems.
Learning objective
After completing the course, you will be able to:
- Understand the current state and expected trends of AI
- Experience the implementation and testing of an ML model and recognize where testers can best influence its quality
- Understand the challenges associated with testing AI-based systems, such as their self-learning capabilities, bias, ethics, complexity, non-determinism, transparency and explainability
- Contribute to the test strategy for an AI-based system
- Design and execute test cases for AI-based systems
- Recognize the special requirements for the test infrastructure to support the testing of AI-based systems
- Understand how AI can be used to support software testing
Target audience
The course is aimed at people who want to extend their understanding of artificial intelligence and machine/deep learning – in particular, testing AI-based systems and using AI to test. Typical profiles include:
- Testers and QA Engineers
- Test Managers
- Data Scientists
- Developers
- Project Managers / Scrum Masters / Product Owners
Prerequisites
There are no official prerequisites to attend the course, but it’s a good idea to have basic knowledge and understanding of the following areas:
- Programming language – Java/Python/R
- Statistics
- Experience with software development and testing
Form
The course combines theoretical review, practical exercises and discussion. There will be a high degree of participant involvement.
Course content
The course covers the following subjects:
1. Introduction to AI
- Definition of AI and AI Effect
- Narrow, General and Super AI
- AI-Based and Conventional Systems
- AI Technologies
- AI Development Frameworks
- Hardware for AI-Based Systems
2. Machine Learning (ML) – Overview
- Forms of ML
- ML Workflow
- Selecting a Form of ML
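To give a feel for the workflow this part covers, here is a minimal sketch of a supervised learning loop using scikit-learn's built-in Iris dataset – an illustration only, not the course's exercise material:

```python
# Minimal supervised ML workflow: load data, split, train, evaluate.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small, well-known dataset (features X, labels y).
X, y = load_iris(return_X_y=True)

# Hold out part of the data so the model is evaluated on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Train a simple model on the training split.
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out test split.
predictions = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))
```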
3. ML – Data
- Data Preparation as Part of the ML Workflow
- Training, Validation and Test Datasets in the ML Workflow
- Dataset Quality Issues
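As a hint of what the data topics look like in practice, here is a minimal sketch of basic dataset quality checks and a training/validation/test split – the data and column names below are made up for illustration:

```python
# Split a dataset into training, validation and test sets, then run
# a few simple quality checks. Data and column names are made up.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "feature_a": [1.0, 2.0, 2.0, 4.0, 5.0, None, 7.0, 8.0, 9.0, 10.0],
    "feature_b": [0.1, 0.2, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    "label":     [0, 0, 0, 0, 0, 0, 0, 1, 1, 1],
})

# Basic dataset quality checks: missing values, duplicates, class balance.
print("Missing values per column:\n", df.isnull().sum())
print("Duplicate rows:", df.duplicated().sum())
print("Class balance:\n", df["label"].value_counts(normalize=True))

# Roughly 70/15/15 split: first carve off the test set, then split the
# remainder into training and validation sets.
train_val, test = train_test_split(df, test_size=0.15, random_state=42)
train, val = train_test_split(train_val, test_size=0.15 / 0.85, random_state=42)
print(len(train), "train /", len(val), "validation /", len(test), "test rows")
```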
4. ML Functional Performance Metrics
- Confusion Matrix – hands-on exercise
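As a small preview of the hands-on exercise, here is a sketch of the kind of calculation it involves – the binary labels and predictions below are made up and are not the course's exercise data:

```python
# Build a 2x2 confusion matrix from made-up binary labels and predictions,
# then derive the usual functional performance metrics from it.
actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1_score  = 2 * precision * recall / (precision + recall)

print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1_score:.2f}")
```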
5. ML – Neural Networks and Testing
- Neural Networks
- Coverage Measures for Neural Networks
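To illustrate what a coverage measure for a neural network can look like, here is a minimal sketch of neuron coverage – the fraction of hidden neurons activated above a threshold by at least one test input. The tiny random network, the inputs and the 0.5 threshold are assumptions chosen purely for illustration:

```python
# Neuron coverage for a tiny feed-forward network: the fraction of hidden
# neurons whose ReLU activation exceeds a threshold for at least one test
# input. Weights, inputs and the 0.5 threshold are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))            # 4 input features -> 8 hidden neurons

def hidden_activations(x):
    """ReLU activations of the hidden layer for one input vector."""
    return np.maximum(0.0, x @ W1)

test_inputs = rng.normal(size=(20, 4))  # 20 made-up test inputs
threshold = 0.5

# A neuron counts as covered if any test input activates it above the threshold.
activations = np.array([hidden_activations(x) for x in test_inputs])
covered = (activations > threshold).any(axis=0)

print(f"Neuron coverage: {covered.sum()}/{covered.size} = {covered.mean():.0%}")
```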
6. Testing AI-Specific Quality Characteristics
- Challenges Testing Complex AI-Based Systems
- Testing the Transparency, Interpretability and Explainability of AI-Based Systems
- Test Oracles for AI-Based Systems
- Testing for Concept Drift
- Selecting a Test Approach for an ML System
- Test Objectives and Acceptance Criteria
- Back-to-Back Testing
- A/B Testing
- Hands-On Exercise: Metamorphic Testing (see the sketch after this list)
- Experience-Based Testing of AI-Based Systems
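As a taste of the metamorphic testing exercise: when there is no exact oracle for an ML model's output, we can still check relations that should hold between related inputs. The toy sentiment classifier and the relation below are made up for illustration and are not the course's exercise material:

```python
# Metamorphic testing of a toy sentiment classifier. There is no exact
# oracle for "the correct sentiment", but we can still check a metamorphic
# relation: repeating the same text should not change the predicted label.
# The classifier below is a deliberately simple stand-in for a real ML model.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def classify(text: str) -> str:
    """Toy sentiment classifier: compare counts of positive/negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def test_metamorphic_relation():
    """MR: duplicating the input text must not change the predicted class."""
    source_inputs = [
        "great product love it",
        "terrible support and poor quality",
        "arrived on time",
    ]
    for text in source_inputs:
        follow_up = text + " " + text   # the transformed (follow-up) input
        assert classify(text) == classify(follow_up), text

test_metamorphic_relation()
print("Metamorphic relation holds for all source inputs")
```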
7. Using AI for Testing
- Using AI for Defect Prediction (see the sketch after this list)
- Using AI for Testing User Interfaces
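As a glimpse of how AI can support testing, here is a minimal sketch of defect prediction: training a classifier on historical code metrics to flag modules that are likely to contain defects. The metrics, numbers and model choice are assumptions for illustration only:

```python
# Defect prediction sketch: train a classifier on historical code metrics
# (lines of code, recent changes, complexity) to flag modules that are
# likely to contain defects. All numbers below are made up for illustration.
from sklearn.tree import DecisionTreeClassifier

# Historical modules: [lines_of_code, recent_changes, cyclomatic_complexity]
X_history = [
    [120,  2,  4], [950, 14, 22], [300,  1,  6], [780, 20, 18],
    [150,  3,  5], [640,  9, 15], [200,  0,  3], [870, 17, 25],
]
y_history = [0, 1, 0, 1, 0, 1, 0, 1]   # 1 = defects were found in the module

model = DecisionTreeClassifier(random_state=0)
model.fit(X_history, y_history)

# Flag new modules so testing effort can focus on the riskiest ones first.
X_new = [[500, 11, 16], [90, 1, 2]]
print("Predicted defect-proneness:", model.predict(X_new))
```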
If you have any questions, please contact:
- Charlotte Heimann
- Senior Specialist
- +45 72203147