Download ISTQB.CT-AI.VCEplus.2025-02-16.28q.tqb


File Info

Exam: Certified Tester AI Testing
Number: CT-AI
File Name: ISTQB.CT-AI.VCEplus.2025-02-16.28q.tqb
Size: 194 KB
Posted: Feb 16, 2025

How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Demo Questions

Question 1

Which ONE of the following is the BEST option to optimize the regression test selection and prevent the regression suite from growing large?


  1. Identifying suitable tests by looking at the complexity of the test cases.
  2. Using a random subset of tests.
  3. Automating test scripts using AI-based test automation tools.
  4. Using an AI-based tool to optimize the regression test suite by analyzing past test results.
Correct answer: D
Explanation:
A. Identifying suitable tests by looking at the complexity of the test cases.
While complexity analysis can help in selecting important test cases, it does not directly address the issue of optimizing the entire regression suite effectively.
B. Using a random subset of tests.
Randomly selecting test cases may miss critical tests and does not ensure an optimized regression suite. This approach lacks a systematic method for ensuring comprehensive coverage.
C. Automating test scripts using AI-based test automation tools.
Automation helps in running tests efficiently but does not inherently optimize the selection of tests to prevent the suite from growing too large.
D. Using an AI-based tool to optimize the regression test suite by analyzing past test results.
This is the most effective approach, as AI-based tools can analyze historical test data, identify patterns, and prioritize tests that are more likely to catch defects based on past results. This method ensures an optimized and manageable regression test suite by focusing on the most impactful test cases.
Therefore, the correct answer is D: using an AI-based tool to analyze past test results is the best option to optimize regression test selection and manage the size of the regression suite effectively.
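For illustration, here is a minimal Python sketch of the idea behind option D: mining past test results to rank regression tests by historical failure rate. The data format, function names, and scoring rule are illustrative assumptions, not a specific tool's API.

```python
from collections import defaultdict

def prioritize(test_runs, budget):
    """test_runs: iterable of (test_id, passed) tuples from past executions.
    Returns the `budget` tests ranked highest by historical failure rate,
    a crude proxy for defect-detection likelihood."""
    stats = defaultdict(lambda: [0, 0])          # test_id -> [failures, runs]
    for test_id, passed in test_runs:
        stats[test_id][1] += 1
        if not passed:
            stats[test_id][0] += 1
    ranked = sorted(stats, key=lambda t: stats[t][0] / stats[t][1], reverse=True)
    return ranked[:budget]

history = [("t1", True), ("t2", False), ("t2", True), ("t3", False), ("t3", False)]
print(prioritize(history, budget=2))             # ['t3', 't2']
```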



Question 2

Pairwise testing can be used in the context of self-driving cars for controlling an explosion in the number of combinations of parameters.
Which ONE of the following options is LEAST likely to be a reason for this incredible growth of parameters?


  1. Different Road Types
  2. Different weather conditions
  3. ML model metrics to evaluate the functional performance
  4. Different features like ADAS, Lane Change Assistance etc.
Correct answer: C
Explanation:
Pairwise testing is used to handle the large number of combinations of parameters that can arise in complex systems like self-driving cars. The question asks which of the given options is least likely to be a reason for the explosion in the number of parameters.
Different Road Types (A): Self-driving cars must operate on various road types, such as highways, city streets, rural roads, etc. Each road type can have different characteristics, requiring the car's system to adapt and handle different scenarios. Thus, this is a significant factor contributing to the growth of parameters.
Different Weather Conditions (B): Weather conditions such as rain, snow, fog, and bright sunlight significantly affect the performance of self-driving cars. The car's sensors and algorithms must adapt to these varying conditions, which adds to the number of parameters that need to be considered.
ML Model Metrics to Evaluate Functional Performance (C): While evaluating machine learning (ML) model performance is crucial, it does not directly contribute to the explosion of parameter combinations in the same way that road types, weather conditions, and car features do. Metrics are used to measure and assess performance but are not themselves variable conditions that the system must handle.
Different Features like ADAS, Lane Change Assistance, etc. (D): Advanced Driver Assistance Systems (ADAS) and other features add complexity to self-driving cars. Each feature can have multiple settings and operational modes, contributing to the overall number of parameters.
Hence, the least likely reason for the incredible growth in the number of parameters is C. ML model metrics to evaluate the functional performance.
ISTQB CT-AI Syllabus Section 9.2 on Pairwise Testing discusses the application of this technique to manage the combinations of different variables in AI-based systems, including those used in self-driving cars.
Sample Exam Questions document, Question #29 provides context for the explosion in parameter combinations in self-driving cars and highlights the use of pairwise testing as a method to manage this complexity.
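The sketch below (plain Python with invented parameter values) illustrates the explosion the question refers to: the full Cartesian product grows multiplicatively with every added parameter, which is exactly what pairwise testing is designed to contain.

```python
from itertools import product

# Invented example parameters for a self-driving scenario.
parameters = {
    "road_type": ["highway", "city", "rural"],
    "weather": ["clear", "rain", "snow", "fog"],
    "feature": ["ADAS", "lane_change_assist", "parking_assist"],
}

full = list(product(*parameters.values()))
print(len(full))  # 3 * 4 * 3 = 36 full combinations

# Pairwise testing only requires every pair of values to appear together at
# least once; the suite size is bounded below by the product of the two
# largest domains (4 * 3 = 12 here) and grows far more slowly than the full
# Cartesian product as parameters are added.
```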



Question 3

Which ONE of the following statements correctly describes the importance of flexibility for AI systems?


  1. AI systems are inherently flexible.
  2. AI systems are required to change operational environments; therefore, flexibility is required.
  3. Flexible AI systems allow for easier modification of the system as a whole.
  4. Self-learning systems are expected to deal with new situations without explicitly having to be programmed for them.
Correct answer: C
Explanation:
Flexibility in AI systems is crucial for various reasons, particularly because it allows for easier modification and adaptation of the system as a whole.
AI systems are inherently flexible (A): This statement is not correct. While some AI systems may be designed to be flexible, they are not inherently flexible by nature. Flexibility depends on the system's design and implementation.
AI systems require changing operational environments; therefore, flexibility is required (B): While it's true that AI systems may need to operate in changing environments, this statement does not directly address the importance of flexibility for the modification of the system. 
Flexible AI systems allow for easier modification of the system as a whole (C): This statement correctly describes the importance of flexibility. Being able to modify AI systems easily is critical for their maintenance, adaptation to new requirements, and improvement.
Self-learning systems are expected to deal with new situations without explicitly having to program for it (D): This statement relates to the adaptability of self-learning systems rather than their overall flexibility for modification.
Hence, the correct answer is C. Flexible AI systems allow for easier modification of the system as a whole.
ISTQB CT-AI Syllabus Section 2.1 on Flexibility and Adaptability discusses the importance of flexibility in AI systems and how it enables easier modification and adaptability to new situations.
Sample Exam Questions document, Question #30 highlights the importance of flexibility in AI systems.



Question 4

Written requirements are given in text documents. Which ONE of the following options is the BEST way to generate test cases from these requirements?


  1. Natural language processing on textual requirements
  2. Analyzing source code for generating test cases
  3. Machine learning on logs of execution
  4. GUI analysis by computer vision
Correct answer: A
Explanation:
When written requirements are given in text documents, the best way to generate test cases is by using Natural Language Processing (NLP). Here's why:
Natural Language Processing (NLP): NLP can analyze and understand human language. It can be used to process textual requirements to extract relevant information and generate test cases. This method is efficient in handling large volumes of textual data and identifying key elements necessary for testing.
Why Not Other Options:
Analyzing source code for generating test cases: This is more suitable for white-box testing where the code is available, but it doesn't apply to text-based requirements.
Machine learning on logs of execution: This approach is used for dynamic analysis based on system behavior during execution rather than static textual requirements.
GUI analysis by computer vision: This is used for testing graphical user interfaces and is not applicable to text-based requirements.
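As a toy illustration of the NLP approach, the sketch below extracts "shall" statements from requirement text with a regular expression and emits test-case stubs. Real NLP-based tools use far richer language processing; the regex here is a deliberately simple stand-in.

```python
import re

requirements = """The system shall reject passwords shorter than 8 characters.
The system shall lock the account after 3 failed login attempts."""

# Extract each "shall" clause and turn it into a test-case stub.
for i, req in enumerate(re.findall(r"The system shall ([^.]+)\.", requirements), start=1):
    print(f"TC-{i:03d}: Verify that the system can {req.strip()}")
```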



Question 5

Upon testing a model used to detect rotten tomatoes, the following data was observed by the test engineer, based on a certain number of tomato images.

[Confusion matrix: Actually Rotten — 45 predicted rotten, 5 predicted fresh; Actually Fresh — 8 predicted rotten, 42 predicted fresh]

For this confusion matrix, which combination of values of accuracy, recall, and specificity, respectively, is CORRECT?


  1. 0.87, 0.9, 0.84
  2. 1, 0.87, 0.84
  3. 1, 0.9, 0.8
  4. 0.84, 1, 0.9
Correct answer: A
Explanation:
To calculate the accuracy, recall, and specificity from the confusion matrix provided, we use the following formulas:
Confusion Matrix:
Actually Rotten: 45 predicted rotten (True Positive), 5 predicted fresh (False Negative)
Actually Fresh: 8 predicted rotten (False Positive), 42 predicted fresh (True Negative)
Accuracy:
Accuracy is the proportion of true results (both true positives and true negatives) in the total population.
Formula: \(\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}\)
Calculation: \(\text{Accuracy} = \frac{45 + 42}{45 + 42 + 8 + 5} = \frac{87}{100} = 0.87\)
Recall (Sensitivity):
Recall is the proportion of true positive results in the total actual positives.
Formula: \(\text{Recall} = \frac{TP}{TP + FN}\)
Calculation: \(\text{Recall} = \frac{45}{45 + 5} = \frac{45}{50} = 0.9\)
Specificity:
Specificity is the proportion of true negative results in the total actual negatives.
Formula: \(\text{Specificity} = \frac{TN}{TN + FP}\)
Calculation: \(\text{Specificity} = \frac{42}{42 + 8} = \frac{42}{50} = 0.84\)
Therefore, the correct combinations of accuracy, recall, and specificity are 0.87, 0.9, and 0.84 respectively.
ISTQB CT-AI Syllabus, Section 5.1, Confusion Matrix, provides detailed formulas and explanations for calculating various metrics including accuracy, recall, and specificity.
'ML Functional Performance Metrics' (ISTQB CT-AI Syllabus, Section 5).
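The calculations above can be checked with a few lines of Python using the values from the confusion matrix:

```python
tp, fn = 45, 5   # actually rotten: predicted rotten / predicted fresh
fp, tn = 8, 42   # actually fresh: predicted rotten / predicted fresh

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # 0.87
recall      = tp / (tp + fn)                    # 0.9  (sensitivity)
specificity = tn / (tn + fp)                    # 0.84

print(accuracy, recall, specificity)            # 0.87 0.9 0.84
```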



Question 6

The activation value output for a neuron in a neural network is obtained by applying a computation to the neuron's inputs.
Which ONE of the following options BEST describes the inputs used to compute the activation value?


  1. Individual bias at the neuron level, activation values of neurons in the previous layer, and weights assigned to the connections between the neurons.
  2. Activation values of neurons in the previous layer, and weights assigned to the connections between the neurons.
  3. Individual bias at the neuron level, and weights assigned to the connections between the neurons.
  4. Individual bias at the neuron level, and activation values of neurons in the previous layer.
Correct answer: A
Explanation:
In a neural network, the activation value of a neuron is determined by a combination of inputs from the previous layer, the weights of the connections, and the bias at the neuron level. Here's a detailed breakdown:
Inputs for Activation Value:
Activation Values of Neurons in the Previous Layer: These are the outputs from neurons in the preceding layer that serve as inputs to the current neuron.
Weights Assigned to the Connections: Each connection between neurons has an associated weight, which determines the strength and direction of the input signal.
Individual Bias at the Neuron Level: Each neuron has a bias value that adjusts the input sum, allowing the activation function to be shifted.
Calculation:
The activation value is computed by summing the weighted inputs from the previous layer and adding the bias.
Formula: \(z = \sum_i (w_i \cdot a_i) + b\), where \(w_i\) are the weights, \(a_i\) are the activation values from the previous layer, and \(b\) is the bias.
The activation function (e.g., sigmoid, ReLU) is then applied to this sum to get the final activation value.
Why Option A is Correct:
Option A correctly identifies all components involved in computing the activation value: the individual bias, the activation values of the previous layer, and the weights of the connections.
Eliminating Other Options:
B. Activation values of neurons in the previous layer, and weights assigned to the connections between the neurons: This option misses the bias, which is crucial.
C. Individual bias at the neuron level, and weights assigned to the connections between the neurons: This option misses the activation values from the previous layer.
D. Individual bias at the neuron level, and activation values of neurons in the previous layer: This option misses the weights, which are essential.
ISTQB CT-AI Syllabus, Section 6.1, Neural Networks, discusses the components and functioning of neurons in a neural network. 
'Neural Network Activation Functions' (ISTQB CT-AI Syllabus, Section 6.1.1).
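A worked example of this computation, using made-up weights and a sigmoid activation function (one common choice among several):

```python
import math

def activation(prev_activations, weights, bias):
    # Weighted sum of the previous layer's activations, plus the neuron's bias.
    z = sum(w * a for w, a in zip(weights, prev_activations)) + bias
    # Apply the activation function (sigmoid here) to get the activation value.
    return 1 / (1 + math.exp(-z))

print(activation(prev_activations=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1))
```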



Question 7

Which ONE of the following tests is LEAST likely to be performed during the ML model testing phase?


  1. Testing the accuracy of the classification model.
  2. Testing the API of the service powered by the ML model.
  3. Testing the speed of the training of the model.
  4. Testing the speed of the prediction by the model.
Correct answer: C
Explanation:
The question asks which test is least likely to be performed during the ML model testing phase. Let's consider each option:
Testing the accuracy of the classification model (A): Accuracy testing is a fundamental part of the ML model testing phase. It ensures that the model correctly classifies the data as intended and meets the required performance metrics.
Testing the API of the service powered by the ML model (B): Testing the API is crucial, especially if the ML model is deployed as part of a service. This ensures that the service integrates well with other systems and that the API performs as expected.
Testing the speed of the training of the model (C): This is least likely to be part of the ML model testing phase. The speed of training is more relevant during the development phase, when optimizing and tuning the model. During testing, the focus is more on the model's performance and behavior rather than how quickly it was trained.
Testing the speed of the prediction by the model (D): Testing the speed of prediction is important to ensure that the model meets performance requirements in a production environment, especially for real-time applications.
ISTQB CT-AI Syllabus Section 3.2 on ML Workflow and Section 5 on ML Functional Performance Metrics discuss the focus of testing during the model testing phase, which includes accuracy and prediction speed but not the training speed.
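By contrast, prediction speed (option D) is straightforward to measure. A minimal sketch, assuming any model object with a scikit-learn-style predict method:

```python
import time

def median_prediction_latency(model, samples, repeats=100):
    # Time repeated predictions and report the median to damp outliers.
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        model.predict(samples)
        timings.append(time.perf_counter() - start)
    timings.sort()
    return timings[len(timings) // 2]
```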



Question 8

A software component uses machine learning to recognize the digits from a scan of handwritten numbers. In the scenario above, which type of Machine Learning (ML) is this an example of?


  1. Reinforcement learning
  2. Regression
  3. Classification
  4. Clustering
Correct answer: C
Explanation:
Recognizing digits from a scan of handwritten numbers using machine learning is an example of classification. Here's a breakdown:
Classification: This type of machine learning involves categorizing input data into predefined classes. In this scenario, the input data (handwritten digits) are classified into one of the 10 digit classes (0-9).
Why Not Other Options:
Reinforcement Learning: This involves learning by interacting with an environment to achieve a goal, which does not fit the problem of recognizing digits.
Regression: This is used for predicting continuous values, not discrete categories like digit recognition.
Clustering: This involves grouping similar data points together without predefined classes, which is not the case here.
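A minimal classification example in this spirit, using scikit-learn's bundled handwritten-digits dataset (assuming scikit-learn is installed):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)              # 8x8 digit images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))                 # accuracy on unseen digits
```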



Question 9

Which ONE of the following approaches to labelling requires the least time and effort?


  1. Outsourced
  2. Pre-labeled dataset
  3. Internal
  4. AI-assisted
Correct answer: B
Explanation:
Labelling Approaches: Among the options provided, pre-labeled datasets require the least time and effort because the data has already been labeled, eliminating the need for further manual or automated labeling efforts.
Reference: ISTQB_CT-AI_Syllabus_v1.0, Section 4.5 Data Labelling for Supervised Learning, which discusses various approaches to data labeling, including pre-labeled datasets, and their associated time and effort requirements.



Question 10

In a certain coffee-producing region of Colombia, there have been some severe weather storms, resulting in massive losses in production. This caused a massive drop in the stock price of coffee.
Which ONE of the following types of testing SHOULD be performed on a machine learning model for stock-price prediction to detect the influence of phenomena such as the above on the price of coffee stock?


  1. Testing for accuracy
  2. Testing for bias
  3. Testing for concept drift
  4. Testing for security
Correct answer: C
Explanation:
Type of Testing for Stock-Price Prediction Models: Concept drift refers to the change in the statistical properties of the target variable over time. Severe weather storms causing massive losses in coffee production and affecting stock prices would require testing for concept drift to ensure that the model adapts to new patterns in data over time.
Reference: ISTQB_CT-AI_Syllabus_v1.0, Section 7.6 Testing for Concept Drift, which explains the need to test for concept drift in models that might be affected by changing external factors.
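A very simple sketch of one way such drift might be flagged in practice: compare the model's recent prediction error against its historical baseline. The window size and threshold are illustrative assumptions, not a prescribed method.

```python
from statistics import mean

def drift_suspected(errors, window=30, factor=2.0):
    """errors: chronological list of absolute prediction errors."""
    if len(errors) < 2 * window:
        return False                     # not enough history to compare
    baseline = mean(errors[:-window])    # long-run error level
    recent = mean(errors[-window:])      # error over the latest window
    return recent > factor * baseline    # flag a sharp degradation
```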








