Testing is an integral part of any software endeavor and underpins the building of dependable systems. Testing activities, which nowadays rely heavily on automation and target both functional and non-functional requirements, form the backbone of a high-quality software solution that performs its function as intended. The undergraduate engineering course “Software Testing and Quality Assurance” held its annual exhibit on December 10th in the Atrium of Alfaisal University. The course introduces software engineering students to the concepts, principles, theory, types, tools, and techniques of software testing and quality assurance (QA) in digital systems.
In this course, students learn the fundamentals of testing principles, approaches to testing (specification-based and structural), input space analysis, domain modeling, and designing for testability. Students also explore different approaches to software testing to gain breadth across testing types, targeting both functional requirements (unit and end-to-end testing) and quality attributes (accessibility, usability, emergent properties, and performance testing). Theoretical concepts of software testing are introduced to support the practical activities that test engineers and QA officers follow in designing test inputs, producing test case values, running test scripts, analyzing results, and reporting findings to software developers and project managers.
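To illustrate the kind of specification-based testing covered in the course, the sketch below shows a unit test derived from an input space partition. The function and test values are hypothetical examples for illustration only, not taken from the course materials, and the sketch assumes the pytest framework.

```python
import pytest

# Hypothetical function under test: classifies a triangle from its side lengths.
def classify_triangle(a: float, b: float, c: float) -> str:
    if min(a, b, c) <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Specification-based test cases derived from a partition of the input space:
# each equivalence class gets at least one representative test case value.
@pytest.mark.parametrize(
    "a, b, c, expected",
    [
        (0, 1, 1, "invalid"),    # non-positive side length
        (1, 2, 10, "invalid"),   # violates the triangle inequality
        (3, 3, 3, "equilateral"),
        (3, 3, 5, "isosceles"),
        (3, 4, 5, "scalene"),
    ],
)
def test_classify_triangle(a, b, c, expected):
    assert classify_triangle(a, b, c) == expected
```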
In its second half, the course introduces software testing and QA practices through a project component organized around open source platforms, robotics operating systems, or microservices, an architectural style that has gained significant adoption in industry for building scalable and maintainable systems. Software engineering students gain practical experience in planning and designing test cases and in streamlining testing tool integration and/or automation. Each team designs various types of test cases for a selected digital platform or service and showcases the findings of its testing process in an exhibit at the end of the semester.
In the Fall 2024 semester, students showcased software testing projects that demonstrated a broad range of approaches for collecting, managing, and evaluating software quality metrics. The applied domains included game engines such as Godot, open source coding platforms such as ScratchJr, open source software development kits for robotics applications such as ROS, and Arabic Large Language Models (LLMs) such as Labib; the LLM evaluation was conducted in collaboration with AskLabib.ai, an emerging Saudi startup for Arabic LLMs.
Students analyzed and verified a variety of software properties including, but not limited to, functionality, security, reliability, and performance, and gained experience with real QA tools including static analysis tools, software testing frameworks, and software quality measurement tools. In the projects conducted in collaboration with industry partners, students presented their findings to stakeholders and described the analytical tools and techniques they considered for their scope of testing and quality assurance. Across all projects in this senior-level course, students reflect on the relationship between software testing and product life cycle support by covering unit, integration, system, usability, and/or acceptance testing.
The software testing course is a senior-level course in the Software Engineering program at Alfaisal University. The course team comprises instructors, assistants, and collaborators. The team is led by Dr. Areej Al-Wabil, the lead instructor, with Eng. Mohamed Khalid Hassan as the course assistant. Several judges took part in evaluating the term projects, including faculty and instructors from the Software Engineering Department (Dr. Taghreed Altamimi, Ms. Sara Alhamdani, Eng. Jomalyn Pancho, Eng. Muhammad Herwis, Ms. Safia Dawood) and the Electrical Engineering Department (Dr. Asem Alalwan, Eng. Abdullah Adam).
Reflecting on the project-based learning (PBL) approach, Dr. Al-Wabil highlighted that “Although PBL is praised for its ability to develop domain-specific and domain-general skills for software engineers, consistently designing effective PBL experiences for software testing courses is challenging, primarily due to the rapidly evolving landscape of automated testing methods and ensuring alignment with learning objectives of the academic course.” Through a research lens, the course team instantiates PBL experiences for the fourth-year undergraduate course on software testing and QA, runs the experience, and reflects on the gathered data. In 2023, the course team aligned the projects with the digital accessibility compliance initiative led by the Digital Government Authority (DGA), and in 2024 the scope was expanded to include game engines and Generative AI platforms. The course provides a balance of theory and practical application, presenting software testing as a collection of objective, quantitative activities that can be measured and repeated in applied contexts of software development.