The SIO Quality Assurance group is a flexible, versatile team of experts that helps customers deliver successful, highly usable IT projects with a high level of confidence before go-live. QA services are tailored to each project and include support before, during, and after project testing phases, up to go-live, and during the maintenance and upgrade phases that follow.
Services offered by the Quality Assurance Team are listed below. Descriptions of each can be found in the Quality Assurance Process Definitions.
A testing agreement contains the planned QA project deliverables, testing objectives, project team, and the approach by which testing will be accomplished for a software or hardware change. Often referred to as a “test plan,” the agreement clearly sets forth expectations and estimates for the testing team and the stakeholders.
Written at the completion of a project, the test report contains the testing results in a format that allows the project team and stakeholders to evaluate the results along with any remaining risks or limitations. The report compares the test results with test objectives and project requirements. It also directs the project team to where the test results and any issues are documented and saved. Most sections of the report mirror the testing agreement, as the report is the conclusion to the agreed testing that the QA team was to perform.
Functional testing verifies that each software function operates in conformance with the requirement specification. This testing mainly involves “black box testing” in that it is not based on the source code of the application. Commonly used and high-risk functionality of the system is tested by providing appropriate input, verifying the output, and comparing the actual results with the expected results. This testing involves checking the user interface, Application Programming Interface (API), database, security, client/server applications, and the intended functionality of the application under test.
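The input/expected-output comparison above can be sketched as a small black-box test. The function under test (`course_code_is_valid`, a hypothetical validator) is exercised only through its public behavior, never its internals:

```python
def course_code_is_valid(code: str) -> bool:
    """Hypothetical function under test: accepts codes like 'CSE 110'."""
    parts = code.split()
    return (
        len(parts) == 2
        and parts[0].isalpha() and parts[0].isupper()
        and parts[1].isdigit() and len(parts[1]) == 3
    )

# Black-box test cases: (input, expected output) pairs derived from the
# requirement specification, not from reading the source code.
cases = [
    ("CSE 110", True),    # common, valid input
    ("MAT 243", True),
    ("cse 110", False),   # lowercase prefix not allowed
    ("CSE110", False),    # missing separator
    ("CSE 11", False),    # wrong number length
]

for given, expected in cases:
    actual = course_code_is_valid(given)
    assert actual == expected, f"{given!r}: expected {expected}, got {actual}"
print("all functional checks passed")
```

The same pattern scales from a single function to UI or API checks: feed a documented input, capture the actual result, compare it to the specified expectation.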
Integration testing is a software development process during which program units are combined and tested as groups in multiple ways. In this context, a unit is defined as the smallest testable part of an application. Integration testing can expose problems with the interfaces among program components before issues occur in actual program execution.
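A minimal sketch of the idea, using two hypothetical units (a normalizer and a validator) that are combined and tested as a group; the interesting bugs live in the hand-off between them:

```python
def normalize_email(raw: str) -> str:
    """Unit 1: trim whitespace and lowercase an address."""
    return raw.strip().lower()

def is_asu_email(email: str) -> bool:
    """Unit 2: check the (already normalized) domain."""
    return email.endswith("@asu.edu")

def register(raw: str) -> bool:
    """Integration point: unit 1 feeds unit 2."""
    return is_asu_email(normalize_email(raw))

# Integration tests target the combined behavior, including interface
# issues that unit tests alone would miss (e.g., un-trimmed,
# mixed-case input reaching the validator).
assert register("  Sparky@ASU.edu ") is True
assert register("sparky@gmail.com") is False
print("integration checks passed")
```

Each unit may pass its own tests in isolation; the integration test is what proves they cooperate correctly when wired together.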
The purpose of regression testing is to find bugs introduced accidentally by new software changes or additions. Alongside testing the new changes and additions, it is equally important to verify that existing functionality remains intact.
User acceptance testing (UAT) is the last phase of the software testing process. During UAT, actual software users test the application to ensure it can handle required tasks in real-world scenarios, according to specifications. UAT is one of the final and critical software project procedures that must occur before newly developed software is released to the user community.
Positive testing determines that your application works as expected; if an error is encountered during positive testing, the test fails. Negative testing ensures that your application properly handles invalid input or unexpected user behavior. Negative testing also helps improve the quality of your application by finding its weak points.
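The two approaches can be sketched side by side. Here `parse_student_id` is a hypothetical helper standing in for application code; the positive test expects success, while the negative tests expect the bad input to be rejected cleanly rather than accepted or crashing:

```python
def parse_student_id(raw: str) -> int:
    """Hypothetical code under test: student IDs are 10-digit numeric strings."""
    if not (isinstance(raw, str) and raw.isdigit() and len(raw) == 10):
        raise ValueError(f"invalid student ID: {raw!r}")
    return int(raw)

# Positive test: valid input should succeed with the expected result.
assert parse_student_id("1234567890") == 1234567890

# Negative tests: invalid input should be handled properly -- the
# application must reject it, not crash or silently accept it.
for bad in ["12345", "abcdefghij", "", "12345678901"]:
    try:
        parse_student_id(bad)
    except ValueError:
        pass  # expected: the bad input was caught and reported
    else:
        raise AssertionError(f"{bad!r} was accepted but should be rejected")

print("positive and negative checks passed")
```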
Performance testing is the process of determining the speed or effectiveness of a computer, network, software program, or device. When possible, QA performs this type of testing to determine response times when running through test scenarios or workflows.
A load test can be conducted to understand the behavior of the system under a specific expected load. The load can be the expected concurrent number of users on the application, performing a specific number of transactions within the set duration. A stress test pushes the limits by adding an even higher set number of concurrent users.
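A minimal sketch of the load-test shape described above: a set number of concurrent "users" each perform a fixed number of transactions while response times are recorded. The `handle_request` stub is hypothetical; in practice it would call the real system under test, and raising `CONCURRENT_USERS` beyond the expected load turns this into a stress test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Stub transaction; a real load test would call the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10 ms of server work
    return time.perf_counter() - start

def simulate_user(transactions: int) -> list:
    """One 'user' performing a set number of transactions."""
    return [handle_request() for _ in range(transactions)]

CONCURRENT_USERS = 20        # expected load; raise this for a stress test
TRANSACTIONS_PER_USER = 5

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    per_user = pool.map(simulate_user, [TRANSACTIONS_PER_USER] * CONCURRENT_USERS)
    timings = [t for user in per_user for t in user]

print(f"{len(timings)} transactions, "
      f"avg {sum(timings) / len(timings) * 1000:.1f} ms, "
      f"max {max(timings) * 1000:.1f} ms")
```

Dedicated tools perform this at much larger scale, but the measured quantities are the same: concurrency level, transaction count, and response-time distribution.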
Through collaboration with the ASU Marketing Hub, QA can work through an intensive checklist from https://brandguide.asu.edu/ to let the project team know where their site might not be following the ASU Web Standards.
Federal and state law and university policy require that ASU’s programs and services be available to persons with disabilities. Websites that provide information about ASU, that provide information for employees, or that are related in any way to instruction of students must be accessible. For more information, see the Office of General Counsel’s FAQs on Website Accessibility. Using our testing tools, the QA team can run a scan through your website to check for ADA compliance and give feedback on how to improve it.
QA can use our physical devices, as well as emulators, to test the mobile experience in an application or mobile browser. We test for functionality, usability, and consistency.
When designing a software solution at ASU, it is easy to lose sight of who the audience is. For example, if students are the intended user group, our team will focus on what is best for the student experience.
Sometimes, there is simply not enough project time allowed for complete QA testing. In these situations, we perform selected verification and validation of high-risk and high-visibility features until more adequate application testing can be performed. This type of verification is sometimes referred to as “ad hoc testing” and may be based on urgent need.
When provided with a functional or configuration specification, requirements document, access, and workflows, the QA team can create tests (manual and/or automated) for development projects. The tests typically cover a detailed validation of the eventual workflow and related software requirements. Types of testing are described above.
Test automation is the use of special software (separate from the software being tested) to control the creation and execution of tests and the comparison of actual results with expected results. The QA team is always looking for new opportunities to automate routine testing, when possible and practical.
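The core of that definition (a driver, separate from the code under test, that executes cases and compares actual results with expected results) can be sketched in a few lines. `slugify` is a hypothetical function under test:

```python
def slugify(title: str) -> str:
    """Hypothetical code under test: make a URL-friendly slug."""
    return "-".join(title.lower().split())

# Each case is (name, input, expected). The automation driver iterates,
# executes, compares, and reports -- no manual result checking.
suite = [
    ("simple", "My ASU Page", "my-asu-page"),
    ("extra spaces", "  Hello   World  ", "hello-world"),
    ("already a slug", "done", "done"),
]

failures = []
for name, given, expected in suite:
    actual = slugify(given)
    if actual != expected:
        failures.append((name, expected, actual))
        print(f"FAIL: {name}")
    else:
        print(f"PASS: {name}")

assert not failures, f"failed cases: {failures}"
```

Frameworks such as pytest or Selenium provide this same execute-compare-report loop with far richer reporting; the value of automation is that the whole suite can be rerun on every change at near-zero cost.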
Our team can find gaps in your testing approach and help you avoid errors made due to recent, untested changes. In doing so, together with you, we optimize the interaction between developers and testers and help prevent the need for urgent fixes after the system’s release.
Coordinating testing across different groups, users, and campuses can be difficult. If you have a large project with a large or diverse user group, we can help set up meetings in large spaces or arrange remote sessions to allow for collaborative testing. We also offer training for new testers on the testing process, including issue creation and tracking.