RCR Technology offers Quality Assurance (QA) services across a broad range of data management and application development platforms. Our testing capabilities address both business and quality challenges for our clients. Our QA services are deeply embedded in, and a critical component of, our Software Development Life Cycle (SDLC) Methodology.

Our SDLC Methodology has been developed and executed over dozens of application development projects and many years of successful client engagements. A PDF copy of the complete document is available for review by clicking the following button.

SDLC PDF 

The Quality Assurance phase uses the business or solution requirements, functional specifications, and technical design and architectural specifications developed in the prior phases to verify the software changes required to implement the solution. This phase also includes the unit testing associated with those software changes.

The QA team uses a series of tools to manage the system codebase and to facilitate its migration from one environment to another (e.g., DEV to SIT, SIT to UAT). Code is stored in a repository and is checked out and checked back in by developers when making changes. This ensures that each environment uses the correct corresponding codebase.
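
The promotion model described above can be sketched in a few lines. The environment names follow the DEV/SIT/UAT pipeline mentioned; everything else (the `promote` function, the `deployments` mapping, the release tag) is hypothetical illustration, not RCR's actual tooling.

```python
# Hypothetical sketch of codebase promotion between environments.
# Environment names (DEV, SIT, UAT) follow the pipeline described above;
# the function and data structures are illustrative assumptions.

PIPELINE = ["DEV", "SIT", "UAT", "PROD"]

def promote(deployments, build_tag, source, target):
    """Move a tagged build one step down the pipeline, refusing skipped stages."""
    if PIPELINE.index(target) != PIPELINE.index(source) + 1:
        raise ValueError(f"cannot promote directly from {source} to {target}")
    if deployments.get(source) != build_tag:
        raise ValueError(f"{build_tag} is not the build deployed in {source}")
    deployments[target] = build_tag  # target now runs the same codebase as source
    return deployments

deployments = {"DEV": "release-2.4.1"}
promote(deployments, "release-2.4.1", "DEV", "SIT")  # DEV -> SIT
promote(deployments, "release-2.4.1", "SIT", "UAT")  # SIT -> UAT
print(deployments)  # all three environments now reference release-2.4.1
```

Refusing stage-skipping promotions is what guarantees the property the paragraph describes: a build reaches UAT only after the same codebase has passed through SIT.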

System Integration Testing

The purpose of this phase is to conduct the System Integration Testing (SIT) for the software, including the integration of the software changes with the system components and the end-to-end testing of the overall system. The emphasis of this testing is on the functional changes to the system. Testing occurs in an environment separate from code development and subsequent User Acceptance Testing.

Testing includes manual testing of specific functions, automated testing using scripts that execute selected processes, and regression testing of a specific set of functions that is exercised for every release. This phase may also include performance testing, used to assess the potential impact of architectural changes or major functionality/system enhancements, as well as the potential impact on the system of increased levels of volume/stress.
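
As a minimal sketch of the automated regression testing described above, the suite below exercises a fixed set of checks that would run for every release. The function under test and all names are hypothetical stand-ins; a real suite would drive the application itself.

```python
# Illustrative regression suite run for every release.
# calculate_invoice_total is a hypothetical stand-in for application code.
import unittest

def calculate_invoice_total(line_items, tax_rate):
    """Stand-in application function: sum (qty, price) pairs and apply tax."""
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

class RegressionSuite(unittest.TestCase):
    """A specific set of functions tested for every release."""

    def test_invoice_total_with_tax(self):
        items = [(2, 10.00), (1, 5.50)]  # subtotal 25.50
        self.assertAlmostEqual(calculate_invoice_total(items, 0.1), 28.05, places=2)

    def test_invoice_total_empty(self):
        self.assertEqual(calculate_invoice_total([], 0.1), 0.0)

# Run the fixed suite, as would happen on each release candidate.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the same suite runs unchanged against every release, a failure signals that an enhancement or defect fix has disturbed existing behavior — exactly the regression risk this phase exists to catch.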

The SIT process is iterative: testers ensure that new functionality performs as specified in the requirements documents. This test stage also validates that existing functionality was not affected by the enhancements and defect fixes incorporated into the code. As defects are identified in the updated code, they are documented and fixed by the developers, and new versions of the code are released periodically for continued SIT.

Testing is conducted using scenarios or test scripts that detail the information expected to be in the system before the test, the steps the tester should take to execute the scenario, and the expected results. The SIT tester compares the planned steps against the steps actually executed in the system, and the planned outcome against the actual outcome, to determine whether the test passes.
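
The test-script structure described above — preconditions, planned steps, and an expected result, with pass/fail decided by comparing the plan against what actually happened — can be sketched as a simple record. The field and scenario names are illustrative assumptions, not RCR's actual test-management schema.

```python
# Illustrative SIT/UAT test-script record; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TestScenario:
    scenario_id: str
    preconditions: str               # information expected in the system beforehand
    planned_steps: list              # steps the tester should take
    expected_outcome: str
    actual_steps: list = field(default_factory=list)
    actual_outcome: str = ""

    def passed(self) -> bool:
        # Pass only if both the executed steps and the outcome match the plan.
        return (self.actual_steps == self.planned_steps
                and self.actual_outcome == self.expected_outcome)

case = TestScenario(
    scenario_id="SIT-042",
    preconditions="An active account with a zero balance exists",
    planned_steps=["log in", "open account", "post payment"],
    expected_outcome="balance updated and confirmation displayed",
)
case.actual_steps = ["log in", "open account", "post payment"]
case.actual_outcome = "balance updated and confirmation displayed"
print(case.passed())  # → True
```

Note that the record compares steps as well as outcome: a test that stumbles into the right result by a different path still fails, which is the discipline the paragraph describes.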

The output of this phase is a fully tested system that is ready for User Acceptance Testing (UAT). Defects identified during SIT will be documented and resolved, unless agreement is reached with the stakeholders to move forward without a resolution to the defect.

User Acceptance Testing Description

The purpose of this phase is to conduct the User Acceptance Testing (UAT) for the software changes, utilizing the test plans and associated test cases developed by the UAT team. Testing occurs in an environment separate from code development and System Integration Testing. UAT is conducted by State and non-Application Services staff and includes manual testing of specific functions, automated testing using scripts that execute selected processes, and regression testing of a specific set of functions tested for every release.

Similar to SIT, the UAT process is iterative: testers ensure that new functionality performs as specified and that existing functionality was not affected by the enhancements and defect fixes incorporated into the code. As defects are identified in the updated code, developers fix them and new versions of the code are released for continued UAT.

Testing is conducted using scenarios or test scripts that detail the information expected to be in the system before the test, the steps the tester should take to execute the scenario, and the expected results. The UAT tester compares the planned steps against the steps actually executed in the system, and the planned outcome against the actual outcome, to determine whether the test passes.

As part of the UAT phase, a final AppScan security test is performed. AppScan is a tool used to perform dynamic application security testing (DAST) against the codebase for the release. The purpose of DAST is to identify application vulnerabilities that may have been introduced by changes or enhancements during a release and to develop a plan of action to address any identified vulnerabilities. AppScan runs are not limited to the UAT phase and may also occur during the Construction (Build and Unit Test), SIT, and Post-Implementation Support phases of the SDLC process. AppScan runs may be executed against the entire application build or against specific pages and/or URLs to ensure that any targeted vulnerabilities have been resolved and that no new vulnerabilities have been introduced.

The output of this phase is a fully tested system and concurrence from the UAT team to move forward with deployment of the code to Production. Defects identified during testing will be documented and resolved, unless agreement is reached with the stakeholders to move forward without a resolution to the defect.

RCR Best Practices Approach for Successful Application Development:

  • Early and frequent involvement of the QA Team during requirements vetting
  • Development of a detailed workflow document of the functional solution requirements
  • Creation of the Test Plan to validate requirement expectations
  • Beta testing for defects and functional testing review
  • Continuous solution improvements via real-time client collaboration
  • Knowledge management documentation: archived work products, including application documentation, workflow diagrams, test cases, UAT results, performance tests, and training materials