
Testing Terminology and Definitions (in English)

Posted on 2008-11-04 11:22 by August

I've recently become interested in English terminology for the various kinds of testing, so I collected these terms for my own reference. Heh heh.

1. Acceptance tests – formal testing of a feature, often carried out by the customer, to determine whether to accept the release. Also known as user acceptance tests or UAT.

2. Accessibility testing – testing to verify whether users with disabilities can use the system.

3. Ad hoc testing – informal testing of a feature, carried out by exploring the system rather than following predefined test cases.

4. Agile – a development methodology brought about to aid rapid development. An agile methodology is composed of iterations, each of which includes planning, requirements analysis, design, coding, and testing. (API – application programming interface.)

5. Assert – helper methods available to the developer as part of a test framework. An assert verifies whether the method under test is working correctly.
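
As a minimal sketch of assert helpers, using Python's unittest framework (add is a hypothetical method under test, not from the original glossary):

    import unittest

    # Hypothetical method under test.
    def add(a, b):
        return a + b

    class AddTests(unittest.TestCase):
        def test_add_returns_sum(self):
            # Assert helpers supplied by the test framework; a failing
            # assert marks the test as failed and reports both values.
            self.assertEqual(add(2, 3), 5)
            self.assertNotEqual(add(2, 3), 6)

    if __name__ == "__main__":
        unittest.main()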

6. Automation – software used to test the product in a reproducible way. Automation can be applied to any repetitive task, and is also used to test areas where purely manual testing would prove difficult, for example testing a web service.

7. Baseline – normally used in the context of signed-off specifications that have been agreed with the customer.

8. Best practice – a set of behaviours or practices that define the best or most innovative approach to solving a business problem.

9. Black box testing – tests that exercise the product from the outside, with no knowledge of the internal workings of the product.

10. Blocked test – a test that cannot proceed because a pre-condition has failed. For example, a user must log into the system before further functionality can be verified; if the login fails, the dependent tests are blocked. A test can also be blocked because development of the feature, or of steps within the test, is not yet complete.

11. Boundary testing – tests that exercise edge values, covering paths that may not occur under normal conditions. Boundary tests exercise the limits of what the code expects to be valid input, or input that arrives from an external source, for example a file upload test.
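
A minimal boundary-testing sketch, assuming a hypothetical validator that accepts ages 0 through 120 inclusive; the tests probe each edge and the values just beyond it:

    import unittest

    # Hypothetical validator under test: valid ages are 0..120 inclusive.
    def is_valid_age(age):
        return 0 <= age <= 120

    class AgeBoundaryTests(unittest.TestCase):
        def test_lower_boundary(self):
            self.assertFalse(is_valid_age(-1))  # just below the lowest valid value
            self.assertTrue(is_valid_age(0))    # lowest valid value

        def test_upper_boundary(self):
            self.assertTrue(is_valid_age(120))   # highest valid value
            self.assertFalse(is_valid_age(121))  # just above the highest valid value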

12. Bug – also known as a defect or issue. A bug is a flaw in the software or its configuration that causes a failure instead of the expected result.

13. Bug report – also called a defect management report. This report can be run daily or hourly, or queried at any moment in time via the test management system.

14. BVT – build verification tests. Tests run after a build to ensure that the build is good enough for further testing or development. The tests verify that critical parts of the system perform as expected; further testing should not progress until all BVTs pass. BVTs are also known as smoke tests.

15. Build process – defined by development; includes development rules such as FxCop and StyleCop checks, the check-in procedure, general coding rules, and rules specific to the project.

16. Change management process – a formal process, agreed between the program/project team and the customer, for controlling change.

17. Clean up – usually involves clearing away data that is no longer required by the test, or closing windows that were opened as part of an automated test (see the sketch under Set up below).

18. Code complete – the point at which the feature has been fully implemented. From this point a test pass occurs, and the only remaining development work is fixing the bugs found by the extensive testing at this stage.

19. Code coverage – a tool that works in conjunction with automated tests to measure how much of the system's statements and functions the tests exercise.

20. Compliance testing – testing to verify that the customer requirements mapped to the specification are met in terms of test coverage.

21. Component integration testing – testing that exercises the interfaces between components, verifying inputs, expected outputs, and the interaction between the components.

22. Concurrency – tests that verify how a component handles two or more simultaneous actions.
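
One way to exercise concurrency, sketched with Python threads: many workers increment a hypothetical thread-safe Counter at once, and the test checks that no increments are lost:

    import threading
    import unittest

    # Hypothetical component under test: a counter that must
    # tolerate increments from multiple threads at once.
    class Counter:
        def __init__(self):
            self._value = 0
            self._lock = threading.Lock()

        def increment(self):
            with self._lock:
                self._value += 1

        @property
        def value(self):
            return self._value

    class CounterConcurrencyTests(unittest.TestCase):
        def test_simultaneous_increments(self):
            counter = Counter()

            def worker():
                for _ in range(1000):
                    counter.increment()

            threads = [threading.Thread(target=worker) for _ in range(10)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            # If increments were lost to a race, the total falls short.
            self.assertEqual(counter.value, 10 * 1000)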

23. Data driven testing – a technique that stores test input values and expected results in a data store (Excel, an array, a database) for use by an automation script. A related technique, keyword-driven testing, additionally stores the actions to be performed in the data source.
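
A minimal data-driven sketch: the input values and expected results live in one data table (an in-memory list here, standing in for Excel or a database), and a single test iterates over every row. discount is a hypothetical function under test:

    import unittest

    # Hypothetical function under test.
    def discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    class DiscountDataDrivenTests(unittest.TestCase):
        # (price, percent, expected) rows; in practice these could be
        # loaded from Excel, a CSV file, or a database.
        CASES = [
            (100.0, 0, 100.0),
            (100.0, 10, 90.0),
            (200.0, 25, 150.0),
        ]

        def test_discount_cases(self):
            for price, percent, expected in self.CASES:
                with self.subTest(price=price, percent=percent):
                    self.assertEqual(discount(price, percent), expected)

Adding a new case is then a one-line change to the data table rather than a new test method.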

24. Daily build – a build of the complete system as it stands, compiled at a specific time of day, normally at night. As the build grows in completeness, it should be deployed to a test rig so that a build verification test can be run.

25. DCR – design change request. A formal request, raised either by the customer or internally, for a change to the product. A DCR is treated with the same process as a bug in that it is reviewed by triage.

26. Debug build – a build of the software that contains extra error handling and debugging information.

27. Debugging – the process of finding failures in code and repairing them.

28. Deployment – the process by which a build is installed onto a test or production environment.

29. Dev Lead – the developer lead, one of the key stakeholders of the project.

30. Engineering – a collective term for development, test, and infrastructure.

31. Entry criteria – the conditions that must be met before test can enter a stabilisation run at the end of a feature set or iteration.

32. Exit criteria – the overall expected quality of the product prior to delivery, agreed up front. Exit criteria are defined in hard terms, for example: no open severity 1 or 2 bugs, no more than 15% of severity 3 bugs open, and a plan in place to fix the remaining outstanding bugs.

33. Feature set – a stage of the overall solution at which defined features can be delivered to the customer.

34. Functional testing – testing that normally occurs at the user interface and exercises the system against defined business requirements or user stories.

35. Functional requirement – a business requirement that must be met in order for the solution to be accepted.

36. Feature Test Lead – on larger projects, a feature test lead is responsible for a specific subset of the overall solution.

37. Flip/flop – a term used by test for moving from one test environment to another. During feature testing, two test rigs are normally used so that one rig always has the latest daily build deployed; the second rig acts as a backup for test in case the deployment is broken.

38. Infrastructure – a term used to define how the solution is put together logically, covering areas such as network protocols and web farm layout.

39. Integration tests – automated tests that verify how components work together. Integration tests do not test the internal implementation, but rather what the system aims to achieve.

40. Lock down – the stage at which group policies and other security measures are applied to the system. This is normally performed on the pre-production environment, where the test system closely resembles the final production environment.

41. Look and feel – the design of the product, used in the context of user interface testing. Look and feel reflects whether the product matches the branding and functions in a consistent manner.

42. MSF – Microsoft Solutions Framework. Microsoft's methodology for defining how a project should produce a software solution.

43. NFR – non-functional requirements. These requirements cover areas such as performance and security, and do not relate to functional requirements.

44. Post mortem – a wash-up meeting at the end of a project, feature set, or iteration to determine what went well and what can be improved.

45. Priority – when a bug is raised, the priority determines how important it is to fix the bug.

46. Program Manager – based on the roles in the MSF team, the title of the person who defines the functional requirements of the system with the customer. A program manager owns a feature set or iteration and works closely with the development and test engineers as well as the customer.

47. Project Manager – the person who leads the project team and is responsible for the completion of the project.

48. Project plan – a plan produced by the project manager. It is a formal, approved document used to guide both project execution and project control.

49. Record and playback tool – also called a capture/replay tool. Used in the context of recording a set of manual actions on the user interface with a tool and then playing the actions back.

50. Regression – a failure in a previously passing test.

51. Release build – a build that contains no debug code.

52. Release candidate – a build that the project team believes fit to become the final RTM build.

53. Rig – a cut-down test environment that is representative of the final production environment.

54. Risk and issues register – a list of risks, and potential mitigations for them, that may affect the overall delivery of the project.

55. RTM – release to manufacture; the point at which the product is deemed fit for release to the customer.

56. SDET – Software Development Engineer in Test. A test engineer who tests software programmatically and creates test tools to aid other engineers in testing. An SDET often tests at the white box level, meaning that the SDET may examine the implementation of the area under test in order to better understand how to test it.

57. STE – Software Test Engineer. An engineer who runs tests, either manually or using automation tools, against a functional test environment. STEs often test at the black box level, meaning that the tester does not know the inner workings of the system.

58. Set up – in a test context, all the pre-requisites required to run a test case, such as the user must be logged on or the browser version must be Internet Explorer 6 SP2.
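
A minimal sketch of set up and clean up hooks in Python's unittest: setUp creates the pre-requisites before each test method, and tearDown clears away the data afterwards (the file-based scenario is hypothetical):

    import tempfile
    import unittest
    from pathlib import Path

    class FileProcessingTests(unittest.TestCase):
        def setUp(self):
            # Pre-requisites: a scratch directory and an input file,
            # created fresh before every test method runs.
            self._tmp = tempfile.TemporaryDirectory()
            self.input_file = Path(self._tmp.name) / "input.txt"
            self.input_file.write_text("sample data")

        def tearDown(self):
            # Clean up: remove data no longer required by the test.
            self._tmp.cleanup()

        def test_input_file_was_created(self):
            self.assertTrue(self.input_file.exists())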

59. Severity – the business severity of a bug when it is raised, based on the impact the bug has on the overall solution. As a guideline, a severity 1 bug causes a major crash or data loss, or, during stabilisation, blocks any further testing from being carried out. A severity 4 bug covers typos, unclear warning messages, or minor look and feel issues.

60. Show stopper – a severity 1 bug (typically priority 0) that stops a feature or release shipping on time.

61. Stakeholders – parties involved in the project who have a vested interest in its success. Externally this includes the customer; internally it includes directors who are not involved with the project day to day, and the project's key management staff.

62. Source control – a central repository for storing developer and test code, managed by a source control system.

63. Technical specification – a specification created by the developer lead that defines how the design has been implemented.

64. Test case – a set of steps that verify a business requirement.

65. Test Lead – sometimes known as a test manager. This person is responsible for the management of all testing activities and test resources.

66. Test plan – a document describing the scope, test approach, and timeline for testing the project.

67. Test strategy – a high-level document that describes the test levels to be performed overall.

68. Test suites – sets of related tests, sometimes known as ordered tests.

69. Triage – a meeting to review newly opened and re-opened bugs and decide when the bugs will be fixed.

70. Unit tests – a set of developer tests that verify individual software components.

71. Version control – used for centrally managing multiple revisions of code.

72. VPC – Virtual PC, used extensively when cross-platform or cross-browser testing is needed. Also used to provide builds in a box to the customer.

73. White box testing – test cases that are created based on analysis of the internal structure of the system.

74. Work item – a task created for development, test, or infrastructure that defines a piece of work to be carried out on the project. When work items are complete, they are usually resolved to test. In some cases further tests are created as a result of a work item.

75. Zero bug bounce – the point, at or very near code complete, where no active high severity bugs exist. It is the time when the team has fixed all active bugs that require fixing for a specific release, and it relates closely to the test team's entry criteria.