SUPPORTING STATEMENT
United States Patent and Trademark Office
Patents External Quality Survey
OMB Control Number 0651-0057
2025
Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the information collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the information collection as a whole. If the information collection has been conducted previously, include the actual response rate achieved during the last information collection.
The respondent pool for this survey comprises the businesses, organizations, and individuals who frequently file patent applications. Foreign entities are not included in the sample frame.
The target population consists of individuals associated with USPTO top filers (e.g., firms at a given address that have filed five or more patent applications in the past year). The sampling unit is the USPTO-registered agents/attorneys associated with the top filers, along with independent inventors who have filed six or more patent applications in the past 12 months. The target population typically accounts for over 97% of all domestic patent applications filed in a given fiscal year.
The sample size for each wave is approximately 3,200 respondents. Historical response rates for the Patents External Quality Survey are shown below.
Wave Name | Survey Reference Period | Response Rate (weighted)
FY20-Q1 | October 2019 – December 2019 | 37%
FY20-Q3 | May 2020 – July 2020 | 28%
FY21-Q1 | October 2020 – December 2020 | 29%
FY21-Q3 | May 2021 – July 2021 | 25%
FY22-Q1 | October 2021 – December 2021 | 32%
FY22-Q3 | May 2022 – July 2022 | 30%
FY23-Q1 | October 2022 – December 2022 | 27%
FY23-Q3 | May 2023 – July 2023 | 28%
FY24-Q1 | October 2023 – December 2023 | 24%
FY24-Q3 | May 2024 – July 2024 | 23%
FY25-Q1 | October 2024 – December 2024 | 22%
Describe the procedures for the collection of information including:
• statistical methodology for stratification and sample selection;
• estimation procedure;
• degree of accuracy needed for the purpose described in the justification; and
• unusual problems requiring specialized sampling procedures.
The Patents External Quality Survey will use a longitudinal, rotating panel design. The USPTO has developed a sampling plan which is included in this submission. The sampling plan also contains information about the respondent pool and the response rate.
The sample is drawn from a frame of USPTO customers, all of whom are either associated with a particular firm or are considered independent. There are six sampling domains for which different sampling rates are used. One of these six sampling domains is identified for each customer on the frame, using counts of the number of applications within each firm in conjunction with a count of agents associated with that firm. Then a sampling rate is computed for each domain.
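The domain assignment and per-domain sampling described above can be sketched as follows. This is an illustrative sketch only: the domain definitions, thresholds, and sampling rates below are hypothetical placeholders, not the USPTO's actual values.

```python
# Hypothetical sketch of assigning frame records to sampling domains
# (using application counts and agent counts per firm) and drawing a
# sample at per-domain rates. All thresholds and rates are invented.
import random

def assign_domain(app_count, agent_count, independent):
    """Classify a frame record into one of six hypothetical domains."""
    if independent:
        return "independent"
    if app_count >= 150:
        return "large_firm_many_agents" if agent_count >= 20 else "large_firm_few_agents"
    if app_count >= 25:
        return "mid_firm_many_agents" if agent_count >= 5 else "mid_firm_few_agents"
    return "small_firm"

# Hypothetical per-domain sampling rates chosen to hit a target sample size.
RATES = {
    "independent": 0.10,
    "large_firm_many_agents": 0.50,
    "large_firm_few_agents": 0.40,
    "mid_firm_many_agents": 0.30,
    "mid_firm_few_agents": 0.25,
    "small_firm": 0.15,
}

def draw_sample(frame, seed=1):
    """Bernoulli-sample each frame record at its domain's rate."""
    rng = random.Random(seed)
    return [rec for rec in frame
            if rng.random() < RATES[assign_domain(rec["apps"], rec["agents"], rec["independent"])]]
```

In practice the rates would be computed from the frame counts to achieve the target of roughly 3,200 respondents per wave, rather than fixed as above.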
The USPTO uses a rotating panel design for the sample, such that customers are assigned to waves (survey period) and then to one of two panels within each wave. The second panel from each wave is fielded in the subsequent wave, in addition to a new panel.
After being selected for two consecutive waves, customers are not included in the sample for at least 18 months. Because of this 18-month leave of absence, it is necessary to control when old sample members can rotate back into the sample. A complication is the potential for panel conditioning effects among customers returning from an old cycle. Therefore, to reduce the impact of distributional differences between frames, newly sampled cases from old panels are spread evenly across the new panels.
Next, the USPTO sends a pre-notification letter through Westat, the USPTO’s survey contractor, to all potential respondents, informing them of the purpose of the survey and providing instructions for completing it online. Westat then emails the survey to all sampled respondents, providing the username, password, and 5-digit survey ID number.
Westat also sends non-respondents a reminder postcard to encourage survey participation, and during the follow-up non-response prompting calls, Westat employees use a script developed in collaboration with the USPTO.
Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For information collections based on sampling a special justification must be provided for any information collection that will not yield "reliable" data that can be generalized to the universe studied.
To maximize the number of responses received, the USPTO follows several well-established survey procedures. First, all sampled respondents receive a pre-notification letter signed by the Deputy Commissioner that explains the importance of the study and encourages cooperation. Follow-up contact begins after the USPTO sends the initial web survey: one week after the survey is administered, all non-respondents are emailed or mailed a thank-you/reminder postcard, and two weeks after the initial survey is sent, Westat employees telephone all non-respondents to prompt them to complete the survey. Westat uses a script developed for these phone calls so that everyone conducting the interviews asks the same questions in the same manner.
To determine how non-response bias affected the survey results, Westat conducted non-response follow-up studies during Waves 6 and 7 and Waves 19 and 20, and the USPTO received the analysis of the findings. The objective of the studies was to estimate how the non-respondents would have answered the main survey had they actually responded. The studies were conducted because non-response can bias the survey estimates; that bias depends on the response rate and on the differences between those who responded to the survey and those who did not.
As part of the studies, Westat sent a postcard to sampled customers who did not respond to the original survey and were rotating out of the survey sample. The postcard contained one question concerning overall examination quality that was also asked in the original survey; the only difference was that the follow-up version included an additional answer choice. Half of the sample in the study received a white postcard and the other half a colored postcard, to test whether the colored card would increase response rates.
The follow-up studies compared responses to the overall examination quality question between outgoing-panel members who answered it in the original survey and outgoing-panel members who responded to the follow-up postcard. These studies assume that the follow-up respondents resemble the non-respondents to the original survey and that there are indications the survey non-response is causing a potential bias. The results of the follow-up studies were used to help answer the following questions:
• How different are the wave respondents from the follow-up respondents?
• How different are the follow-up respondents from the follow-up non-respondents?
• Do the results affect what can be done in weighting to reduce the bias due to non-response?
• What is the impact of the colored postcard on the follow-up response rates?
Non-response bias is affected by two factors: the non-response rate and the differences between respondents and non-respondents. While the response rate is known, the difference between those who respond and those who do not is unknown; the follow-up study attempts to measure it. The non-response bias is calculated using the following equation for a sample mean:
bias(ȳ_r) = (1 − η)(Ȳ_r − Ȳ_nr)

where η is the weighted unit response rate, Ȳ_r is the population mean of the respondent stratum, and Ȳ_nr is the population mean of the non-respondent stratum. While the response rate is universally recognized as a measure of survey quality, the difference between the respondents and non-respondents is just as important in determining the non-response bias. Weighting adjustments are used to reduce the non-response bias (although some non-response bias will remain in the survey estimates).
However, in the case of the non-response follow-up sample, the bias can be written as:

bias(ȳ_fr) = (1 − η_f)(Ȳ_fr − Ȳ_fnr)

where η_f is the weighted follow-up response rate, Ȳ_fr is the population mean of the follow-up respondent stratum, and Ȳ_fnr is the population mean of the follow-up non-respondent stratum.
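The bias expression for a sample mean can be illustrated with a small worked example. The response rate and stratum means below are invented solely for the illustration.

```python
# Worked example of non-response bias for a sample mean:
# bias = (1 - response_rate) * (mean_respondents - mean_nonrespondents).
# All numbers here are hypothetical, not survey results.
def nonresponse_bias(response_rate, mean_respondents, mean_nonrespondents):
    return (1.0 - response_rate) * (mean_respondents - mean_nonrespondents)

# With a 30% weighted response rate, respondents averaging 3.4 on the
# quality item, and non-respondents averaging 3.1, the bias is
# 0.70 * 0.30 = 0.21 scale points.
bias = nonresponse_bias(0.30, 3.4, 3.1)
```

The example makes the document's point concrete: even a modest difference between respondents and non-respondents is inflated by a low response rate.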
A bivariate analysis (response indicator versus each auxiliary variable) compares the distribution of the participating customers to the distribution of the total eligible sample for several auxiliary variables. Survey base weights were used to account for unequal probabilities of selection, and replicate weights were used to adequately reflect the impact of the sample design on variance estimates. The weights for the follow-up respondents were adjusted to account for non-respondents to both the main survey and the follow-up, under the assumption that non-respondents were more similar to the follow-up respondents than to the original survey respondents. Together with the main-sample respondents, the weights account for the entire eligible population. Adjustment cells were created using the Search software (WesSearch), following the same approach used in the normal weighting procedure.
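The cell-based weight adjustment described above can be sketched as follows. This is a simplified illustration that assumes adjustment cells are already given; in the actual procedure the cells are formed by the search software from auxiliary variables.

```python
# Sketch of a cell-based non-response weighting adjustment: within each
# adjustment cell, respondents' base weights are inflated so that they
# carry the full weighted total of the cell (respondents + non-respondents).
from collections import defaultdict

def adjust_weights(cases):
    """cases: list of dicts with keys 'cell', 'base_weight', 'responded'.
    Adds an 'adj_weight' key; non-respondents' weight moves to respondents
    in the same cell, preserving each cell's weighted total."""
    total = defaultdict(float)
    resp_total = defaultdict(float)
    for c in cases:
        total[c["cell"]] += c["base_weight"]
        if c["responded"]:
            resp_total[c["cell"]] += c["base_weight"]
    for c in cases:
        if c["responded"]:
            c["adj_weight"] = c["base_weight"] * total[c["cell"]] / resp_total[c["cell"]]
        else:
            c["adj_weight"] = 0.0
    return cases
```

The design choice is the standard one: the adjustment is ratio-based within cells, so the adjusted respondent weights sum to the eligible population total in every cell.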
To test for statistical differences, the distribution of the patent examination quality question for the wave respondents was compared with the distribution for follow-up respondents, and similarly within the follow-up study for the white and colored postcards. For the categorical responses, the hypothesis of independence between the characteristic and participation status was tested using a Rao-Scott modified chi-square statistic at the 10 percent level. The average score of the categorical responses was computed as a continuous variable, with larger average scores indicating more favorable responses; the difference between means was tested using a t-test. The continuous variables were tested using the Benjamini-Hochberg procedure to control the overall false discovery rate for a family of comparisons.
Westat analyzed the results of these studies and submitted reports to the USPTO. Some of the conclusions concerning the survey were:
• There were no statistically significant differences detected between the main survey and follow-up respondents in their categorical responses to the patent examination questions.
• There are fairly large relative differences in both waves; these differences are not detectable due to the large standard errors of the estimates from the follow-up study. The responses were generally more positive for the follow-up.
• For the average responses, the overall averages were not significantly different.
• There were only a few significant differences by characteristic while controlling the overall false discovery rate using the Benjamini-Hochberg approach; 10% of the differences would be expected to be significant by chance. In Wave 6, only one of the fifteen differences tested (6.7%) was significant: the sample domain for firms with fewer than 150 applications. In Wave 7, two of the fifteen differences tested (13.3%) were significant: agents and other registration numbers (those recently registered). For the Wave 19 and Wave 20 study, after adjusting for multiple comparisons, there were six significant differences, with all but one of those results (newest registered customers) showing that follow-up respondents gave a more favorable response.
Describe any tests of procedures or methods to be undertaken.
To ensure the survey questions are meaningful to respondents and easy to understand, Westat conducted four cognitive interviews with customers identified by the USPTO. These customers are similar to the sampled respondents for the Patents External Quality Survey study. The wording of the survey questions was then revised based on feedback from these customers.
Low response rates have typically been observed in previous customer surveys administered by the USPTO. The Patents External Quality Survey was designed to focus only on key aspects of examination quality to keep the time burden to a minimum and to help response rates.
After the online version of the survey is programmed, Westat will test the web survey internally to ensure respondents’ answers are properly captured and the survey is easy to navigate online. Westat will also ensure that all computer security requirements are met.
Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the Agency unit, contractor(s), or other person(s) who will actually collect and/or analyze the information for the Agency.
The Office of the Chief Patent Statistician of the USPTO is responsible for conducting the Patents External Quality Survey. Robyn Sirkis is the point of contact for this survey and can be reached by phone at 571-270-0935 or by email at [email protected]. The names and telephone numbers of the individuals from Westat who consulted on the statistical aspects of the survey and who are conducting the survey under the direction of the USPTO are:
Senior Study Director
Westat
240-453-2904
William Cecere
Senior Survey Statistician
(301) 294-4477