SUPPORTING STATEMENT
United States Patent and Trademark Office
Patents External Quality Survey
OMB Control Number 0651-0057
December 2015
Universe and Respondent Selection
The respondent pool for this survey consists of the businesses, organizations, and individuals who frequently file patent applications. The USPTO plans to survey large, medium, and small domestic corporations, universities and other non-profit research organizations, and independent inventors. Foreign entities will not be included in the sample frame.
The target population consists of individuals associated with USPTO top filers (e.g., firms at a given address that have filed six or more patent applications in the past year). The sample unit will be the USPTO-registered agents/attorneys associated with the top filers, along with independent inventors who filed six or more patent applications in the past 12 months. The target population typically accounts for over 85% of all patent applications filed in a given fiscal year.
Procedures for Collecting Information
The Patents External Quality Survey will use a longitudinal, rotating panel design. The USPTO has developed a sampling plan which is included in this submission. The sampling plan also contains information about the respondent pool and the response rate.
The sample is drawn from a frame of USPTO customers, all of whom are either associated with a particular firm or are considered independent. There are six sampling domains for which different sampling rates are used. One of these six sampling domains is identified for each customer on the frame, using counts of the number of applications within each firm in conjunction with a count of agents associated with that firm. Then a sampling rate is computed for each domain.
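As an illustration of this step, the sketch below assigns a frame record to one of six hypothetical domains and derives a per-domain sampling rate. The domain cut-offs, field names, and target sample sizes are assumptions made for illustration only; they are not the survey's actual sampling plan.

```python
# Illustrative sketch only: domain cut-offs, field names, and target counts
# are assumptions, not the actual USPTO sampling plan.
from collections import Counter

def assign_domain(n_applications, n_agents, is_independent):
    """Map a frame record to one of six hypothetical sampling domains."""
    if is_independent:
        return "independent_inventor"
    if n_applications >= 150:
        return "large_firm"
    if n_applications >= 50:
        return "medium_firm"
    if n_agents >= 10:
        return "small_firm_many_agents"
    if n_agents >= 3:
        return "small_firm_few_agents"
    return "small_firm_single_agent"

def sampling_rates(frame, target_per_domain=200):
    """Compute a per-domain sampling rate = target sample size / frame count."""
    counts = Counter(
        assign_domain(r["n_applications"], r["n_agents"], r["is_independent"])
        for r in frame
    )
    return {domain: min(1.0, target_per_domain / n) for domain, n in counts.items()}

# Example frame with two records
frame = [
    {"n_applications": 210, "n_agents": 25, "is_independent": False},
    {"n_applications": 6, "n_agents": 0, "is_independent": True},
]
print(sampling_rates(frame))
```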
The USPTO uses a rotating panel design for the sample: customers are assigned to waves (survey periods) and then to one of two panels within each wave. The second panel from each wave is fielded again in the subsequent wave, alongside a newly drawn panel.
After being selected for two consecutive waves, customers must stay out of the sample for at least 18 months. This 18-month rest period means that the point at which previously sampled customers may rotate back into the sample must be controlled. A further complication is the potential for panel conditioning effects among customers who served in an earlier cycle. Therefore, to reduce the impact of distributional differences between frames, newly sampled cases drawn from old panels are spread evenly across the new panels.
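The rotation bookkeeping can be sketched as follows. The wave spacing, the rest period expressed in waves, and the simplified selection rule are assumptions made only to show the mechanics, not the survey's actual implementation.

```python
# Illustrative rotation sketch: two panels per wave, each panel fielded in two
# consecutive waves, followed by a rest. Wave spacing and the rest length in
# waves are assumptions.
REST_WAVES = 3  # roughly 18 months if waves are fielded twice per fiscal year (assumed)

def eligible(customer, wave):
    """A customer may be redrawn only after sitting out REST_WAVES waves."""
    last = customer.get("last_wave")
    return last is None or (wave - last) > REST_WAVES

def draw_new_panel(frame, wave, size):
    """Draw a fresh panel from eligible frame members (selection rule simplified)."""
    pool = [c for c in frame if eligible(c, wave)]
    return pool[:size]  # a real draw would apply the domain sampling rates

def field_wave(wave, carried_panel, new_panel):
    """Field the panel carried over from the prior wave plus the new panel."""
    fielded = carried_panel + new_panel
    for c in fielded:
        c["last_wave"] = wave
    return fielded

# Usage: panel A is fielded in waves 1 and 2; wave 2 also fields new panel B.
frame = [{"id": i} for i in range(10)]
panel_a = draw_new_panel(frame, wave=1, size=2)
field_wave(1, [], panel_a)
panel_b = draw_new_panel(frame, wave=2, size=2)
field_wave(2, panel_a, panel_b)
```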
A pre-notification letter will first be sent to all potential respondents explaining the purpose of the survey and providing instructions for completing it online. The USPTO’s survey contractor, Westat, will then mail the survey to all sampled respondents. A personalized label will be affixed to the survey packet envelope so that it reaches the specified respondent. The survey packet will contain the paper version of the questionnaire and a cover letter explaining that the USPTO is sponsoring the survey, that all responses will be used only for internal analysis, and that no identifying information will be linked to the results. The cover letter will also contain the username, password, and 5-digit survey ID number. The electronic and paper surveys will mirror each other.
During the follow-up non-response prompting calls, Westat employees will use a script developed in collaboration with the USPTO. A reminder postcard will also be sent to all non-respondents to encourage survey participation.
The survey packet will include the three-page questionnaire and a postage-paid pre-addressed return envelope. The cover letter will be printed on USPTO letterhead and signed by the Commissioner of Patents.
Methods to Maximize Responses
In order to maximize the number of responses received from the survey, the USPTO plans to follow several well-established survey procedures. First, all sampled respondents will receive a pre-notification letter signed by the Commissioner. The letter will explain the importance of the study and encourage respondent cooperation. Next, all sampled respondents will receive the paper survey in the mail. Follow-up contact will be made after the initial survey is sent. One week after the initial survey mailing, all non-respondents will be sent a thank you/reminder postcard in the mail. Two weeks after the initial survey is mailed, we will telephone all of the non-respondents to prompt them to answer either the paper or internet version of the survey. A script has been developed for these phone calls so that everyone conducting these interviews asks the same questions, in the same manner.
Historic response rates for the Patents External Quality Survey are shown below.
Wave Name | Survey Reference Period       | Response Rate (weighted)
FY10-Q1   | October 2009 – December 2009  | 54%
FY10-Q3   | April 2010 – June 2010        | 52%
FY11-Q1   | October 2010 – December 2010  | 48%
FY11-Q3   | April 2011 – June 2011        | 46%
FY12-Q1   | October 2011 – December 2011  | 50%
FY12-Q3   | April 2012 – June 2012        | 54%
FY13-Q1   | October 2012 – December 2012  | 54%
FY13-Q3   | April 2013 – June 2013        | 55%
FY14-Q1   | October 2013 – December 2013  | 53%
FY14-Q3   | April 2014 – June 2014        | 54%
FY15-Q1   | October 2014 – December 2014  | 55%
FY15-Q3   | April 2015 – June 2015        | 47%
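For reference, a weighted response rate is commonly computed as the base-weighted count of respondents divided by the base-weighted count of eligible sampled cases. The short sketch below illustrates that convention with made-up weights; whether it matches the survey's exact definition is an assumption.

```python
# Minimal sketch of one common weighted response rate convention:
# base-weighted respondents over base-weighted eligible sampled cases.
def weighted_response_rate(cases):
    """cases: iterable of (base_weight, responded: bool, eligible: bool)."""
    num = sum(w for w, responded, eligible in cases if eligible and responded)
    den = sum(w for w, _, eligible in cases if eligible)
    return num / den if den else float("nan")

# Hypothetical weights, not survey data
example = [(2.0, True, True), (2.0, False, True), (1.5, True, True), (3.0, False, True)]
print(round(weighted_response_rate(example), 2))  # 0.41 for these made-up weights
```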
In order to determine how non-response bias affected the survey results, Westat conducted non-response follow-up studies during Waves 6 and 7 and during Waves 19 and 20, and provided the USPTO with an analysis of the findings. The objective of each study was to estimate how the non-respondents would have answered the main survey had they responded. The studies were conducted because non-response can bias the survey estimates; the size of that bias depends on both the response rate and the differences between those who responded and those who did not.
As part of the studies, Westat sent a postcard to sampled customers who did not respond to the original survey and who were rotating out of the survey sample. The postcard contained a single question on overall examination quality taken from the original survey; the only difference was that the follow-up version offered one additional answer choice not included in the original. Half of the study sample received a white postcard and the other half a colored postcard, to test whether the colored card would increase response rates.
The follow-up studies compared responses to the overall examination quality question between those in the outgoing panel who answered it in the original survey and those in the outgoing panel who answered the follow-up postcard. The studies assume that follow-up respondents resemble non-respondents to the original survey, so that differences between the two groups indicate a potential non-response bias. The results of the follow-up studies were used to help answer the following questions:
How different are the Wave respondents from the follow-up respondents?
How different are the follow-up respondents from the follow-up non-respondents?
Do the results impact what can be done in weighting to reduce the bias due to non-response?
What is the impact of the colored postcard on the follow-up response rates?
Non-response bias is affected by two factors: the non-response rate and the differences between respondents and non-respondents. While the response rate is known, the differences between those who respond and those who do not are unknown; the follow-up study attempts to measure that difference. For a sample mean, the non-response bias is calculated using the following equation:

\[ B(\bar{y}_r) = (1 - R)\left(\bar{Y}_r - \bar{Y}_{nr}\right) \]

where $R$ is the weighted unit response rate, $\bar{Y}_r$ is the population mean of the respondent stratum, and $\bar{Y}_{nr}$ is the population mean of the non-respondent stratum. While the response rate is universally recognized as a measure of survey quality, the difference between respondents and non-respondents is just as important in determining the non-response bias. Weighting adjustments are used to reduce the non-response bias (although some non-response bias will remain in the survey estimates).
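For illustration, the expression above can be evaluated with hypothetical values; the response rate and stratum means below are made-up numbers, not survey results.

```python
# Worked example of the bias expression with hypothetical inputs:
# a 54% weighted response rate and a 0.3-point gap between respondent and
# non-respondent means on a 1-to-5 scale.
R = 0.54          # weighted unit response rate (hypothetical)
ybar_r = 3.4      # respondent-stratum mean (hypothetical)
ybar_nr = 3.1     # non-respondent-stratum mean (hypothetical)

bias = (1 - R) * (ybar_r - ybar_nr)
print(round(bias, 3))  # 0.138: the respondent mean overstates the population mean
```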
However, for the non-response follow-up sample, the bias can be written as

\[ B(\bar{y}_{fr}) = (1 - R_f)\left(\bar{Y}_{fr} - \bar{Y}_{fnr}\right) \]

where $R_f$ is the weighted response rate to the follow-up, $\bar{Y}_{fr}$ is the population mean of the follow-up respondent stratum, and $\bar{Y}_{fnr}$ is the population mean of the follow-up non-respondent stratum.
A bivariate analysis (response indicator versus each auxiliary variable) compared the distribution of the participating respondents to the distribution of the total eligible sample for several auxiliary variables. Survey base weights were used to account for unequal probabilities of selection, and replicate weights were used to adequately reflect the impact of the sample design on variance estimates. The weights for the follow-up respondents were adjusted to account for non-respondents to both the main survey and the follow-up; this assumes that non-respondents were more similar to the follow-up respondents than to the original survey respondents. Together with the main sample respondents, the weights account for the entire eligible population. Adjustment cells were created using the Search software (WesSearch), following the same approach used in the normal weighting procedure.
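A minimal sketch of the kind of cell-based adjustment described above is given below, assuming adjustment cells have already been formed. The field names and data layout are illustrative assumptions, not the actual WesSearch output or weighting specification.

```python
# Sketch of a cell-based non-response weighting adjustment: respondent base
# weights are ratio-adjusted so that respondents in each cell carry the cell's
# full base-weighted total (respondents plus non-respondents).
from collections import defaultdict

def adjust_weights(cases):
    """cases: dicts with 'cell', 'base_weight', and 'respondent' keys (assumed layout)."""
    totals, resp_totals = defaultdict(float), defaultdict(float)
    for c in cases:
        totals[c["cell"]] += c["base_weight"]
        if c["respondent"]:
            resp_totals[c["cell"]] += c["base_weight"]
    adjusted = []
    for c in cases:
        if c["respondent"]:
            factor = totals[c["cell"]] / resp_totals[c["cell"]]
            adjusted.append({**c, "final_weight": c["base_weight"] * factor})
    return adjusted

# Example with two cells of hypothetical cases
cases = [
    {"cell": "A", "base_weight": 2.0, "respondent": True},
    {"cell": "A", "base_weight": 2.0, "respondent": False},
    {"cell": "B", "base_weight": 1.5, "respondent": True},
]
print(adjust_weights(cases))
```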
To test for statistical differences, the distribution of the patent examination quality question for the wave respondents was compared with the distribution for follow-up respondents, and similarly, within the follow-up study, for the white and colored postcards. For the categorical responses, the hypothesis of independence between the characteristic and participation status was tested using a Rao-Scott modified chi-square statistic at the 10 percent level. The average score of the categorical responses was also computed as a continuous variable, with a larger average score indicating a more favorable response; the difference between means was tested using a t-test. The continuous variables were tested using the Benjamini-Hochberg procedure to control the overall false discovery rate for the family of comparisons.
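As an illustration of the multiple-comparison step only, the sketch below applies the Benjamini-Hochberg adjustment to a family of fifteen hypothetical p-values using statsmodels. The Rao-Scott modified chi-square and the design-based t-tests themselves require replicate-weight software and are not reproduced here.

```python
# Benjamini-Hochberg false discovery rate control over a family of comparisons,
# using hypothetical p-values (not survey results).
from statsmodels.stats.multitest import multipletests

p_values = [0.004, 0.03, 0.08, 0.12, 0.20, 0.25, 0.30, 0.35,
            0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 0.95]  # hypothetical

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.10, method="fdr_bh")
print(list(zip(reject, [round(p, 3) for p in p_adjusted])))
```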
Westat analyzed the results of these studies and submitted reports to the USPTO. Some of the conclusions made concerning the survey were:
There are no statistically significant differences detected between the main survey and follow-up respondents in their categorical responses to the patent examination questions.
There are fairly large relative differences in both waves, but they are not statistically detectable because of the large standard errors of the estimates from the follow-up study. The responses were generally more positive for the follow-up.
For the average responses, the differences in overall averages were not statistically significant.
There are only a few significant differences by characteristic when controlling the overall false discovery rate using the Benjamini-Hochberg approach; 10% of the differences would be expected to be significant by chance. In Wave 6, only one of the fifteen differences tested (6.7%) was significant: the sample domain for firms with fewer than 150 applications. In Wave 7, two of the fifteen differences tested (13.3%) were significant: agents and other registration numbers (those recently registered). For the Wave 19 and Wave 20 study, after adjusting for multiple comparisons, there were six significant differences, with all but one of those results (newest registered customers) indicating that follow-up respondents had a more favorable response.
Testing of Procedures
To ensure the survey questions are meaningful to respondents and easy to understand, Westat conducted four cognitive interviews with customers identified by the USPTO who are similar to the respondents sampled for the Patents External Quality Survey. The wording of the survey questions was then revised based on feedback from these customers.
Low response rates have typically been observed in previous customer surveys administered by the USPTO. The USPTO believes that offering both a paper and a web response option will improve response rates for this effort. The Patents External Quality Survey was designed to focus only on key aspects of examination quality, keeping the time burden to a minimum and supporting response rates.
When sending the mail survey out to sampled customers, we will use the well-established procedures documented by Dillman (2002). After the online version of the survey is programmed, Westat will test the web survey internally to ensure respondents’ answers are properly captured and the survey is easy to navigate online. Westat will also ensure that all computer security requirements are met.
Contact for Statistical Aspects and Data Collection
The Office of Patent Quality Assurance of the USPTO is responsible for conducting the Patents External Quality Survey. Martin Rater is the point of contact for this survey and can be reached by phone at 571-272-5966 or by e-mail at [email protected]. The names and telephone numbers for the individuals from Westat who consulted on the statistical aspects of the survey and who are conducting the survey under the direction of the USPTO are:
Jennifer O’Brien
Senior Study Director
Westat
(301) 251-4272
Shelley Brock-Roth
Senior Survey Statistician
Westat
(301) 517-8042