CCWIS Design Requirements Self-Assessment Scoring Sheet 2021-04-26.xlsx

Generic Clearance for the Comprehensive Child Welfare Information System (CCWIS) Review and Technical Assistance Process

OMB: 0970-0568


Overview

Method
Category 1
Category 2
Category 3
Category 4
FINAL


Sheet 1: Method

CCWIS DESIGN REVIEW - ASSESSMENT METHODOLOGY AND FINAL RATING CALCULATION

METHOD

Conformance Indicator Assessment Rating
·         None (0)
·         Little Extent (1)
·         Moderate Extent (2)
·         Large Extent (3)

Aggregated conformance indicator scores will be calculated for each Category, and these Category Scores will be used to calculate a Final Rating that represents the automated function’s overall level of conformance with CCWIS design requirements. The calculation methodology is presented below. Once the Final Rating is calculated, the level of conformance is determined against the following scale:
Final Rating Scale
·         Unsatisfactory (< 50%)
·         Needs Work (50%-71%)
·         Satisfactory (72%-80%)
·         Exemplary (> 80%)

A Final Rating below 72% indicates an unacceptable level of conformance and may require that the project take corrective measures to achieve conformance. Should the project not address the level of conformance, ACF may designate the agency’s child welfare information system implementation as a non-CCWIS.

The assessment method incorporates ACF-assigned weights, corresponding to relative priority, for each conformance indicator in Categories 1-4. Note that a conformance indicator with a weight of zero is considered not applicable and is excluded from the final conformance rating calculation:

ACF-Defined Priority Factor for Each Conformance Indicator
·         Not applicable/not available (0)
·         Low (1)
·         Medium (2)
·         High (3)

ACF also assigned weights, corresponding to relative priority, to each Category:
ACF-Defined Priority Factor for Each Category
·         Category 1: 1355.53(a)(1) Modular Design Requirements – 30%
·         Category 2: 1355.53(a)(2) Plain Language Requirements – 15%
·         Category 3: 1355.53(a)(3) Development Standards Requirements – 25%
·         Category 4: 1355.53(a)(4) Share, Leverage, Reuse Requirements – 30%

CALCULATION

Step 1 – Calculate the Category Scores – Multiply each conformance indicator assessment rating in the category by its assigned weight to obtain the indicator’s weighted assessment score. Sum the weighted assessment scores; this is the category’s Total Assessment Score. Next, calculate the Maximum Possible Score: multiply each assessed indicator’s assigned weight by 3 (the highest assessment rating possible) and sum the results. Divide the Total Assessment Score by the Maximum Possible Score to obtain the Category Score:
Category Score = Total Assessment Score / Maximum Possible Score

Repeat Step 1 for each category.

Step 2 – Calculate the Weighted Category Scores – Multiply each Category Score by its ACF-defined priority factor to obtain its Weighted Category Score:
Weighted Category 1 Score = Category 1 Score x 0.30
Weighted Category 2 Score = Category 2 Score x 0.15
Weighted Category 3 Score = Category 3 Score x 0.25
Weighted Category 4 Score = Category 4 Score x 0.30

Step 3 – Calculate the Final Rating – The Final Rating Score, the sum of all four Weighted Category Scores, is the automated function’s total percent level of conformance to design requirements. The Final Rating Score is measured against the Final Rating Scale to determine whether the automated function complies with CCWIS design requirements.
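For illustration, the three steps can be expressed as a short script. The following is a minimal sketch in Python, assuming each category's indicator data is supplied as simple (assigned weight, assessment) pairs; the function and variable names are illustrative and are not part of the official workbook.

# Minimal sketch of the Method calculation (Steps 1-3), assuming each
# category's indicators are given as (assigned_weight, assessment) pairs,
# with assessments rated 0-3. All names here are hypothetical.

CATEGORY_PRIORITY = {1: 0.30, 2: 0.15, 3: 0.25, 4: 0.30}  # ACF-defined factors

def category_score(indicators):
    # Step 1: Total Assessment Score / Maximum Possible Score.
    # Indicators with an assigned weight of zero are N/A and excluded.
    scored = [(w, a) for w, a in indicators if w > 0]
    total = sum(w * a for w, a in scored)        # weighted assessment scores
    maximum = sum(w * 3 for w, _ in scored)      # 3 = highest rating possible
    return total / maximum if maximum else 0.0

def final_rating(categories):
    # Steps 2 and 3: weight each Category Score by its priority factor, then sum.
    return sum(category_score(indicators) * CATEGORY_PRIORITY[number]
               for number, indicators in categories.items())

def rating_label(score):
    # Measure the Final Rating Score against the Final Rating Scale.
    pct = score * 100
    if pct > 80:
        return "Exemplary"
    if pct >= 72:
        return "Satisfactory"
    if pct >= 50:
        return "Needs Work"
    return "Unsatisfactory"

# Example with hypothetical indicator data for the four categories:
cats = {1: [(2, 3), (1, 2)], 2: [(3, 3)], 3: [(2, 2)], 4: [(1, 1)]}
score = final_rating(cats)
print(f"Final Rating Score: {score:.2%} ({rating_label(score)})")  # 68.33% (Needs Work)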

Sheet 2: Category 1

ITEM # | Subcategory | Conformance Indicators for 1355.53(a)(1) Modular Design; Category 1 Weight = 30% | Notes to Reviewers | Assigned Weight | Assessment Guidelines | Assessment (0-3) | Assessment Score (Weight x Assessment) | Maximum Possible Score (Weight x 3) | Observations During Review | Agency Comments | CB Response
C1-1 Architectural Pattern The CCWIS or automated function institutes an architectural pattern that incorporates an 'n-tier' layered design or other structured topology specifying architecture components with clear roles, responsibilities, and relationships. A traditional "n-tier" layered architecture is a reasonable architectural pattern for states to use in designing their application, but it is not the only one. Other architectural topologies are also divided into architectural components with different responsibilities but use different terminology. A microservices architecture, for example, would have UI, API, and Service Component layers (and the Service Components themselves might have multiple layers). See the sketch following this item. 2 0 = unstructured; no architectural pattern
1 = inconsistently or insufficiently structured architecture
2 = somewhat consistently and sufficiently structured architecture
3 = well-structured architecture with at least three distinct layers (presentation, business logic, and data access) or architectural components

0 6
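To make the layered separation concrete, here is a minimal hypothetical sketch (Python) of three distinct layers, each with a clear role and knowledge only of the layer beneath it; the class and method names are illustrative, not drawn from any actual CCWIS.

# Hypothetical sketch of a three-layer ("n-tier") separation.
# Each layer has one responsibility and depends only on the layer below it.

class CaseRepository:
    # Data access layer: retrieves and stores records.
    def find(self, case_id):
        ...  # e.g., query the CCWIS data store

class CaseService:
    # Business logic layer: applies rules; knows nothing of UI or storage details.
    def __init__(self, repo):
        self.repo = repo

    def close_case(self, case_id):
        case = self.repo.find(case_id)
        ...  # apply case-closure rules here

class CaseController:
    # Presentation layer: translates user requests into service calls.
    def __init__(self, service):
        self.service = service

    def handle_close_request(self, case_id):
        self.service.close_case(case_id)

The same division maps onto other topologies; in a microservices architecture, for instance, the controller role is played by the UI/API layers and the service and repository roles by the Service Components.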


C1-2 Business Rules The CCWIS business rules are separated from the core programming. In the context of the overall architectural topology, business rules should be segregated into a separate layer. 2 0 = business rules not at all separated from programming logic
1 = little separation of business rules from programming logic
2 = some separation of business rules from programming logic
3 = complete separation and independent management of business rules

0 6


C1-3 Rules Engine The agency uses a business rules engine to define the business rules for the CCWIS automated functions. Implementing a rules engine facilitates separation of business rules from core programming and facilitates management of rules. This is sometimes done with a domain-specific language (DSL), a COTS rules engine, or custom programming against a collection of rules (see the sketch following this item). 1 0 = business rules not defined as a collection or otherwise managed distinct from programming logic
1 = little explicit identification and evaluation of business rule collections to determine behavior
2 = some explicit identification and evaluation of business rule collections to determine behavior
3 = integration of business rules engine component/product and associated API, DSL, or GUI

0 3
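As a minimal illustration of rules managed as a collection distinct from programming logic, the sketch below evaluates hypothetical rule definitions with a small custom engine; the rules shown are invented examples, not actual CCWIS requirements.

# Minimal sketch: business rules held as data, separate from core
# programming, and evaluated by a small rule engine.

RULES = [
    {"name": "monthly_visit_required",
     "applies": lambda case: case["status"] == "open",
     "check": lambda case: case["monthly_visits"] >= 1},
]

def failed_rules(case, rules=RULES):
    # Return the names of the applicable rules the case does not satisfy.
    return [rule["name"] for rule in rules
            if rule["applies"](case) and not rule["check"](case)]

print(failed_rules({"status": "open", "monthly_visits": 0}))
# -> ['monthly_visit_required']

Because the rules live in a data collection, they can be reviewed, added, or changed without touching the evaluation code, which is the separation indicators C1-2 and C1-3 look for.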


C1-4 Testing A set of unit tests is present to verify implementation of business rules. Comprehensive test coverage does not mean that all methods need to be tested; in fact, most plans aim for about 80% coverage, and setting the target too high can make code refactoring difficult. (A sketch of a unit test follows this item.)

Testability is also a function of the chosen architecture (e.g., some patterns are inherently easier to test because other layers can be mocked or stubbed)
1 0 = no indication of unit testing
1 = inconsistent or insufficient unit test coverage
2 = less than 70% unit test coverage
3 = 70% or greater unit test coverage

0 3
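A minimal sketch of such a unit test follows, using Python's standard unittest module against a hypothetical rule function; in practice, coverage against a threshold like the one above is usually measured with a tool such as coverage.py.

# Minimal sketch of a unit test verifying a business rule.
# The rule function is a hypothetical example.

import unittest

def visit_rule_satisfied(case):
    # Hypothetical rule: an open case needs at least one monthly visit.
    return case["monthly_visits"] >= 1

class VisitRuleTest(unittest.TestCase):
    def test_rule_passes_with_a_visit(self):
        self.assertTrue(visit_rule_satisfied({"monthly_visits": 1}))

    def test_rule_fails_without_visits(self):
        self.assertFalse(visit_rule_satisfied({"monthly_visits": 0}))

if __name__ == "__main__":
    unittest.main()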


C1-5 Coupling The automated function has been designed with clear boundaries. Having clear boundaries for the automated function better defines the function itself and makes it easier to sever, replace, independently configure, and share. 3 0 = automated function is not distinctly separated, lacks clear responsibilities, and requires an understanding of other modules
1 = few clear or consistent boundaries between the automated function and other modules
2 = somewhat clear and consistent boundaries between the automated function and other modules
3 = automated function is distinct, with clear responsibilities; no knowledge of other modules is required

0 9


C1-6 Coupling The automated function does not require other automated functions to perform its tasks. An automated function will often be designed to function within the context of an overall CCWIS architecture. It may also have additional external dependencies, but should generally function independently of other automated functions. 1 0 = automated functions cannot function independently, and dependencies not specified
1 = significant dependency on other modules, and dependencies not clearly identified
2 = little dependency on other modules
3 = autonomous, independent automated function, with explicit external dependencies

0 3


C1-7 Coupling The automated function efficiently communicates with other automated functions within the CCWIS.
1 0 = unstructured and unmanaged communication interfaces
1 = some inconsistent or inefficient communication paths
2 = mostly consistent and efficient communication paths
3 = clear and effective interfaces between components

0 3


C1-8 Coupling Identified automated function is easily severable from CCWIS.
3 0 = no clear means of severing automated function
1 = significant parts of automated function may not be severable
2 = some parts of automated function may not be severable
3 = clear means of severing automated function

0 9


C1-9 Cohesion The identified automated function reflects a discrete, easily defined purpose that does not significantly overlap with any other automated function within the CCWIS. There is a trade-off between modularity and duplication; limited overlap of functionality may be justified in some circumstances to reduce coupling and dependencies. 2 0 = automated function lacks a well-defined purpose or incorporates substantial unrelated, disparate, or duplicative functionality
1 = automated function includes significant unrelated or overlapping functionality
2 = automated function includes some unrelated or overlapping functionality
3 = automated function has a clear purpose and set of unique functions to support that purpose

0 6


C1-10 Cohesion The automated function’s functionality is designed to meet the needs of a business function performed by the agency.
1 0 = automated function doesn't address a business need
1 = automated function not well aligned to business needs
2 = automated function mostly aligned to business needs
3 = automated function clearly addresses business needs

0 3


C1-11 Cohesion Members from the agency (and their business partners) who perform the business function supported by the automated function were given an opportunity to participate in designing the automated function.
1 0 = design occurred without agency and business partner input
1 = agency and business partner users had limited impact on the design
2 = agency and business partner users had significant impact on the design
3 = agency and business partner users were actively engaged in the design

0 3


C1-12 Computer Generated The agency uses automated tools to generate code in the CCWIS.
1 0 = no automated code generation
1 = limited automated code generation
2 = some code generation associated with templates and frameworks
3 = extensive code generation associated with well-established, standardized frameworks

0 3





Totals: Assigned Weight = 19; Total Assessment Score = 0; Maximum Possible Score = 57; Category Score = 0.00 (Total Assessment Score / Maximum Possible Score)

Sheet 3: Category 2

ITEM # | Subcategory | Conformance Indicators for 1355.53(a)(2) Plain Language; Category 2 Weight = 15% | Assigned Weight | Assessment Guidelines | Assessment (0-3) | Assessment Score (Weight x Assessment) | Maximum Possible Score (Weight x 3) | Observations During Review | Agency Comments | CB Response
C2-1 Plain Writing Agency staff write with a clear understanding of the audience, explain why the audience needs the document, and write so that staff at all levels can understand it. (Know your audience) 3 0 = audience not well understood or topic not written for relevant audiences
1 = some audiences not addressed
2 = most audiences identified and effectively addressed
3 = all audiences identified and effectively addressed

0 9


C2-2 Plain Writing The document is organized to provide clear and concise points. (Organize your thoughts) 2 0 = document not clearly organized
1 = many parts of document need additional organization
2 = a few parts of document need additional organization
3 = document well organized; thoughts clearly and concisely communicated

0 6


C2-3 Plain Writing Documentation uses formatting, headings, lists, tables and other visual cues to create a structure that enables easier location of information and better engagement of readers. (Summarize main points) 2 0 = document not clearly structured; ineffective use of headings and other visual cues
1 = many parts of document need restructuring for effective use of headings and other visual cues
2 = a few parts of document need restructuring for effective use of headings and other visual cues
3 = document well-structured using headings and other visual cues

0 6


C2-4 Plain Writing Documentation is composed of concise sentences. Documentation provides an initial context for the ideas that will be discussed and incorporates definitions into the text. Paragraphs are simple, with one topic sentence and one idea developed throughout the paragraph. (Write short sentences and paragraphs) 1 0 = document sentences and paragraphs poorly constructed
1 = a majority of sentences and paragraphs need editing for effective communication of ideas
2 = some sentences and paragraphs need editing for effective communication of ideas
3 = document sentences and paragraphs constructed effectively for clear and concise communication of ideas

0 3


C2-5 Plain Writing Documentation speaks to the audience (at all levels of expertise) and does not use extraneous words. (Use everyday phrases and words)
1 0 = document is overly verbose and uses uncommon language
1 = document needs significant editing to eliminate verbosity
2 = document needs some editing to eliminate verbosity
3 = document is concise and uses common words and phrases

0 3


C2-6 Plain Writing Documentation does not include, or limits the use of, technical jargon; does not use abbreviations; and explains acronyms. (Do not include, or limit, technical jargon) 2 0 = document includes extensive abbreviations, unexplained acronyms, and technical jargon
1 = document needs significant editing to remove jargon and abbreviations, and explain acronyms
2 = document needs some editing to remove jargon and abbreviations, and explain acronyms
3 = document is free of abbreviations, jargon, and unexplained acronyms

0 6


C2-7 Plain Writing Documentation is composed with strong subjects and verbs, it uses active voice where possible and keeps the sentence structure simple. (Use strong subjects and verbs) 1 0 = document sentences have unclear subjects and verbs, use passive voice, and complex structures
1 = many sentences need editing for sentence structure
2 = some sentences need editing for sentence structure
3 = document sentences have clear subjects and verbs, use active voice, and simple structures

0 3


C2-8 Plain Writing Documentation defines uncommon terms in the body of the text as well as within a glossary. (Define uncommon terms) 2 0 = document includes no glossary and uncommon terms are used without defining them within the text
1 = document needs many uncommon terms defined within the text and added to a glossary
2 = document needs some uncommon terms defined within the text and added to a glossary
3 = document defines uncommon terms both within the text and in a glossary

0 6


C2-9 Plain Writing Documentation is free of grammatical errors. (Proofread and edit) 2 0 = document has extensive grammatical issues
1 = document requires significant proofreading and editing for grammatical issues
2 = document requires some proofreading and editing for grammatical issues
3 = document is free of grammatical errors

0 6





Totals: Assigned Weight = 16; Total Assessment Score = 0; Maximum Possible Score = 48; Category Score = 0.00 (Total Assessment Score / Maximum Possible Score)

Sheet 4: Category 3

ITEM # | Subcategory | Conformance Indicators for 1355.53(a)(3) Design Standards; Category 3 Weight = 25% | Assigned Weight | Assessment Guidelines | Assessment (0-3) | Assessment Score (Weight x Assessment) | Maximum Possible Score (Weight x 3) | Observations During Review | Agency Comments | CB Response
C3-1 Used and Adhered to The agency developed and conducted a process for evaluating adherence to design and development standards. 3 0 = no developed process for evaluating adherence to standards
1 = partially developed process for evaluating adherence to standards
2 = assessed adherence to standards based on partially developed evaluation process
3 = assessed adherence to standards based on mature evaluation process

0 9


C3-2 Used and Adhered to The agency acquired or leveraged autonomous quality management (QM) or independent verification and validation (IV&V) services to monitor the project during development. 1 0 = no QM or IV&V services acquired or leveraged
3 = QM or IV&V services acquired or leveraged

0 3


C3-3 Used and Adhered to The agency adheres to its design and development standards for the period under review. 2 0 = no evaluation, or little to no adherence to standards
1 = inconsistent or low adherence to standards
2 = moderate adherence to standards
3 = high adherence to standards

0 6


C3-4 Used and Adhered to The agency trains staff on what standards are used and where they can be found. 3 0 = no evidence of standards training
1 = inconsistent or incomplete standards training
2 = mostly consistent and effective standards training
3 = highly consistent and effective standards training

0 9


C3-5 Used and Adhered to The agency performs code reviews to determine the quality of the code produced. 2 0 = no code reviews
1 = irregular or inconsistent code review process
2 = standardized code review process
3 = mature, standardized, and fully-integrated code review process

0 6


C3-6 Used and Adhered to The agency confirms adherence to design and development standards during internal project and code reviews. 3 0 = no reviews, or reviews do not evaluate adherence to standards
1 = reviews rarely or irregularly evaluate adherence to standards
2 = reviews often evaluate adherence to standards
3 = reviews consistently and effectively evaluate adherence to standards

0 9


C3-7 Documentation The agency maintains written documentation of the software design and development standards used for automated functions designed for the CCWIS. 3 0 = no documentation of standards
1 = some limited or incomplete documentation of standards
2 = significant documentation of standards
3 = comprehensive documentation of standards

0 9


C3-8 Documentation Data sharing agreements are based on agency data exchange standards. 0 0 = no explicit data sharing agreements, or data sharing agreements not based on agency data exchange standards
1 = data sharing agreements inconsistent with data exchange standards
2 = data sharing agreements mostly consistent with data exchange standards
3 = data sharing agreements fully consistent with data exchange standards

0 N/A


C3-9 Documentation Standards used for automated functions are based on state, tribal, and/or industry-defined standards. 2 0 = automated function standards are not established or are not based on state, tribal, and/or industry standards
1 = automated function standards inconsistently or partially based on state, tribal, and/or industry standards
2 = automated function standards mostly based on state, tribal, and/or industry standards
3 = automated function standards derived and mapped to referenced state, tribal, and/or industry standards

0 6


C3-10 Documentation The agency maintains written documentation of the standards on commercial-off-the-shelf (COTS), or software-as-a-service (SaaS) automated functions, if applicable. 0 0 = no standards maintained for applicable COTS or SaaS components
1 = standards inconsistently maintained, or maintained for few applicable COTS or SaaS components
2 = standards maintained for most applicable COTS or SaaS components
3 = standards consistently maintained for all applicable COTS or SaaS components


0 N/A


C3-11 Efficient/ Economical/ Effective The automated function functions as designed. 3 0 = functionality not at all consistent with documented design
1 = some functionality not consistent with documented design
2 = most functionality consistent with documented design
3 = functionality fully consistent with documented design

0 9





Totals: Assigned Weight = 22; Total Assessment Score = 0; Maximum Possible Score = 66; Category Score = 0.00 (Total Assessment Score / Maximum Possible Score)

Sheet 5: Category 4

ITEM # | Subcategory | Conformance Indicators for 1355.53(a)(4) Shared, Leveraged, and Reused; Category 4 Weight = 30% | Notes to Reviewers | Assigned Weight | Assessment Guidelines | Assessment (0-3) | Assessment Score (Weight x Assessment) | Maximum Possible Score (Weight x 3) | Observations During Review | Agency Comments | CB Response
C4-1 Share Automated function is easily identifiable via a unique name that does not conflict with an existing project and does not infringe on trademarks.
3 0 = automated function not clearly identified
1 = name of automated function clearly identified
2 = name of automated function clearly identified, but may conflict with or be confused with that of another project
3 = automated function clearly and uniquely identified

0 9


C4-2 Share The source, contributor, and points-of-contact for the identified automated function are clearly specified. This information might be included in documentation or provided as reference data in C-SWAP. This indicator is considered N/A until procedures are established for C-SWAP. 0 0 = source, contributors, or POCs not identified
1 = limited source, contributors, or POCs identified
2 = most source, contributors, or POCs identified
3 = source, contributors, and POCs fully identified

0 N/A


C4-3 Share Product status, version information and release notes for the automated function are provided.
1 0 = status, version, and release notes not specified
1 = limited status, version, and release information provided
2 = most status, version, and release information provided
3 = status, version, and release notes fully and clearly specified

0 3


C4-4 Share Automated function licensing information is provided. Typically included as a text file along with the code; not required for public domain code. Libraries typically include a license file as well. For example, GNU Lesser General Public License (LGPL) information is commonly included with open source code. 1 0 = no licensing information provided
1 = limited licensing information provided
2 = licensing information provided for most components
3 = licensing information provided for all components

0 3


C4-5 Share A product README file and links to more comprehensive documentation for the automated function are provided. (A README file is usually a simple plain text file that contains information about other files in a directory or archive of computer software.) The README should provide an overview of the automated function's purpose, architecture, design, system requirements, installation, and configuration. Links to artifacts such as system design documentation, user guides, administration manuals, roadmaps, and API documentation may be included. (A hypothetical README outline follows this item.) 3 0 = no README or equivalent information included with automated function code
1 = limited README information included with automated function code
2 = README information and linked information included with automated function code
3 = automated function includes effective README with links to comprehensive documentation

0 9
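A hypothetical README outline covering the elements named above; the section names are illustrative only.

README (outline, illustrative)
  1. Overview - purpose of the automated function
  2. Architecture and design - summary, with links to system design documentation
  3. System requirements
  4. Installation
  5. Configuration - required and recommended settings
  6. Further documentation - user guides, administration manuals, roadmap, API reference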


C4-6 Share Automated function is accompanied by information describing the process and plans for maintaining, updating, and ending support for code. Typically included in roadmap or similar documentation. 3 0 = no planning information is included with automated function code
1 = limited planning information provided with automated function code
2 = significant planning information provided with automated function code
3 = comprehensive planning information provided with automated function code

0 9


C4-7 Share An issue queue is available to view and track progress on known bugs, enhancement requests, and other issues. This indicator is considered N/A until procedures are established for C-SWAP. 0 0 = no means of viewing or tracking issues
1 = limited issue information provided
2 = significant issue tracking capability available
3 = detailed issue tracking system available for automated function

0 N/A


C4-8 Share Communication channels and feedback mechanisms are available to allow automated function recipients to query maintainers and get answers to questions. This indicator is considered N/A until procedures are established for C-SWAP. 0 0 = no communication channels or feedback mechanisms available
1 = limited communication channels or feedback mechanisms (e.g., published email address) available
2 = multiple communication channels or feedback mechanisms available
3 = multiple, clear, and effective communication channels and feedback mechanisms

0 N/A


C4-9 Share Identified automated function includes features that may be enabled, disabled, configured, or removed.
3 0 = no clear means of enabling, disabling, configuring, or removing features
1 = limited ability to enable, disable, configure, or remove features
2 = ability to enable, disable, configure, or remove many features
3 = extensive ability to enable, disable, configure, or remove features

0 9


C4-10 Share Identified automated function is accompanied by evidence, such as test plans and results, of comprehensive testing.  This conformance indicator ensures that those receiving the shared automated function can ascertain the degree to which it is well-tested, and can understand where existing problems remain. 1 0 = no evidence of testing
1 = limited test information available
2 = significant test information available
3 = extensive test information available, including evidence of comprehensive test coverage

0 3


C4-11 Leverage Automated function is accompanied by comprehensive documentation on features and functionality. Typically included in the form of end-user documentation, reference guides, and administration manuals. 2 0 = no information on features and functionality is provided
1 = limited information on features and functionality is provided
2 = significant information on features and functionality is provided
3 = comprehensive information on features and functionality is provided

0 6


C4-12 Leverage Automated function is accompanied by reports describing the results of performed vulnerability testing.
1 0 = no evidence of vulnerability testing
1 = limited vulnerability-testing information available
2 = significant vulnerability-testing information available
3 = extensive, detailed vulnerability test information available

0 3


C4-13 Leverage Automated function is assessed against relevant security and privacy controls such as the National Institute of Standards and Technology Special Publication 800-53 (NIST SP 800-53). Adjusted since SP 800-53 is a federal rather than a state requirement. 1 0 = no evidence of controls assessment
1 = limited evidence of controls assessment
2 = significant controls assessment information available
3 = evidence of comprehensive security and privacy controls assessment

0 3


C4-14 Leverage Automated function is accompanied by a software installation plan (SIP) or other documentation detailing system requirements and installation procedures.
3 0 = no SIP or similar documentation
1 = limited installation information provided
2 = significant installation information provided
3 = comprehensive SIP or similar documentation included

0 9


C4-15 Leverage Automated function is accompanied by documentation detailing required and recommended configuration information.
3 0 = no configuration information provided
1 = little configuration information provided
2 = significant configuration information provided
3 = comprehensive configuration documentation included

0 9


C4-16 Leverage Available documentation details external interfaces and integration points to allow system integrators to incorporate and leverage the automated function.
3 0 = no external interface information provided
1 = limited information on external interfaces provided
2 = significant information on external interfaces provided
3 = comprehensive documentation of external interfaces and integration points included

0 9


C4-17 Leverage Automated function is accompanied by an administration manual or procedures to facilitate effective system administration.
3 0 = no administration procedures provided
1 = little administration information provided
2 = significant administration information provided
3 = comprehensive administration information included

0 9


C4-18 Reuse Automated function is architected to leverage established software frameworks and established, industry-standard underlying design patterns. May include application frameworks based on high-level design patterns (e.g., MVC and MVVM). 2 0 = no use of established software frameworks; no clear design patterns, or use of antipatterns
1 = limited use of frameworks and design patterns
2 = significant use of frameworks and design patterns
3 = effective and appropriate use of established frameworks and design patterns

0 6





Totals: Assigned Weight = 33; Total Assessment Score = 0; Maximum Possible Score = 99; Category Score = 0.00 (Total Assessment Score / Maximum Possible Score)

Sheet 6: FINAL

FINAL RATING




Category | Category Score (Column 1) | ACF-Defined Priority Factor for Each Category (Column 2) | Calculation (Column 1 x Column 2)
1 | 0.00 | 0.30 | 0.00
2 | 0.00 | 0.15 | 0.00
3 | 0.00 | 0.25 | 0.00
4 | 0.00 | 0.30 | 0.00

Final Rating Score:

0.00 or 0%

Final Rating Scale:

·         Unsatisfactory (< 50%)
·         Needs Work (50%-71%)
·         Satisfactory (72%-80%)
·         Exemplary (> 80%)



