[Q&A] Risk-based Validation Considerations When Implementing Clinical Research Systems

By Susan Nonemaker-Cox | Senior Associate, Essex Management
August 25th, 2016

Summary: Susan Nonemaker-Cox of Essex Management answers attendee questions following her recent webinar on risk-based validation considerations.

Estimated reading time: 2 minutes

As a follow-up to her recent webinar, “Risk-based Validation Considerations When Implementing Clinical Research Systems,” Essex Management’s Senior Associate, Susan Nonemaker-Cox, provides answers to attendee questions.


If a sponsor purchases an off-the-shelf system, such as an EDC, and validates it, does it also need to be certified by a third party?


Typically, there should be a set of procedures for reaching a validation or quality conclusion prior to implementation, as well as on an ongoing basis (e.g., periodic reviews, internal/external audits). Requiring that a system be “certified” by a third party might apply if the sponsor has a policy or procedure in place that sets this expectation. A sponsor might choose to have independent reviews of the system performed by an in-house group, or it might engage an independent, external third-party organization. Additional layers of quality assurance (e.g., independent review and assessment) provide greater confidence in the overall validation effort and in the validated state of the system, but the sponsor determines whether or not third-party certification is a required activity.


Is the risk-based validation approach only applicable to sponsors, or can it also be implemented in a hospital setting?


The risk-based validation approach can be applied to sponsor organizations, clinical research sites, and medical institutions. It is ideal to start with an initial validation strategy planning session that documents the scope of the validation (and the related systems and assumptions), the regulatory compliance determination and risk analysis, who is involved, and what artifacts will be developed during the effort.


During the presentation, you mentioned a risk-based approach to 21 CFR Part 11. Are there specific types of data that should be compliant with this regulation, and would all systems used for clinical research need to be GCP-compliant?


Some system implementations are more complex than others (e.g., upstream and/or downstream systems providing or transforming data, or multiple modules included in a single “system”). Depending on the size and complexity of the system and its data, there might be a need for a Master Validation Plan (a “validation wrapper” document) plus individual validation plans/protocols detailed for each system or module. By making the GxP determination and performing the risk assessment up front, you can break down the specific scope of the system(s) and data. There might also be a business rationale for applying stricter validation controls to more heavily regulated data, depending on how the system is set up. This initial validation planning and assessment phase is critical because it determines (and documents) the basic assumptions and parameters for the validation. The resulting findings and recommendations can be captured in stand-alone documents and then summarized and referenced in the Master Validation Plan or Validation Plan(s). With this documented up front, you can focus stricter documentation and validation controls on the areas of the system or data that require more attention.

About the Author

Susan has over 20 years of experience in clinical and research IT. She has managed international Electronic Data Capture (EDC) and Clinical Trial Management System (CTMS) implementations in regulated environments, and she led the Computer Systems Validation practice for a New York-based management and IT consulting firm. She has directed projects across a wide variety of domains, including global business technology management, metrics and process reporting, software testing and release management, and computerized system validation. Susan serves as Essex’s CTMS Domain Lead, heads Essex’s Validation Practice, and acts as a Program Manager for the National Cancer Institute. She holds a Bachelor’s degree in Speech Communication with a concentration in Marketing from Shippensburg University.

Website: Essex Management



