[Recap] Metrics Workshop – Empowering Sites With Data

By Laura Hilty | Vice President, Product Management & Operations, Nimblify, Inc.
September 25th, 2014


Summary: The unique Metrics Workshop portion of the Operations Forum was filled with great discussions and takeaways on the power of data in site-sponsor partnerships.

Estimated reading time: 4 minutes

 

Metrics Workshop

Attendees participated in breakout discussions during the Metrics Workshop portion of the Operations Forum.

During the Clinical Research Operations Forum a few weeks ago, Forte facilitated a half-day Metrics Workshop centered on the rapidly advancing, data-driven improvement of site performance. The workshop brought together about 60 people eager to share both the triumphs and the woes of discovering and using metrics at their organizations. Participants represented academic organizations, cancer centers, and a handful of dedicated research sites and community hospitals from across the country. In the spirit of collaboration, attendees joined a discussion that covered everything from how metrics have positively affected their operations and which metrics are most meaningful to them, to how to use the data effectively both internally and externally, and where to even begin with performance metrics.

(Related article: Beginner’s Guide to Clinical Trial Performance Metrics)

Throughout the workshop, plenty of people seemed to have 'aha!' moments, and a pattern of empowerment emerged.

One of those moments came in realizing the strengths that academic organizations bring to industry trials. Though academic sites are notorious for IRB timelines that take longer than those of community sites (and they are reminded of this often), the activities they complete in parallel typically make up for that time, making overall study start-up cycle times actually comparable.
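To make that reasoning concrete, here is a minimal sketch of the arithmetic, assuming made-up durations and the simplification that start-up activities run fully in parallel; none of these numbers come from the workshop.

```python
# Hypothetical illustration of why parallel start-up activities can offset a
# longer IRB review. All durations (in days) are invented for this sketch.
academic_site = {"IRB review": 60, "contract/budget negotiation": 55, "regulatory documents": 40}
community_site = {"IRB review": 30, "contract/budget negotiation": 55, "regulatory documents": 40}

# When activities run in parallel, overall start-up time is driven by the longest one.
for name, activities in [("Academic site", academic_site), ("Community site", community_site)]:
    longest = max(activities, key=activities.get)
    print(f"{name}: start-up of roughly {activities[longest]} days, limited by {longest}")
```

Under these assumptions the academic site's longer IRB review only adds a few days to overall start-up, because other activities were the bottleneck at the community site.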

A second strength is the quality of their conduct, both what they produce and how they treat their patients, which is second to none. They are reliable. The idea of showing this kind of information to sponsors and CROs, in the form of process maps and data that demonstrate their quality, seemed brand new and resonated with several people. Many saw it as a major takeaway they could begin incorporating right away into conversations with sponsors during the site selection phase.

(Related article: Pinpoint and Market Your Competitive Advantages with Metrics)

Another moment of revelation was recognizing that a site can actually set the tone for the partnership between the site and the sponsor. Understandably, sponsors and CROs have high standards for the sites they bring onto a trial, and they track site performance against specific goals. Sponsors calculate metrics from activities throughout the lifecycle of the protocol, and those metrics are likely summarized and reported within their organizations. Sites are typically aware when things are delayed, but there is a common frustration: they know they are being 'dinged' by the sponsor for delays that the sponsor or CRO itself caused. Attendees shared several examples of unresponsiveness toward sites and of delays the sites could not control.

When digging into this further, some sites shared that they also set expectations for CROs and sponsors. One example is the expected turnaround time for contracts and budgets going back and forth; they set this expectation upfront, and some even write it directly into their contracts. Sites that weren't doing this realized they could track just one more piece of information to easily identify when something was delayed by the CRO or sponsor. Going forward, they will be able to use that information to proactively follow up with the sponsor, clearly stating the expected date of delivery based on the agreed-upon turnaround time. Some shared that if a CRO is truly non-responsive, forwarding that follow-up e-mail chain to the sponsor actually helps the sponsor see how dedicated the site is to the success of the trial. Conversations like this should be handled carefully, though, and only when the site is fulfilling its own commitments. And of course, sites have a stronger case when they have good data to back them up.
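As a concrete illustration of that follow-up workflow, here is a minimal sketch of how a site might track agreed-upon turnaround times and flag items a sponsor or CRO has not returned on schedule. The item names, dates, and 30-day turnaround are illustrative assumptions, not anything prescribed by the workshop or by any particular system.

```python
from datetime import date, timedelta

# Hypothetical example: track agreed turnaround times for contract/budget
# exchanges and flag items the sponsor/CRO has not returned on time.
AGREED_TURNAROUND_DAYS = 30  # assumed contractual turnaround time

open_items = [
    {"item": "Budget redline v2", "sent_to_sponsor": date(2014, 8, 1), "returned": None},
    {"item": "Contract draft v3", "sent_to_sponsor": date(2014, 8, 20), "returned": date(2014, 9, 5)},
]

def expected_return_date(sent_on, turnaround_days=AGREED_TURNAROUND_DAYS):
    """Expected date of delivery based on the agreed-upon turnaround time."""
    return sent_on + timedelta(days=turnaround_days)

today = date(2014, 9, 25)
for item in open_items:
    due = expected_return_date(item["sent_to_sponsor"])
    if item["returned"] is None and today > due:
        days_late = (today - due).days
        print(f"Follow up on '{item['item']}': expected back {due}, now {days_late} days overdue.")
```

Even a simple log like this gives the site an objective basis for the proactive follow-up message described above.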

(Related article: Top 5 Metrics Your Site Should be Using Today)

All in all, two themes became clear to me during these conversations. First is the power of data. Second is the power of using it for good. The 'aha!' moments that empowered sites came, first, from having good data to back them up; without that data, the power is not real. Second, using the data for good is crucial. Instead of waiting around for sponsors, sites can take control and set the tone for the relationship, and data offers an objective way to make that conversation easier.

The partnership between sponsors and sites is exactly that: a partnership, not a power struggle or just a report card. I truly believe that both sites and sponsors have the best of intentions, and generally people just want to do their jobs well. Using data to inform the conversation and to build an efficient, effective partnership that works well over the long term is, in my mind, the ultimate use of metrics.

Those who attended the workshop are on their way to defining what their side of the partnership can look like. To support metrics initiatives at their organizations, many are using the automated metrics and tools in Nimblify Site Benchmarks, which is free and open to all sites.

In Site Benchmarks, the Site Insights Dashboard lets sites contribute their operational data to visualize their performance, compare their metrics against anonymized, aggregated data from peer organizations, and pinpoint strengths and areas for improvement. Sites can use these visuals as data-driven evidence to show sponsors and CROs how they have performed on similar trials in the past.
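As a rough illustration of the kind of comparison the dashboard supports, here is a minimal sketch, with invented numbers, of benchmarking one site metric against an aggregated peer median; Site Benchmarks itself presents this through its visual dashboard rather than code.

```python
from statistics import median

# Hypothetical illustration: compare a site's study start-up cycle time against
# an anonymized, aggregated peer benchmark. All values are invented for this sketch.
peer_cycle_times_days = [62, 75, 81, 90, 98, 104, 118]  # aggregated peer data (illustrative)
our_cycle_time_days = 84

peer_median = median(peer_cycle_times_days)
delta = our_cycle_time_days - peer_median
if delta < 0:
    summary = f"{abs(delta)} days faster than"
elif delta > 0:
    summary = f"{delta} days slower than"
else:
    summary = "equal to"
print(f"Study start-up: {our_cycle_time_days} days, {summary} the peer median of {peer_median} days.")
```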

Are you a site that’s interested in Site Benchmarks? Take a quick tour to see how it works.

As of December 2016, the Research Resonance Network (RRN) has officially been renamed Nimblify Site Benchmarks. Read the press release to learn more.

About the Author

Laura Hilty is the VP of Product Management and Operations for Nimblify. With a passion for creating more efficient processes throughout clinical research, Laura's work is focused on leading efforts to create efficiencies for all stakeholders (participants, sites, sponsors, and CROs) in the clinical research ecosystem. Prior to Nimblify, Laura spent six years at Forte Research Systems, first implementing systems at large academic medical centers and cancer centers, and then launching several new products, including Allegro CTMS, Overture EDC, Research Resonance Network, and more.

Website: Nimblify, Inc.


