During Forte’s recent webinar “Top 3 Challenges in Clinical Research and How to Address Them,” Wendy Tate, Director of Data Analytics at Forte, discussed findings from Forte’s recent survey on the state of today’s clinical research industry. She outlined some of the most pressing challenges identified, noting these problems are not isolated to particular roles or organizations, but affect the entire industry. Here, Wendy provides thorough responses to attendee questions on methods for alleviating these problems, including performance metrics, effort tracking and more.
What are your thoughts on ways to address the gap between the expected maximum accrual and the real accrual on site?
During the presentation, we discussed this as a performance metric that can be used to ease the pain associated with clinical trial accrual, as well as to manage finances AND allocate staff resources. By properly estimating proposed clinical trial accrual, you can more accurately estimate the staff effort and physical resources needed to complete the study on the necessary timeline, which allows for proactive management of resources. Knowing those needed resources, you can estimate the funds required to recoup the associated costs (including staff salary), which helps maintain a positive cash flow. Finally, a more realistic estimate of clinical trial accrual means those potential subjects can actually be identified and recruited, helping you meet accrual goals.
When it comes to tools to help predict the expected maximum accrual, clinical research is definitely behind other industries. This happens to be a research interest of mine and a portion of my dissertation. At my previous institution, we developed a systematic method to more accurately predict clinical trial accrual (Tate WR, Cranmer LD. J Natl Compr Canc Netw. 2016 May;14(5):561-9). Another noteworthy paper on predicting clinical trial accrual, from the participant pool perspective, comes from Dr. London (London JW, Balestrucci L, Chatterjee D, Zhan T. J Am Med Inform Assoc. 2013 Dec;20(e2):e260-6. doi: 10.1136/amiajnl-2013-001846. Epub 2013 Jul 14). However, more research should be done in this field.
In my position at Forte, I work with my team to develop meaningful tools to fill this need for solutions and help research organizations make more informed decisions. One of those tools is Forte Insights, a solution to help institutions visualize performance metrics and gain a better understanding of both their current research performance and potential future outcomes.
How do you determine which performance metrics to measure?
Performance metrics should align with your organizational goals. That way, there is incentive to invest in measuring those metrics and purpose in doing so. So, the first step is to identify your program goals. For example, your goal may be to reduce the time it takes to activate a clinical trial.
Once you have a goal, you need to define it. What does the study activation process entail? When does it start and when does it end? Let’s say it starts when the PI (or research team) says they are going to open the protocol and it ends when the protocol is open for accrual. Great, now you have start and end points.
Next, you need to measure those points. This could be done in something as simple as a spreadsheet, or within a clinical trial management system (CTMS) where that data is already collected. The key is to be consistent: measure the same points for all studies so you can see progress.
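As a sketch of what measuring those two points enables, here is a minimal Python example (study names and dates are invented for illustration) that computes the activation time for each study and a simple median you could track over time:

```python
from datetime import date

# Hypothetical records: for each study, the activation start (research team
# commits to opening the protocol) and end (protocol open for accrual) dates.
studies = [
    {"protocol": "STUDY-001", "start": date(2017, 1, 10), "end": date(2017, 4, 20)},
    {"protocol": "STUDY-002", "start": date(2017, 2, 1),  "end": date(2017, 5, 15)},
    {"protocol": "STUDY-003", "start": date(2017, 3, 5),  "end": date(2017, 6, 1)},
]

def activation_days(study):
    """Days from commitment to open-for-accrual for one study."""
    return (study["end"] - study["start"]).days

durations = [activation_days(s) for s in studies]
median_days = sorted(durations)[len(durations) // 2]  # median for odd-length list
print(durations)    # [100, 103, 88]
print(median_days)  # 100
```

Because the same two points are measured for every study, the median is comparable from one reporting period to the next.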
Another common metric to measure is clinical trial accrual (hopefully it increases over time). This doesn’t have a start and end point; however, it is still important to define what accrual means. Is it any person who joins a clinical trial? Only certain clinical trials? When do they “join”? Is it at the time the consent form is signed or when they go on protocol? These are all important to define so you can see if changes in clinical trial accrual exist. Also, when do you collect that accrual? Is it at the beginning of the year? The end of each month? The time must be consistent so you know the accrual amount is for the same period of time (e.g. calendar year to calendar year).
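Once the definition and the collection period are fixed, the tally itself is straightforward. A small Python sketch, using the signed-consent definition of accrual and invented participant IDs and dates, counting accrual per calendar year:

```python
from collections import Counter
from datetime import date

# Hypothetical enrollment events; here "accrual" is defined as the date the
# consent form is signed (one of the possible definitions discussed above).
consents = [
    ("P-01", date(2016, 11, 3)),
    ("P-02", date(2017, 1, 15)),
    ("P-03", date(2017, 6, 30)),
    ("P-04", date(2017, 12, 28)),
]

# Tally accrual per calendar year so each comparison covers the same period.
accrual_by_year = Counter(d.year for _, d in consents)
print(dict(accrual_by_year))  # {2016: 1, 2017: 3}
```

Swapping in a different definition (e.g. on-protocol date) or period (e.g. fiscal year) only changes the inputs, not the comparison, as long as it is applied consistently.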
Another way to determine performance metrics is to consider the biggest issues in your program that you want to correct. The same process applies:
- What is the goal?
- What are the start and end points? If it isn’t a process but a single point (e.g. accrual), then what is that number and when is it collected?
- Where is the data being captured (e.g. a spreadsheet or a CTMS)?
- How is the metric calculated?
What do you need to start effort tracking and how do you ensure you collect the necessary information?
Similar to the performance metrics above, it helps to identify your goal for implementing effort tracking. What information do you want to get out of effort tracking? What are you trying to achieve? Depending on the answers to those questions, you may find you can focus implementation on specific areas, protocols or tasks. Once those goals are identified, determine which people, protocols and tasks go into completing those items. This allows you to implement on a small scale and work out some of the issues: people get used to tracking effort, and modifications can be made to maximize success. Then effort tracking can be implemented more widely, either through additional goals or in a more holistic approach (all effort for entire protocols and/or research teams).
Looking to begin effort tracking at your organization? Download our free eBook for best practices to get you started.
For example, let’s say your goal is to track how much time is spent recruiting and enrolling participants into clinical research. It’s an area that you feel is taking a lot of effort and for which you aren’t being paid enough by sponsors. Perhaps your coordinators report spending a lot of time recruiting, but with high screen fail rates, their work isn’t reflected in the accrual numbers. So, perhaps you implement effort tracking only for tasks related to recruiting and enrolling participants. Not every minute of every day is being reported. Not everyone is even using effort tracking in this example. But you do get 100% of the data you need to see how much time is being spent to recruit and enroll.
Now, how MUCH data do you collect?
I’m an outcomes researcher; I love data! That being said, only collect the level of data you need to answer your question. This helps with buy-in from the people who will be logging the data. If you make it easy for them to log, they understand the reason for the collection, and they see the results of the data collected (and not mission creep), you will get better-quality data (in both consistency and accuracy). So, determine what you need to make the analysis successful. In the example above, I would like to know the staff member’s name (likely important for all effort tracking ventures), the protocol number they are working on, the task they are doing (to see where the time is spent: is it records review or discussing the consent document?), the date of the work and how much time was spent on the task. Five fields total.
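Those five fields can be captured as a simple record. A minimal Python sketch, where the field names, protocol number format and sample values are illustrative rather than a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

# The five fields from the example above; names and formats are assumptions.
@dataclass
class EffortEntry:
    staff_name: str       # who did the work
    protocol_number: str  # which study the work was for
    task: str             # e.g. "records review", "consent discussion"
    work_date: date       # when the work happened
    minutes_spent: int    # how long the task took

entry = EffortEntry("Wendy Tate", "IRB-2017-042", "records review",
                    date(2017, 9, 14), 45)
print(entry.minutes_spent)  # 45
```

Keeping the record this small is the point: every field maps directly to a question the analysis must answer, and nothing more.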
Put some thought into your field types.
You will need to analyze this data, and the more systematic you make data collection, the less data cleaning you will need to do on the back end. The more thought you put in up front, the faster the analysis will go. This is where an effort tracking system (e.g. the type you see within a CTMS) is very helpful, as much of this data formatting is done for you. What is the format for the protocol number? Is it the IRB number, the sponsor protocol number or the NCT ID? Defining that up front will keep you from having to figure out whether you have data on one protocol or three. Additional considerations include:
- What are the tasks within recruitment/screening that you want to track?
- What if you set up the list and there are other tasks; how are those categorized (e.g. through an “other” option or using free text that you analyze later)?
- What about the time spent on a task? Do you expect staff to log down to the exact minute or round to 5-, 10- or 15-minute increments?
- What about date format? That seems simple, but it can quickly confuse an analysis. (Same with staff name. I could put my name in as Wendy Tate, Tate Wendy, WT, and so on.)
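To illustrate why those formats matter, here is a simplistic Python sketch that normalizes two of the messier fields before analysis. The accepted name and date formats are assumptions chosen for illustration; a real implementation would cover your organization’s actual conventions:

```python
from datetime import datetime

def normalize_name(raw):
    """Collapse variants like 'Wendy Tate' and 'Tate, Wendy' into one
    canonical 'Last, First' form (a simplistic illustration)."""
    raw = raw.strip()
    if "," in raw:
        last, first = [p.strip() for p in raw.split(",", 1)]
    else:
        parts = raw.split()
        first, last = parts[0], parts[-1]
    return f"{last.title()}, {first.title()}"

def normalize_date(raw):
    """Accept a few common date formats and emit mm/dd/yyyy."""
    for fmt in ("%m/%d/%Y", "%Y-%m-%d", "%d %b %Y"):
        try:
            return datetime.strptime(raw, fmt).strftime("%m/%d/%Y")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {raw!r}")

print(normalize_name("wendy tate"))  # Tate, Wendy
print(normalize_date("2017-09-14"))  # 09/14/2017
```

Even a crude normalizer like this shows the trade-off: every format you allow at entry time is cleaning work you must do at analysis time.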
Setting up a data dictionary is essential, especially if you are not using a system that does this for you (e.g. a CTMS). A data dictionary is a document that defines the field name, gives the definition/description of the field, clarifies the format of the field (e.g. a date in the format of mm/dd/yyyy), and provides any other information that will help your user easily fill out the information and your analyst know what to expect from the data.
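A data dictionary can also be made executable as a set of per-field checks. A minimal Python sketch, with field names, formats and the sample row all invented for illustration:

```python
import re
from datetime import datetime

def _valid_date(v):
    """True if v matches the mm/dd/yyyy format from the dictionary."""
    try:
        datetime.strptime(v, "%m/%d/%Y")
        return True
    except ValueError:
        return False

# Field name -> (description, validator). Names and formats are assumptions.
DATA_DICTIONARY = {
    "staff_name":      ("Last, First as listed in HR records",
                        lambda v: "," in v),
    "protocol_number": ("IRB number, format IRB-YYYY-NNN",
                        lambda v: re.fullmatch(r"IRB-\d{4}-\d{3}", v) is not None),
    "work_date":       ("Date of work, mm/dd/yyyy",
                        lambda v: _valid_date(v)),
    "minutes_spent":   ("Whole minutes, rounded to the nearest 5",
                        lambda v: v.isdigit() and int(v) % 5 == 0),
}

def validate_row(row):
    """Return the names of fields that fail their data-dictionary check."""
    return [name for name, (_, check) in DATA_DICTIONARY.items()
            if not check(row.get(name, ""))]

row = {"staff_name": "Tate, Wendy", "protocol_number": "IRB-2017-042",
       "work_date": "09/14/2017", "minutes_spent": "45"}
print(validate_row(row))  # [] means the row conforms to the dictionary
```

Running checks like these at entry time catches format problems while the person who logged the effort can still fix them, rather than months later during analysis.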
What also needs to be considered is how often data is collected. How often do you expect staff to enter data?
- As the action happens? – Can be inefficient or inconvenient for staff, but is the most accurate.
- Every day? – May not be as accurate, but is still timely.
- Every week? – More convenient, but may not be as detailed at the daily level.
- Every month? – Likely not accurate at the daily level, but can show larger patterns.
How often will you analyze the data and report out on it? All of these expectations should be defined early on to ensure consistency in the analysis as well as data quality.
Want more answers?
For more information on ways to address the industry’s top challenges, watch the free, on-demand recording of Wendy’s recent presentation, or download a copy of the 2017 State of the Clinical Research Industry Report.